
Tech News - Microservices

7 Articles
Red Hat announces the general availability of Red Hat OpenShift Service Mesh

Amrata Joshi
27 Aug 2019
3 min read
Last week, the team at Red Hat, a provider of enterprise open source solutions, announced the general availability of Red Hat OpenShift Service Mesh for connecting, managing, observing, and simplifying service-to-service communication of Kubernetes applications on Red Hat OpenShift 4. OpenShift Service Mesh is based on the Istio, Kiali, and Jaeger projects and is designed to deliver an end-to-end developer experience around microservices-based application architectures. It manages the network connections between containerized applications and eases the complex task of implementing bespoke networking services for applications and business logic.

Larry Carvalho, research director, IDC, said in a statement to Business Wire, “Service mesh is the next big area of disruption for containers in the enterprise because of the complexity and scale of managing interactions with interconnected microservices. Developers seeking to leverage Service Mesh to accelerate refactoring applications using microservices will find Red Hat’s experience in hybrid cloud and Kubernetes a reliable partner with the Service Mesh solution.”

Developers can now improve the implementation of microservice architectures by natively integrating a service mesh into the OpenShift Kubernetes platform. OpenShift Service Mesh improves traffic management and includes service observability and visualization of the mesh topology.

Ashesh Badani, Red Hat's senior VP of Cloud Platforms, said in a statement, "The addition of Red Hat OpenShift Service Mesh allows us to further enable developers to be more productive on the industry's most comprehensive enterprise Kubernetes platform by helping to remove the burdens of network connectivity and management from their jobs and allowing them to focus on building the next-generation of business applications."

Features of Red Hat OpenShift Service Mesh

Tracing
OpenShift Service Mesh features tracing backed by Jaeger, an open, distributed tracing system. Tracing helps developers track a request between services and provides insight into the request process from start to finish.

Visualization and observability
The Service Mesh also provides an easier way to view its topology and observe how the services interact. Visualization helps in understanding how the services are managed and how traffic flows in near-real time, which makes management and troubleshooting easier.

Service Mesh installation and configuration
OpenShift Service Mesh features "one-click" installation and configuration via a Service Mesh Operator and the Operator Lifecycle Management framework, so developers can deploy applications into a service mesh more easily. The Service Mesh Operator deploys Istio, Jaeger, and Kiali together, minimizing management burdens and automating tasks such as installation, service maintenance, and lifecycle management.

Developed with open projects
OpenShift Service Mesh is developed with open projects and is built in collaboration with leading members of the Kubernetes community.

Increases developer productivity
The Service Mesh integrates communication policies without changes to application code and without integrating language-specific libraries.

To know more about Red Hat OpenShift Service Mesh, check out the official website.
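To give a sense of what the Operator-driven, "one-click" installation looks like in practice, here is a minimal sketch of the two custom resources involved, based on the upstream Maistra project from which OpenShift Service Mesh is built. The exact field names and the my-app namespace are illustrative assumptions; Red Hat's documentation has the authoritative schema.

apiVersion: maistra.io/v1
kind: ServiceMeshControlPlane    # one resource deploys Istio, Kiali, and Jaeger together
metadata:
  name: basic-install
  namespace: istio-system
spec:
  istio:
    tracing:
      enabled: true              # Jaeger-backed distributed tracing
    kiali:
      enabled: true              # mesh topology visualization
---
apiVersion: maistra.io/v1
kind: ServiceMeshMemberRoll      # opts application namespaces into the mesh
metadata:
  name: default
  namespace: istio-system
spec:
  members:
    - my-app                     # hypothetical application namespace

Once the Operator reconciles these resources, workloads in the member namespaces join the mesh without any change to application code, which is the productivity point the announcement stresses.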
Red Hat joins the RISC-V foundation as a Silver level member
Red Hat releases OpenShift 4 with adaptability, Enterprise Kubernetes and more!
Red Hat rebrands logo after 20 years; drops Shadowman

Amazon announces the public preview of AWS App Mesh, a service mesh for microservices on AWS

Amrata Joshi
29 Nov 2018
3 min read
Yesterday, at AWS re:Invent, Amazon introduced AWS App Mesh, a service mesh for easily controlling and monitoring communication across microservices on AWS. App Mesh standardizes how microservices communicate and gives users end-to-end visibility. It can be used with Amazon ECS and Amazon EKS to run containerized microservices.

Previously, it was difficult to pinpoint the exact location of errors as the number of microservices within an application grew. To solve this problem, one had to build monitoring and control logic directly into the code and redeploy the microservices. AWS App Mesh addresses this by providing visibility and network traffic controls for every microservice in an application, removing the need to update application code. With App Mesh, the logic for monitoring and controlling communication between microservices is implemented as a proxy that runs alongside each microservice rather than being built into the microservice's code, and App Mesh automatically sends configuration information to each proxy. The major advantage of placing a proxy in front of every microservice is that metrics, logs, and traces between the services are captured automatically.

Key features of AWS App Mesh

Identifies issues with microservices
App Mesh captures metrics, logs, and traces from every microservice and exports this data to multiple AWS and third-party tools, including AWS X-Ray and Amazon CloudWatch, for monitoring and control. This helps in identifying and isolating issues with any microservice in order to optimize the application.

Configures the traffic flow
With App Mesh, one can easily implement custom traffic routing rules to ensure that every microservice is highly available during deployments and after failures. AWS App Mesh deploys and configures a proxy that manages all communication traffic to and from the containers, removing the need to configure the microservice's communication protocols, write custom code, or implement libraries to operate applications.

Works with existing microservices
App Mesh can be used with existing or new microservices running on Amazon ECS, AWS Fargate, Amazon EKS, and self-managed Kubernetes on AWS. App Mesh monitors and controls communication for microservices running across orchestration systems and clusters.

Uses the Envoy proxy for monitoring
App Mesh uses the open source Envoy proxy, which works with a wide range of AWS partner and open source tools for monitoring microservices. Envoy is a self-contained process designed to run alongside every application server.

To know more about this news, check out Amazon's official blog post.
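To make the traffic-routing feature concrete, here is a hedged sketch of an App Mesh HTTP route that shifts a fraction of traffic to a new version of a service. The structure follows the spec accepted by App Mesh's CreateRoute API (shown as YAML for readability; the aws CLI takes the equivalent JSON), and the mesh, router, and virtual node names are hypothetical.

meshName: my-mesh
virtualRouterName: service-b-router
routeName: service-b-route
spec:
  httpRoute:
    match:
      prefix: /              # match all requests sent to the router
    action:
      weightedTargets:       # canary-style split across versions
        - virtualNode: service-b-v1
          weight: 9
        - virtualNode: service-b-v2
          weight: 1

Because the Envoy proxy sits in front of every microservice, a routing change like this takes effect without rebuilding or redeploying the services themselves.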
Amazon re:Invent announces Amazon DynamoDB Transactions, CloudWatch Logs Insights and cloud security conference, Amazon re:Inforce 2019
Amazon launches AWS Ground Station, a fully managed service for controlling satellite communications, downlink and processing satellite data
3 announcements about Amazon S3 from re:Invent 2018: Intelligent-Tiering, Object Lock, and Batch Operations

Express Gateway v1.13.0 releases; drops support for Node 6

Sugandha Lahoti
18 Oct 2018
2 min read
Express Gateway v1.13.0 was released yesterday. Express Gateway is a simple, agnostic, organic, and portable microservices API gateway built on Express.js. Release 1.13.0 drops support for Node 6.

What's new in this version?

Changes
The development Dockerfile has been updated to better leverage caching: the COPY statements now sit at the very bottom so that all earlier layers can be cached, and WORKDIR creates the working directory automatically, so developers no longer need to create it manually. The automated deployment process has been updated to ship an up-to-date README with the official Helm chart. The policy file is now exposed as a set of functions instead of a class, since it does not really hold any state and is not extended anywhere; the current policy moves from a singleton class to an object that exports three functions, which might help people get started hacking on Express Gateway. All dependencies were updated ahead of this minor release.

Fixes
Several changes follow the Winston 3.0.0 migration: a better default log level (info) that avoids using console.log in production code, updated references in the code to use verbose to hide statements that do not matter, colored log context to differentiate between timestamp, context, level, and message, and deprecation of functions that aren't used anywhere but were harming overall test coverage. It is also now possible to provide a raw regular expression to Express Gateway's CORS policy, allowing the cors origin configuration to take regular expressions as values.

Read more about the release on GitHub.
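As an illustration of the new CORS capability, here is a minimal sketch of a gateway.config.yml pipeline that proxies an API while restricting allowed origins with a regular expression. The overall layout follows Express Gateway's documented configuration format, but the exact syntax for regex origins in v1.13.0, as well as the endpoint names, are assumptions; check the release notes for the precise form.

http:
  port: 8080
apiEndpoints:
  api:
    host: '*'
serviceEndpoints:
  backend:
    url: http://localhost:3000        # hypothetical upstream service
policies:
  - cors
  - proxy
pipelines:
  default:
    apiEndpoints:
      - api
    policies:
      - cors:
          - action:
              # allow any subdomain of example.com via a regex origin
              origin: 'https?://([a-z0-9-]+\.)*example\.com'
      - proxy:
          - action:
              serviceEndpoint: backend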
Welcome Express Gateway 1.11.0, a microservices API Gateway on Express.js
API Gateway and its need
Deploying Node.js apps on Google App Engine is now easy

NGINX Hybrid Application Delivery Controller Platform improves API management, manages microservices and much more!

Melisha Dsouza
15 Oct 2018
3 min read
“Technology is front and center in every business strategy, and enterprises of all sizes and in all industries must embrace digital to attract, retain, and enrich customers.”
- Gus Robertson, CEO, NGINX

At NGINX Conf 2018, the NGINX team announced enhancements to its Application Platform that will serve as a common framework across monolithic and microservices-based applications. The upgrade comes with three new releases: NGINX Plus, NGINX Controller, and NGINX Unit. They have been engineered to provide a built-in service mesh for managing microservices and an integrated application programming interface (API) management platform, while maintaining traditional load balancing capabilities and a web application firewall (WAF).

An application delivery controller (ADC) is used to improve the performance of web applications. The ADC acts as a mediator between web and application servers and their clients: it transfers requests and responses between them while enhancing performance through processes like load balancing, caching, compression, and offloading of SSL processing. The main aim of re-architecting NGINX's platform and launching these updates was to provide a more comprehensive approach to integrating load balancing, service mesh technologies, and API management, leveraging the modular architecture of the NGINX Controller.

Here is a gist of the three new NGINX product releases:

#1 NGINX Controller 2.0
This is an upgrade to NGINX Controller 1.0, which launched in June 2018 with centralized management, monitoring, and analytics for NGINX Plus load balancers. NGINX Controller 2.0 brings advanced NGINX Plus configuration, including version control, diffing, reverting, and more. It also includes an all-new API Management Module, which manages NGINX Plus as an API gateway; a Service Mesh Module is planned for a future release.

#2 NGINX Plus R16
R16 comes with dynamic clustering, including clustered state sharing and key-value stores for global rate limiting and DDoS mitigation. It also adds load balancing algorithms for Kubernetes and microservices, enhanced UDP for VoIP and VDI, and AWS PrivateLink integration.

#3 NGINX Unit 1.4
This release improves security and language support, adding support for TLS and adding JavaScript with Node.js to extend the existing Go, Perl, PHP, Python, and Ruby language support.

Enterprises can now use the NGINX Application Platform to function as a Dynamic Application Gateway and a Dynamic Application Infrastructure. NGINX Plus and NGINX are used by popular, high-traffic sites such as Dropbox, Netflix, and Zynga; more than 319 million websites worldwide rely on the NGINX Plus and NGINX application delivery platforms.

To know more about this announcement, head over to DevOps.com.

Kong 1.0 launches: the only open source API platform specifically built for microservices, cloud, and serverless
OpenFaaS releases full support for stateless microservices in OpenFaaS 0.9.0
Getting started with F# for .Net Core application development [Tutorial]

OpenFaaS releases full support for stateless microservices in OpenFaaS 0.9.0

Melisha Dsouza
07 Sep 2018
4 min read
OpenFaaS announced on September 5, 2018 that it has released support for stateless microservices in OpenFaaS 0.9.0, asserting that managing FaaS functions and microservices will now be easier. A stateless microservice can be deployed as if it were a FaaS function and managed by a FaaS framework or platform such as OpenFaaS. Hence, no special routes, flags, or filters are needed in the OpenFaaS CLI, Gateway API, or UI.

[Image source: OpenFaaS]

The upgrade came as a follow-up to two requests from the microservices community. First, a user at Wireline.io raised a feature request to enhance the HTTP route functionality of functions, so that functions could be written to run on both AWS Lambda and OpenFaaS without any additional changes. Then came a request from the CEO of GitLab, Sid Sijbrandij, who wanted to learn more about serverless and how it could benefit GitLab. He asked whether OpenFaaS could be used to manage both FaaS functions and the microservices his team was more familiar with (e.g. Sinatra apps), and he wanted to know more about scaling to zero when idle.

To address these requests, the OpenFaaS blog walks through deploying a Ruby and Sinatra guestbook, backed by MySQL, to OpenFaaS with Kubernetes. This is how the task is done:

Users start off by creating the Sinatra stateless microservice. They create a hello-world service by supplying their own Dockerfile and executing the following commands, replacing alexellis2 with their own Docker Hub account or another Docker registry:

$ mkdir -p sinatra-for-openfaas/ \
  && cd sinatra-for-openfaas/
$ faas-cli new --prefix=alexellis2 --lang dockerfile frank-says

This is followed by creating a Gemfile and the main.rb file, ./frank-says/main.rb:

require 'sinatra'

set :port, 8080
set :bind, '0.0.0.0'

open('/tmp/.lock', 'w') { |f|
  f.puts "Service started"
}

get '/' do
  'Frank has entered the building'
end

get '/logout' do
  'Frank has left the building'
end

Things to note about OpenFaaS workloads while doing this:

Bind to TCP port 8080.
Write a file /tmp/.lock when ready to receive traffic.

The Dockerfile adds a non-root user, adds the Ruby source and Gemfile, then installs the Sinatra gem. Finally, it adds a healthcheck on a 5-second interval and sets the start-up command.

Users can now deploy the example using the OpenFaaS CLI. First, log in with Docker account details:

$ docker login

Then run the up command, which is an alias for build, push, and deploy:

$ faas-cli up --yaml frank-says.yml

Deploying: frank-says.
Deployed. 200 OK. URL: http://127.0.0.1:8080/function/frank-says

To deploy the Sinatra guestbook with MySQL, clone the example repository:

$ git clone https://github.com/openfaas-incubator/openfaas-sinatra-guestbook \
  && cd openfaas-sinatra-guestbook

Configure the MySQL database details in ./sql.yml:

$ cp sql.example.yml sql.yml

Finally, deploy the guestbook:

$ faas-cli up

http://127.0.0.1:8080/function/guestbook

The URL given by the command above should be used to access the microservice. Sign the guest book using the UI, then reset the MySQL table at any time by posting to /function/guestbook/reset.

[Image source: OpenFaaS]

The guestbook code stores its state in a MySQL table. A key property of FaaS functions and stateless microservices is that they can be restarted at any time without losing data. For a detailed implementation of the guestbook example, head over to the OpenFaaS blog post.
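The up command reads the function's YAML stack file, which faas-cli new generates alongside the handler directory. For reference, here is a minimal sketch of what frank-says.yml might look like; the layout follows OpenFaaS's stack file format, but this reconstruction (the gateway address and image tag in particular) is an assumption rather than the exact file from the post.

provider:
  name: faas                        # the OpenFaaS provider
  gateway: http://127.0.0.1:8080    # local gateway used in the walkthrough
functions:
  frank-says:
    lang: dockerfile                # build from the supplied Dockerfile
    handler: ./frank-says
    image: alexellis2/frank-says:latest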
How to Enable Zero-Scale?

To enable scaling to zero, simply follow the documentation. Next, users add a label to their stack.yml file to tell OpenFaaS that the function is eligible for zero-scaling:

labels:
  com.openfaas.scale.zero: true

Finally, redeploy the guestbook with faas-cli up. The faas-idler will then scale the function to zero replicas as soon as it is detected as idle. The default idle period is set at 5 minutes and can be configured at deployment time.

OpenFaaS has thus deployed a stateless microservice written in Ruby that scales to zero when idle and back again in time to serve traffic, and it can be managed in exactly the same way as existing OpenFaaS functions. The support for stateless microservices has made it easier for users to manage their microservices. Head over to the OpenFaaS blog for a detailed explanation of deploying a simple hello-world Sinatra service and to gain more insights about the upgrade.
6 Ways to blow up your Microservices!
Google, IBM, RedHat and others launch Istio 1.0 service mesh for microservices
Welcome Express Gateway 1.11.0, a microservices API Gateway on Express.js

Should software be more boring? The "Boring Software" manifesto thinks so

Richard Gall
19 Jun 2018
3 min read
Innovation is a word that seems to have emanated from the tech world and entered mainstream discourse. It's a term that has stuck to contemporary notions of progress and improvement. But are innovation and change really that great? Are we in danger of valorizing novelty at the expense of reliability, security, and functionality? The "Boring Software" Manifesto, published on tqdev.com yesterday (18 June 2018), says yes.

Written by software architect Maurits van der Schee, the "Boring Software" manifesto argues that "as software developers we are tired of the false claims made by evangelists of the latest and greatest technology." Just days after we revealed data on developer attitudes to 'ninjas' and 'rockstars', the manifesto is further evidence of tension within the tech world. The tension is perhaps not so much between 'innovators' and those concerned with ideals of security and reliability, but between those actively selling innovation, speed, and efficiency and those with a more pragmatic approach to software engineering.

Boring software vs. hyped and volatile technologies

Van der Schee's manifesto takes aim at what he calls 'hyped and volatile technologies'. He also appears to suggest that the demands of industry actually conflict with these 'hyped' technologies. Implicit in the piece is the idea that there is a counter-industry of hype and evangelism that undermines how software can best serve industry today. 'In pursuit of "agility and craftsmanship",' van der Schee writes, 'we have found "boring software" to be indispensable.'

The most intriguing part of the manifesto is a set of examples that demonstrate the tension in the software world really clearly. For example: 3-tier applications are tried, tested, and reliable, while microservices are hyped and volatile. Relational databases are 'simple and proven', while NoSQL, in van der Schee's view, is not. Page reloads are likewise proven, whereas SPAs remain hyped.

Unsurprisingly, reaction to the Boring Software manifesto is split. Many people have welcomed the intervention:

https://twitter.com/overstood/status/1008956402050560000

Others, however, were more cautious, arguing that innovation and invention only open up new options:

https://twitter.com/priyaprincess20/status/1008960699677081600

One Twitter user summed up the situation by suggesting the truth is probably somewhere between the two:

https://twitter.com/ardave2002/status/1008984843403833344

This is likely to be a debate without a conclusion. However, the manifesto is a useful intervention in a discussion about how we should build software and what we should value most. What do you think about "boring software"? Is Maurits van der Schee correct? Or do we need to be open to new and emerging technologies and trends, even if they pose new challenges?

Read next

How Gremlin is making chaos engineering accessible [Interview]
Are containers the end of virtual machines?
Technical debt is damaging businesses
Netflix open sources Zuul 2 cloud gateway

Pavan Ramchandani
28 May 2018
2 min read
Netflix announced on its tech blog that its popular cloud gateway, Zuul 2, is now open source. Zuul 2, announced back in 2016, is Netflix's Java-based API gateway that handles all the requests from Netflix's user base. Zuul 2 is the front door, acting as a filter for any request that comes into Netflix's servers: the gateway monitors each request and routes it to the appropriate service to act on it. Zuul, in a way, is responsible for keeping Netflix standing strong and fulfilling your streaming requests.

Netflix is known for open sourcing many of the tools it develops in-house for the community. Zuul 2 is a battle-tested tool, as it has been handling Netflix's massive infrastructure. With its open sourcing, developers now have the option of a more resilient tool for their infrastructure architecture. Netflix promises to keep the security aspects intact for the open source Zuul 2.

Adding to this news, Netflix announced some more features for Zuul 2. Here are the feature additions:

Server protocols: Zuul 2 has full support for HTTP/2 connections. Mutual TLS will also enhance Zuul's operation in secure infrastructure.
Resiliency features: To increase availability, Netflix will be adding a feature called Adaptive Retries that is used at Netflix, along with configurable concurrency limits for protecting origins from getting overloaded and isolating the other origins that run behind Zuul.
Request Passport: This feature enables the Zuul server to track all events that occur for each request, which helps in reasoning about asynchronous requests and improving the availability of your services.
Status Categories: This feature helps you categorize requests by extending the success and failure states beyond the plain HTTP status code.
Request attempts: This tracks all proxy attempts and provides the status of each attempt, which really helps in identifying retries and debugging routing.

Zuul also has enhanced self-service routing, load balancing, anomaly detection, and other primary features that Netflix uses to keep its infrastructure secure and running. Netflix has released several other tools, including Titus (container management), Conductor (microservice orchestration), Hystrix (fault tolerance), and Vizceral (traffic management), among other tools suited to large infrastructures. You can read Netflix's announcement blog to get more insights into the future development of Zuul 2.

What software stack does Netflix use?