Tech News - Cloud Computing

175 Articles

Introducing ‘Quarkus’, a Kubernetes native Java framework for GraalVM & OpenJDK HotSpot

Melisha Dsouza
08 Mar 2019
2 min read
Yesterday, Red Hat announced the launch of ‘Quarkus’, a Kubernetes-native Java framework that offers developers “a unified reactive and imperative programming model” in order to address a wider range of distributed application architectures. The framework uses Java libraries and standards and is tailored for GraalVM and HotSpot. Quarkus has been designed with serverless, microservices, containers, Kubernetes, FaaS, and the cloud in mind, and it provides an effective solution for running Java in these new deployment environments.

Features of Quarkus
- Fast startup, enabling automatic scaling of microservices up and down on containers and Kubernetes, as well as on-the-spot FaaS execution.
- Low memory utilization, to help optimize container density in microservices architecture deployments that require multiple containers.
- A unified imperative and reactive programming model for microservices development.
- A full-stack framework that leverages libraries like Eclipse MicroProfile, JPA/Hibernate, JAX-RS/RESTEasy, Eclipse Vert.x, Netty, and more.
- An extension framework that third-party framework authors can leverage and extend.

Twitter was abuzz with Kubernetes users expressing their excitement at this news, describing Quarkus as a “game changer” in the world of microservices:

https://twitter.com/systemcraftsman/status/1103759828118368258
https://twitter.com/MarcusBiel/status/1103647704494804992
https://twitter.com/lazarotti/status/1103633019183738880

This open source framework is available under the Apache Software License 2.0 or a compatible license. You can head over to the Quarkus website for more information.

Read next:
- Using lambda expressions in Java 11 [Tutorial]
- Bootstrap 5 to replace jQuery with vanilla JavaScript
- Will putting limits on how much JavaScript is loaded by a website help prevent user resource abuse?


Google Cloud security launches three new services for better threat detection and protection in enterprises

Melisha Dsouza
08 Mar 2019
2 min read
This week, Google Cloud Security announced a host of new services to empower customers with advanced security functionality that is easy to deploy and use. These include the Web Risk API, Cloud Armor, and Cloud HSM.

#1 Web Risk API
The Web Risk API has been released in beta to help keep users safe on the web. It includes data on more than a million unsafe URLs; billions of URLs are examined each day to keep this data up to date. Client applications can use a simple API call to check URLs against Google's lists of unsafe web resources. These lists include social engineering sites, deceptive sites, and sites that host malware or unwanted software.

#2 Cloud Armor
Cloud Armor is a Distributed Denial of Service (DDoS) defense and Web Application Firewall (WAF) service for Google Cloud Platform (GCP), based on the technologies used to protect services like Search, Gmail, and YouTube. Cloud Armor is now generally available, offering L3/L4 DDoS defense as well as IP allow/deny capabilities for applications or services behind the Cloud HTTP/S Load Balancer. Users can permit or block incoming traffic based on IP addresses or ranges using allow lists and deny lists, and can customize their defenses and mitigate multivector attacks through Cloud Armor's flexible rules language.

#3 HSM keys to protect data in the cloud
Cloud HSM is now generally available. It allows customers to protect encryption keys and perform cryptographic operations in FIPS 140-2 Level 3 certified HSMs, without the operational overhead of HSM cluster management, scaling, and patching. The Cloud HSM service is fully integrated with Cloud Key Management Service (KMS), allowing users to create and use customer-managed encryption keys (CMEK) that are generated and protected by a FIPS 140-2 Level 3 hardware device.

You can head over to Google Cloud Platform's official blog to know more about these releases.
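To make the Web Risk API lookup concrete, here is a small sketch of building the `uris:search` GET request described above. The endpoint shape follows Google's public REST documentation for the API; the API key is a placeholder, and the helper name is our own.

```python
from urllib.parse import urlencode

# Web Risk `uris:search` REST endpoint (per Google's public docs).
WEBRISK_ENDPOINT = "https://webrisk.googleapis.com/v1/uris:search"

def build_lookup_url(uri, api_key, threat_types=("MALWARE", "SOCIAL_ENGINEERING")):
    """Return the GET URL that checks `uri` against Google's unsafe-URL lists."""
    params = [("key", api_key)]
    # The API accepts the threatTypes parameter repeated once per type.
    params += [("threatTypes", t) for t in threat_types]
    params.append(("uri", uri))
    return WEBRISK_ENDPOINT + "?" + urlencode(params)

# A response with a non-empty "threat" field means the URL is on a list;
# an empty JSON body means no match was found.
print(build_lookup_url("http://testsafebrowsing.appspot.com/s/malware.html", "API_KEY"))
```

Fetching that URL with any HTTP client completes the check; only the request construction is shown here.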
Read next:
- Machine Learning as a Service (MLaaS): How Google Cloud Platform, Microsoft Azure, and AWS are democratizing Artificial Intelligence
- Google's Cloud Robotics platform, to be launched in 2019, will combine the power of AI, robotics and the cloud
- Build Hadoop clusters using Google Cloud Platform [Tutorial]


Alphabet’s Chronicle launches ‘Backstory’ for business network security management

Melisha Dsouza
05 Mar 2019
3 min read
Alphabet’s ‘Chronicle’, launched last year, has announced its first product, ‘Backstory’, at the ongoing RSA 2019. Backstory is a security data platform that stores huge amounts of a business' network data, including information from domain name servers, employee laptops, and phones, in a Chronicle-installed collection of servers on a customer's premises. This data is quickly indexed and organized. According to Forbes, customers can then run searches on the data, like “Are any of my computers sending data to Russian government servers?” Cybersecurity investigators can start asking questions such as: what kind of information is being taken, when, and how? This way of working is similar to Google Photos. Backstory gives security analysts the ability to quickly understand the real vulnerabilities.

According to the Backstory blog, “Backstory is a global security telemetry platform for investigation and threat hunting within your enterprise network. It is a specialized, cloud-native security analytics system, built on the core infrastructure that powers Google itself. Making security analytics instant, easy, and cost-effective.” The company states that the service requires zero customer hardware, maintenance, tuning, or ongoing management, and can support security analytics against the largest customer networks with ease.

Features of Backstory
- Real-time and retroactive instant indicator matching across all logs. For example, if a domain flips from good to bad, Backstory shows all devices that have ever communicated with that domain.
- Prebuilt search results and smart filters designed for security-specific use cases.
- Data displayed in real time to support security investigations and hunts.
- Intelligent analytics to derive insights that support security investigations.
- Automatic handling of petabytes of data.
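The retroactive indicator matching described above can be illustrated with a toy model (an illustrative sketch, not Backstory's actual implementation): index every contact between a device and a domain as it is ingested, so that when a domain is later flagged as malicious, the full contact history is available instantly.

```python
from collections import defaultdict

# Toy model of retroactive indicator matching (illustrative only).
# Index every observed contact: domain -> set of devices.
contacts = defaultdict(set)

def ingest(device, domain):
    """Record that `device` communicated with `domain`."""
    contacts[domain].add(device)

def flag_domain(domain):
    """When a domain flips from good to bad, return every device that
    has EVER communicated with it, not just future hits."""
    return sorted(contacts[domain])

ingest("laptop-17", "update.example.net")
ingest("phone-03", "evil.example.com")
ingest("laptop-17", "evil.example.com")

print(flag_domain("evil.example.com"))  # → ['laptop-17', 'phone-03']
```

The point of the design is that the expensive work (indexing) happens at ingest time, so a retroactive question costs a single lookup rather than a scan over petabytes of logs.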
Chronicle’s CEO Stephen Gillett told CNBC that the pricing model will not be based on volume; licenses will instead be based on the size of the company, not on the size of the customer's data. Backstory also intends to partner with other cybersecurity companies rather than compete with them. Considering that Alphabet already has a history of obtaining sensitive customer information, it will be interesting to see how Backstory operates without following that particular approach. To know more about this news in detail, read Backstory's official blog.

Read next:
- Liz Fong Jones, prominent ex-Googler shares her experience at Google and ‘grave concerns’ for the company
- Google finally ends Forced arbitration for all its employees
- Shareholders sue Alphabet's board members for protecting senior execs accused of sexual harassment


VMware Essential PKS: Use upstream Kubernetes to build a flexible, cost-effective cloud-native platform

Melisha Dsouza
04 Mar 2019
3 min read
Last week, Paul Fazzone, GM of Cloud Native Applications at VMware, announced the launch of VMware Essential PKS “as a modular approach to cloud-native operation”. VMware Essential PKS includes upstream Kubernetes, reference architectures to inform design decisions, and expert support to guide users through upgrades and maintenance and to troubleshoot reactively when needed. Fazzone notes that more than 80% of containers run on virtual machines (VMs), with the percentage growing every year. The launch serves VMware's main objective of establishing itself as the leading enabler of Kubernetes and cloud-native operation.

Features of Essential PKS

#1 Modular approach
Customers who have specific technological requirements for networking, monitoring, storage, etc. can build a more modular architecture on upstream Kubernetes. VMware Essential PKS will give these customers access to upstream Kubernetes with proactive support. The only condition is that these organizations should either have the in-house expertise to work with those components, the intention to grow that capability, or the willingness to use an expert team.

#2 Application portability
Customers will be able to use the latest version of upstream Kubernetes, ensuring that they are never locked into a vendor-specific distribution.

#3 Flexibility
The service allows customers to implement a multi-cloud strategy, choosing tools and clouds as per their preference to build a flexible platform on upstream Kubernetes for their workloads.

#4 Open-source community support
VMware contributes to multiple SIGs and open-source projects that strengthen key technologies and fill gaps in the Kubernetes ecosystem.

#5 Cloud-native ecosystem support and guidance
Customers will be able to access 24x7, SLA-driven support for Kubernetes and key open-source tooling.
VMware experts will partner with customers on architecture design reviews and help them evaluate networking, monitoring, backup, and other solutions to build a production-grade open source Kubernetes platform. The Kubernetes community has received the news with enthusiasm:

https://twitter.com/cmcluck/status/1100506616124719104
https://twitter.com/edhoppitt/status/1100444712794615808

In November, VMware announced at VMworld that it was buying Heptio, whose products work with upstream Kubernetes and help enterprises realize the impact of Kubernetes on their business. According to FierceTelecom, “PKS Essentials takes the Heptio approach of building a more modular, customized architecture for deploying software containers on upstream Kubernetes but with VMware support.”

Read next:
- Rancher Labs announces ‘K3s’: A lightweight distribution of Kubernetes to manage clusters in edge computing environments
- CNCF releases 9 security best practices for Kubernetes, to protect a customer's infrastructure
- Tumblr open sources its Kubernetes tools for better workflow integration


RedHat’s OperatorHub.io makes it easier for Kubernetes developers and admins to find pre-tested ‘Operators’ for applications

Melisha Dsouza
01 Mar 2019
2 min read
Last week, Red Hat launched OperatorHub.io, in collaboration with Microsoft, Google Cloud, and Amazon Web Services, as a “public registry” for finding services backed by Kubernetes Operators. According to the Red Hat blog, the Operator pattern automates infrastructure and application management tasks using Kubernetes as the automation engine. Developers have shown growing interest in Operators owing to features like access to the automation advantages of the public cloud, the portability of services across Kubernetes environments, and much more. Red Hat also notes that while the number of available Operators has increased, it is challenging for developers and Kubernetes administrators to find Operators that meet their quality standards. OperatorHub.io was created to solve this challenge.

Features of OperatorHub.io
- A common registry to “publish and find available Operators”: a curation of Operator-backed services with a base level of documentation, active communities or vendor backing to show maintenance commitments, basic testing, and packaging for optimized life-cycle management on Kubernetes.
- The platform will enable the creation of new Operators as well as the improvement of existing ones.
- A centralized repository that helps users and the community organize around Operators.

Operators can be listed on OperatorHub.io only when they show cluster lifecycle features and packaging that can be maintained through the Operator Framework's Operator Lifecycle Management, along with acceptable documentation for their intended users. Operators currently listed on OperatorHub.io include the Amazon Web Services Operator, Couchbase Autonomous Operator, CrunchyData's PostgreSQL, MongoDB Enterprise Operator, and many more. This news has been received by the Kubernetes community with much enthusiasm.
https://twitter.com/mariusbogoevici/status/1101185896777281536
https://twitter.com/christopherhein/status/1101184265943834624

This is not the first time Red Hat has tried to build on the momentum behind Kubernetes Operators. According to TheNewStack, the company acquired CoreOS last year and went on to release the Operator Framework, an open source toolkit that “provides an SDK, lifecycle management, metering, and monitoring capabilities to support Operators”.

Read next:
- Red Hat announces CodeReady Workspaces, the first Kubernetes-Native IDE for easy collaboration among developers
- RedHat shares what to expect from next week's first-ever DNSSEC root key rollover
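The Operator pattern described at the top of this piece boils down to a reconcile loop: an Operator watches a custom resource describing desired state and drives the cluster's actual state toward it. A minimal, framework-free sketch of that idea (the resource fields and action strings here are illustrative, not taken from any real Operator):

```python
# Illustrative sketch of an Operator's reconcile step (not a real controller).
# `desired` stands in for a custom resource spec; `actual` for observed state.

def reconcile(desired, actual):
    """Compare desired vs. actual state and return the actions to take."""
    actions = []
    if actual.get("replicas", 0) < desired["replicas"]:
        actions.append(f"scale up to {desired['replicas']}")
    elif actual.get("replicas", 0) > desired["replicas"]:
        actions.append(f"scale down to {desired['replicas']}")
    if actual.get("version") != desired["version"]:
        actions.append(f"upgrade to {desired['version']}")
    return actions  # a real Operator would apply these via the Kubernetes API

spec = {"replicas": 3, "version": "5.7"}
state = {"replicas": 1, "version": "5.6"}
print(reconcile(spec, state))  # → ['scale up to 3', 'upgrade to 5.7']
```

A real Operator runs this comparison continuously in response to watch events, which is what lets it automate day-2 tasks (scaling, upgrades, backups) that would otherwise be manual.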


Rancher Labs announces ‘K3s’: A lightweight distribution of Kubernetes to manage clusters in edge computing environments

Melisha Dsouza
27 Feb 2019
3 min read
Yesterday, Rancher Labs announced K3s, a lightweight Kubernetes distribution for running Kubernetes in resource-constrained environments. According to the official blog post, the project was launched to “address the increasing demand for small, easy to manage Kubernetes clusters running on x86, ARM64 and ARMv7 processors in edge computing environments”. Operating Kubernetes at the edge is a complex task. K3s reduces the memory required to run Kubernetes and provides developers with a distribution that requires less than 512 MB of RAM, ideally suited for edge use cases.

Features of K3s

#1 Simplicity of installation
K3s was designed to maximize the simplicity of installation and operation on a large-scale Kubernetes cluster. It is a standards-compliant Kubernetes distribution for “mission-critical, production use cases”.

#2 Zero host dependencies
There is no requirement for an external installer: everything necessary to install Kubernetes on any device is included in a single, 40 MB binary. A single command provisions or upgrades a single-node K3s cluster, and nodes can be added by running a single command on the new node, pointing it at the original server and passing through a secure token.

#3 Automatic certificate and encryption key generation
All of the certificates needed to establish TLS between the Kubernetes masters and nodes, as well as the encryption keys for service accounts, are automatically created when a cluster is launched.

#4 Reduced memory footprint
K3s reduces the memory required to run Kubernetes by removing old and non-essential code and any alpha functionality that is disabled by default. It also removes deprecated features, non-default admission controllers, in-tree cloud providers, and storage drivers; users can add back any drivers they need.
#5 Conservation of RAM
K3s combines the processes that run on a Kubernetes management server into a single process, and likewise combines the kubelet, kube-proxy, and flannel agent processes that run on a worker node into a single process. Both techniques help conserve RAM.

#6 Reduced runtime footprint
Rancher Labs cut down the runtime footprint significantly by using containerd instead of Docker as the container runtime engine. Functionality like libnetwork, swarm, Docker storage drivers, and other plugins has also been removed to achieve this aim.

#7 SQLite as an optional datastore
To provide a lightweight alternative to etcd, Rancher added SQLite as an optional datastore in K3s, because SQLite has “a lower memory footprint, as well as dramatically simplified operations.”

Kelsey Hightower, a Staff Developer Advocate at Google Cloud Platform, commended Rancher Labs for removing features, instead of adding anything, in order to focus on running clusters in low-resource computing environments.

https://twitter.com/kelseyhightower/status/1100565940939436034

Kubernetes users have also welcomed the news with enthusiasm.

https://twitter.com/toszos/status/1100479805106147330
https://twitter.com/ashim_k_saha/status/1100624734121689089

K3s is released with support for x86_64, ARM64, and ARMv7 architectures, so it works across any edge infrastructure. Head over to the K3s page for a quick demo.

Read next:
- Red Hat announces CodeReady Workspaces, the first Kubernetes-Native IDE for easy collaboration among developers
- Introducing Platform9 Managed Kubernetes Service
- CNCF releases 9 security best practices for Kubernetes, to protect a customer's infrastructure

Microsoft @MWC (Mobile World Congress) Day 1: HoloLens 2, Azure-powered Kinect camera and more!

Melisha Dsouza
25 Feb 2019
4 min read
The ongoing Mobile World Congress 2019 in Barcelona has an interesting line-up of announcements, keynote speakers, summits, seminars, and more. It is the largest mobile event in the world, bringing together the latest innovations and leading-edge technology from more than two thousand leading companies. The theme of this year's conference is ‘Intelligent Connectivity’: the combination of flexible, high-speed 5G networks, the Internet of Things (IoT), artificial intelligence (AI), and big data. Microsoft unveiled a host of new products along the same theme on the first day of the conference. Let's have a look at some of them.

#1 Microsoft HoloLens 2 AR announced!
Microsoft unveiled the HoloLens 2 AR device at MWC. This $3,500 AR device is aimed at businesses, not the average consumer, yet. It is designed primarily for situations where field workers need to work hands-free, such as manufacturing workers, industrial designers, and the military. The device is a definite upgrade from Microsoft's very first HoloLens, which recognized basic tap and click gestures: the new headset recognizes 21 points of articulation per hand and accounts for improved, more realistic hand motions. The device is less bulky, and its eye tracking can measure eye movement and use it to interact with virtual objects. It is built to be a cloud- and edge-connected device, and its field of view more than doubles the area covered by HoloLens 1. Microsoft said it plans to announce a follow-up to HoloLens 2 in the next year or two; according to Microsoft, that device will be even more comfortable and easier to use, and will do more than the HoloLens 2. HoloLens 2 is available for preorder and will ship later this year. The device has already found itself in the midst of a controversy after the US Army invested $480 million in more than 100,000 headsets.
The contract has stirred dissent amongst Microsoft workers.

#2 Azure-powered Kinect camera for enterprise
The Azure-powered Kinect camera is an “intelligent edge device that doesn't just see and hear but understands the people, the environment, the objects, and their actions,” according to Azure VP Julia White. This AI-powered smart enterprise camera leverages Microsoft's 3D imaging technology and could serve as a companion hardware piece for HoloLens in the enterprise. The system has a 1-megapixel depth camera, a 12-megapixel camera, and a seven-microphone array on board to help it work with “a range of compute types, and leverage Microsoft's Azure solutions to collect that data.” The system, priced at $399, is available for pre-order.

#3 Azure Spatial Anchors
Azure Spatial Anchors launched as part of the Azure mixed reality services, which help developers and businesses build cross-platform, contextual, enterprise-grade mixed reality applications. According to the Azure blog, these mixed reality apps can map, designate, and recall precise points of interest that are accessible across HoloLens, iOS, and Android devices. Developers can integrate their solutions with IoT services and artificial intelligence, and protect their sensitive data using security from Azure. Users can easily infuse AI and integrate IoT services to visualize data from IoT sensors as holograms. Spatial Anchors allow users to map their space and connect points of interest “to create wayfinding experiences, and place shareable, location-based holograms without any need for environmental setup or QR codes”. Users will also be able to manage identity, storage, security, and analytics with pre-built cloud integrations to accelerate their mixed reality projects.
#4 Unreal Engine 4 support for Microsoft HoloLens 2
During MWC, Epic Games founder and CEO Tim Sweeney announced that support for Microsoft HoloLens 2 will come to Unreal Engine 4 in May 2019, with streaming and native platform integration. Sweeney says that “AR is the platform of the future for work and entertainment, and Epic will continue to champion all efforts to advance open platforms for the hardware and software that will power our daily lives.” Unreal Engine 4 support for HoloLens 2 will allow for “photorealistic” 3D in AR apps.

Head over to Microsoft's official blog for an in-depth look at all the products released.

Read next:
- Unreal Engine 4.22 update: support added for Microsoft's DirectX Raytracing (DXR)
- Microsoft acquires Citus Data with plans to create a ‘Best Postgres Experience’
- Microsoft joins the OpenChain Project to help define standards for open source software compliance


Google to acquire cloud data migration start-up ‘Alooma’

Melisha Dsouza
20 Feb 2019
2 min read
On Tuesday, Google announced its plans to acquire cloud migration company Alooma, which helps other companies move their data from multiple sources into a single data warehouse. Alooma not only provides services to help with migrating to the cloud, but also helps in cleaning up this data and then using it for artificial intelligence and machine learning use cases.

Google Cloud's blog states that “The addition of Alooma, subject to closing conditions, is a natural fit that allows us to offer customers a streamlined, automated migration experience to Google Cloud, and give them access to our full range of database services, from managed open source database offerings to solutions like Cloud Spanner and Cloud Bigtable”.

The financial details of the deal haven't been released yet. In early 2016, Alooma raised about $15 million, including an $11.2 million Series A round led by Lightspeed Venture Partners and Sequoia Capital. Alooma's blog states that “Joining Google Cloud will bring us one step closer to delivering a full self-service database migration experience bolstered by the power of their cloud technology, including analytics, security, AI, and machine learning”.

In a statement to TechCrunch, Google said: “Regarding supporting competitors, yes, the existing Alooma product will continue to support other cloud providers. We will only be accepting new customers that are migrating data to Google Cloud Platform, but existing customers will continue to have access to other cloud providers.” This means that, after the deal closes, Alooma will not accept any new customers who want to migrate data to competitors such as Amazon's AWS or Microsoft's Azure. Those who use Alooma in combination with AWS, Azure, and other non-Google services will likely start looking for other solutions.
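The extract-clean-load pattern that Alooma automates can be sketched in a few lines (a toy illustration only; Alooma's real connectors, API, and schema handling are not shown, and all names here are made up):

```python
# Toy extract-clean-load pipeline in the spirit of what a migration
# service automates (illustrative only, not Alooma's API).

def extract(sources):
    """Pull rows from several heterogeneous sources, tagging their origin."""
    for name, rows in sources.items():
        for row in rows:
            yield {**row, "_source": name}

def clean(row):
    """Normalize inconsistent field names and drop incomplete records."""
    email = (row.get("email") or row.get("Email") or "").strip().lower()
    return {"email": email, "_source": row["_source"]} if email else None

def load(rows):
    """Materialize cleaned rows into a single 'warehouse' table."""
    return [r for r in (clean(x) for x in rows) if r is not None]

sources = {
    "crm": [{"Email": " Ada@Example.com "}],
    "billing": [{"email": "bob@example.com"}, {"email": ""}],
}
warehouse = load(extract(sources))
print(warehouse)
# → [{'email': 'ada@example.com', '_source': 'crm'},
#    {'email': 'bob@example.com', '_source': 'billing'}]
```

The value of a managed service is that the extract and clean stages, hand-written here, come as maintained connectors and transformations.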
Read next:
- Microsoft acquires Citus Data with plans to create a ‘Best Postgres Experience’
- Autodesk acquires PlanGrid for $875 million, to digitize and automate construction workflows
- GitHub acquires Spectrum, a community-centric conversational platform


workers.dev will soon allow users to deploy their Cloudflare Workers to a subdomain of their choice

Melisha Dsouza
20 Feb 2019
2 min read
Cloudflare users will very soon be able to deploy Workers without having a Cloudflare domain: they will be able to deploy their Cloudflare Workers to a subdomain of their choice under .workers.dev. According to the Cloudflare blog, this is a step towards making it easy for users to get started with Workers and build a new serverless project from scratch.

Cloudflare Workers' serverless execution environment allows users to create new applications or improve existing ones without configuring or maintaining infrastructure. Cloudflare Workers run on Cloudflare's servers, not in a user's browser, meaning that a user's code runs in a trusted environment where it cannot be bypassed by malicious clients. The workers.dev domain was obtained through Google's TLD launch program. Customers can head over to workers.dev to claim a subdomain (one per user); workers.dev is itself fully served using Cloudflare Workers.

Zack Bloom, the Director of Product for Product Strategy at Cloudflare, says that workers.dev will be especially useful for serverless apps: with no cold starts, users get instant scaling to almost any volume of traffic, making this type of serverless seem faster and cheaper.

Cloudflare Workers have received an enthusiastic response from users across Hacker News and Twitter:

https://twitter.com/MrAhmadAwais/status/1097919710249783297

You can head over to the Cloudflare blog for more information on this news.

Read next:
- Cloudflare's 1.1.1.1 DNS service is now available as a mobile app for iOS and Android
- Cloudflare's Workers enable containerless cloud computing powered by V8 Isolates and WebAssembly
- Cloudflare Workers KV, a distributed native key-value store for Cloudflare Workers


Serverless Computing 101

Guest Contributor
09 Feb 2019
5 min read
Serverless applications began gaining popularity when Amazon launched AWS Lambda back in the year 2014. Since then, we are becoming more familiar with Serverless Computing as it is exponentially growing in use and reference among the vendors who are entering the markets with their own solutions. The reason behind the hype of serverless computing is it requires no infrastructure management which is a modern approach for the enterprise to lessen up the workload. What is Serverless Computing? It is a special kind of software architecture which executes the application logic in an environment without visible processes, operating systems, servers, and virtual machines. Serverless Computing is also responsible for provisioning and managing the infrastructure entirely by the service provider. Serverless defines a cloud service that abstracts the details of the cloud-based processor from its user; this does not mean servers are no longer needed, but they are not user-specified or controlled. Serverless computing refers to serverless architecture which relates to the applications that depend on a third-party service (BaaS) and container (FaaS). Image Source: Tatvasoft The top serverless computing providers like Amazon, Microsoft, Google and IBM provide serverless computing like FaaS to companies like NetFlix, Coca-cola, Codepen and many more. FaaS Function as a Service is a mode of cloud computing architecture where developers write business logic functions or java development code which are executed by the cloud providers. In this, the developers can upload loads of functionality into the cloud that can be independently executed. The cloud service provider manages everything from execution to scaling it automatically. Key components of FaaS: Events - Something that triggers the execution of the function is regarded as an event. For instance: Uploading a file or publishing a message. Functions - It is regarded as an independent unit of deployment. 
For instance: Processing a file or performing a scheduled task. Resources - Components used by the function is defined as resources. For instance: File system services or database services. BaaS Backend as a Service allows developers to write and maintain only the frontend of the application and enable them by using the backend service without building and maintaining them. The BaaS service providers offer in-built pre-written software activities like user authentication, database management, remote updating, cloud storage and much more. The developers do not have to manage servers or virtual machines to keep their applications running which helps them to build and launch applications more quickly. Image courtesy - Gallantra Use-Cases of Serverless Computing Batch jobs scheduled tasks: Schedules the jobs that require intense parallel computation, IO or network access. Business logic: The orchestration of microservice workloads that execute a series of steps for applying your ideas. Chatbots: Helps to scale at peak demand times automatically. Continuous Integration pipeline: It has the ability to remove the need for pre-provisioned hosts. Captures Database change: Auditing or ensuring modifications in order to meet quality standards. HTTP REST APIs and Web apps: Sends traditional request and gives a response to the workloads. Mobile Backends: Can build on the REST API backend workload above the BaaS APIs. Multimedia processing: To execute a transformational process in response to a file upload by implementing the functions. IoT sensor input messages: Receives signals and scale in response. Stream processing at scale: To process data within a potentially infinite stream of messages. Should you use Serverless Computing? Merits Fully managed services - you do not have to worry about the execution process. Supports event triggered approach - sets the priorities as per the requirements. Offers Scalability - automatically handles load balancing. 
Pay only for execution time - You pay just for what you use.
Quick development and deployment - Run any number of tests without worrying about other components.
Cut-down time-to-market - You can look at your refined product within hours of creating it.

Demerits

Third-party dependency - Developers have to depend completely on the cloud service provider.
Lacking operational tools - You depend on the provider for debugging and monitoring tools.
High complexity - Managing many functions takes more time and is difficult.
Short-lived functions - Functions cannot run for long periods, so serverless suits only applications with short processes.
Limited mapping to database indexes - Configuring nodes and indexes is challenging.
Stateless functions - Resources cannot persist within a function after the function exits.

Serverless computing can be seen as the future of the next generation of cloud-native applications: a new approach to writing and deploying applications that allows developers to focus only on code. It helps reduce time to market along with operational costs and system complexity. Third-party services like AWS Lambda have eliminated the need to set up and configure physical servers or virtual machines. It is always best to seek advice from experts with years of experience in software development with modern technologies.

Author Bio: Vikash Kumar works as a manager at the software outsourcing company Tatvasoft.com and has a keen interest in blogging and likes to share useful articles on computing. He has also published bylines in major publications like KDnuggets, Entrepreneur, SAP, and many more.

Google Cloud Firestore, the serverless, NoSQL document database, is now generally available Kelsey Hightower on Serverless and Security on Kubernetes at KubeCon + CloudNativeCon Introducing GitLab Serverless to deploy cloud-agnostic serverless functions and applications
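The FaaS building blocks described above (events, functions, resources) can be sketched in a few lines of Python. This is an illustrative toy, not any provider's actual API: the handler signature, the event shape, and the in-memory `storage` resource are all assumptions made for the example.

```python
# Toy sketch of the FaaS model: an event (a simulated file upload)
# triggers an independent function, which uses a resource (an
# in-memory "storage service"). Real providers such as AWS Lambda
# define their own handler signatures and event shapes.

storage = {}  # stands in for a file-system or database resource

def handle_upload(event, context=None):
    """Function: an independent unit of deployment, run once per event."""
    name, data = event["filename"], event["body"]
    storage[name] = data.upper()  # "process" the uploaded file
    return {"status": 200, "processed": name}

# Event: something that triggers execution, e.g. uploading a file
# or publishing a message.
result = handle_upload({"filename": "report.txt", "body": "hello"})
print(result)   # {'status': 200, 'processed': 'report.txt'}
print(storage)  # {'report.txt': 'HELLO'}
```

The provider would invoke `handle_upload` on demand and scale the number of concurrent invocations automatically; the developer ships only the function.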
Introducing Platform9 Managed Kubernetes Service

Amrata Joshi
04 Feb 2019
3 min read
Today, the team at Platform9, a company known for its SaaS-managed hybrid cloud, introduced Platform9 Managed Kubernetes (PMK), a fully managed, enterprise-grade Kubernetes service that runs on VMware with a full SLA guarantee. It enables enterprises to deploy and run Kubernetes easily, without management overhead or advanced Kubernetes expertise. It features enterprise-grade capabilities including multi-cluster operations, zero-touch upgrades, high availability, and monitoring, all handled automatically and backed by an SLA. PMK is part of Platform9's hybrid cloud solution, which helps organizations centrally manage VMs, containers, and serverless functions in any environment. Enterprises can support Kubernetes at scale alongside their traditional VMs, legacy applications, and serverless functions.

Features of Platform9 Managed Kubernetes

Self-service, cloud experience

IT operations teams and VMware administrators can now offer developers a simple, self-service provisioning and automated management experience. Multiple Kubernetes clusters can be deployed with the click of a button and operated under the strictest SLAs.

Run Kubernetes anywhere

PMK allows organizations to run Kubernetes instantly, anywhere. It delivers centralized visibility and management across all Kubernetes environments, whether on-premises, in the public cloud, or at the edge. This helps organizations curb shadow IT and VM/container sprawl, ensure compliance, improve utilization, and reduce costs across all infrastructure.

Speed

PMK gets enterprises up and running on VMware in less than an hour and eliminates the operational complexity of Kubernetes at scale. It helps enterprises modernize their VMware environments without any hardware or configuration changes.
Open Ecosystem

By delivering open source Kubernetes on VMware without code forks, enterprises can benefit from the open source community and the full range of Kubernetes-related services and applications, while ensuring portability across environments.

Sirish Raghuram, co-founder and CEO of Platform9, said, "Kubernetes is the #1 enabler for cloud-native applications and is critical to the competitive advantage for software-driven organizations today. VMware was never designed to run containerized workloads, and integrated offerings in the market today are extremely clunky, hard to implement and even harder to manage. We're proud to take the pain out of Kubernetes on VMware, delivering a pure open source-based, Kubernetes-as-a-Service solution that is fully managed, just works out of the box, and with an SLA guarantee in your own environment."

To learn more about delivering Kubernetes on VMware, check out the demo video.

Platform9 announces a new release of Fission.io, the open source, Kubernetes-native Serverless framework CNCF releases 9 security best practices for Kubernetes, to protect a customer's infrastructure GitLab 11.7 releases with multi-level child epics, API integration with Kubernetes, search filter box and more
Microsoft Cloud services’ DNS outage results in deleting several Microsoft Azure database records

Bhagyashree R
04 Feb 2019
2 min read
On January 29, Microsoft Cloud services including Microsoft Azure, Office 365, and Dynamics 365 suffered a major outage. Customers experienced intermittent access to Office 365, and several database records were deleted. This comes just after a major outage that prevented Microsoft 365 users in Europe from accessing their emails for an entire day.

https://twitter.com/AzureSupport/status/1090359445241061376

Users who were already logged into Microsoft services weren't affected; however, those trying to log into new sessions were unable to do so.

How did this Microsoft Azure outage happen?

According to Microsoft, the preliminary cause of the outage was a DNS issue with CenturyLink, an external DNS provider. Microsoft Azure's status page read, "Engineers identified a DNS issue with an external DNS provider". CenturyLink said in a statement that its DNS services were disrupted by a software defect, which affected connectivity to a customer's cloud resources.

Along with the authentication issues, the outage also caused the deletion of customers' live data stored in Transparent Data Encryption (TDE) databases in Microsoft Azure. TDE databases encrypt data dynamically and decrypt it when customers access it; because the data is stored in encrypted form, intruders cannot read the database. For encryption, many Azure users store their own encryption keys in Microsoft's Key Vault encryption key management system. The deletion was triggered by a script that automatically drops TDE database tables when their corresponding keys can no longer be accessed in the Key Vault.

Microsoft was able to restore the tables from a five-minute snapshot backup, but customers whose transactions were processed within five minutes of the table drop were expected to raise a support ticket asking for a copy of the database.

Read more about Microsoft's Azure outage in detail on ZDNet.
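The five-minute snapshot cadence explains exactly which data was at risk: anything committed after the last snapshot is absent from a restore. The following toy Python sketch illustrates that failure mode; it is purely illustrative, not Azure's implementation, and the interval-boundary snapshot logic is an assumption made for the example.

```python
# Toy illustration of why restoring a dropped table from a periodic
# snapshot loses the transactions committed after the last snapshot.

table = []               # live table of committed transactions
snapshot = []            # backup, refreshed every SNAPSHOT_INTERVAL seconds
SNAPSHOT_INTERVAL = 300  # five minutes, as in Microsoft's case

def commit(txn, now):
    global snapshot
    table.append((now, txn))
    # take a snapshot on each interval boundary (simplified assumption)
    if now % SNAPSHOT_INTERVAL == 0:
        snapshot = list(table)

for t, txn in [(0, "A"), (120, "B"), (300, "C"), (430, "D")]:
    commit(txn, t)

# Keys become unreachable; the cleanup script drops the table.
table = []

# Restore from the last snapshot (taken at t=300): "D" (t=430) is lost.
table = list(snapshot)
print([txn for _, txn in table])  # ['A', 'B', 'C']
```

Transaction "D", committed 130 seconds after the last snapshot, is exactly the kind of record for which affected customers had to raise a support ticket.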
Microsoft announces Internet Explorer 10 will reach end-of-life by January 2020 Outage in the Microsoft 365 and Gmail made users unable to log into their accounts Microsoft Office 365 now available on the Mac App Store
Former Google Cloud CEO joins Stripe board just as Stripe joins the global Unicorn Club

Bhagyashree R
31 Jan 2019
2 min read
Stripe, the payments infrastructure company, has received a whopping $100 million in funding from Tiger Global Management, bringing its valuation to $22.5 billion, as reported by The Information on Tuesday. Last September, it secured $245 million in a funding round also led by Tiger Global Management.

Founded in 2010 by the Irish brothers Patrick and John Collison, Stripe has become one of the most valuable U.S. "unicorns", a term used for firms worth more than $1 billion. The company also boasts an impressive list of clients, recently adding Google and Uber to its stable of users. It is now planning to expand its platform by launching a point-of-sale payments terminal package targeted at online retailers making the jump to offline.

A Stripe spokesperson told CNBC, "Stripe is rapidly scaling internationally, as well as extending our platform into issuing, global fraud prevention, and physical stores with Stripe Terminal. The follow-on funding gives us more leverage in these strategic areas."

The company is also expanding its team. On Tuesday, Patrick Collison announced that Diane Greene, a member of Alphabet's board of directors, will be joining Stripe's board of directors. Joining alongside Greene are Michael Moritz, a partner at Sequoia Capital; Michelle Wilson, former general counsel at Amazon; and Jonathan Chadwick, former CFO of VMware, McAfee, and Skype.

https://twitter.com/patrickc/status/1090386301642141696

In addition to Tiger Global Management, the startup has also been supported by various other investors including Sequoia Capital, Khosla Ventures, Andreessen Horowitz, and PayPal co-founders Peter Thiel, Max Levchin, and Elon Musk.

For more details, read the full story on The Information website.
PayPal replaces Flow with TypeScript as their type checker for every new web app After BitPay, Coinbase bans Gab accounts and its founder, Andrew Torba Social media platforms, Twitter and Gab.com, accused of facilitating recent domestic terrorism in the U.S.
Dropbox purchases workflow and eSignature startup ‘HelloSign’ for $230M

Melisha Dsouza
29 Jan 2019
2 min read
Dropbox has purchased HelloSign, a San Francisco-based private company that provides lightweight document workflow and eSignature services. Dropbox paid $230 million for the deal, which is expected to close in the first quarter.

Dropbox co-founder and CEO Drew Houston said in a statement, "HelloSign has built a thriving business focused on eSignature and document workflow products that their users love. Together, we can deliver an even better experience to Dropbox users, simplify their workflows, and expand the market we serve."

Dropbox's SVP of engineering, Quentin Clark, told TechCrunch that HelloSign's workflow capabilities, added in 2017, were key to the purchase. He called their investment in APIs 'unique' and said their workflow products are aligned with the long-term direction of the 'broader vision' Dropbox will pursue. This could possibly mean extending Dropbox's storage capabilities in the long run.

The deal extends a partnership Dropbox established with HelloSign last year to use two HelloSign technologies to offer eSignature and electronic fax solutions to Dropbox users.

HelloSign CEO Joseph Walla says being part of Dropbox will give HelloSign access to the resources of a much larger public company, allowing it to reach a broader market than it could on a standalone basis. He stated, "Together with Dropbox, we can bring more seamless document workflows to even more customers and dramatically accelerate our impact."

HelloSign COO Whitney Bouck said that the company will remain an independent entity and will continue to operate with its current management structure as part of the Dropbox family. She added that all HelloSign employees will be offered employment at Dropbox as part of the deal.

You can head over to TechCrunch to know more about this announcement.
How Dropbox uses automated data center operations to reduce server outage and downtime NGINX Hybrid Application Delivery Controller Platform improves API management, manages microservices and much more! Microsoft acquires Citus Data with plans to create a ‘Best Postgres Experience’  
Amazon launches TLS Termination support for Network Load Balancer

Bhagyashree R
25 Jan 2019
2 min read
Starting from yesterday, AWS Network Load Balancers (NLBs) support TLS/SSL. This new feature simplifies the process of building secure web applications by allowing users to terminate TLS connections at an NLB. The support is fully integrated with AWS PrivateLink and is also supported by AWS CloudFormation.

https://twitter.com/colmmacc/status/1088510453767000064

Here are some of the features and benefits it comes with:

Simplified management

Using TLS at scale requires extra management work, such as distributing the server certificate to each backend server, and the presence of multiple copies of the certificate increases the attack surface. TLS termination on NLB provides a central management point for certificates by integrating with AWS Certificate Manager (ACM) and Identity and Access Management (IAM).

Improved compliance

The feature provides the flexibility of predefined security policies. Developers can use these built-in policies to specify the cipher suites and protocol versions acceptable to their application. This helps when pursuing PCI and FedRAMP compliance, and makes a perfect TLS score achievable.

Classic upgrade

Users currently using a Classic Load Balancer for TLS termination can switch to NLB, which helps them scale quickly under increased load. They can also use a static IP address for their NLB and log the source IP addresses of requests.

Access logs

Users can enable access logs for their NLBs and direct them to the S3 bucket of their choice. The logs document the TLS protocol version, cipher suite, connection time, handshake time, and more.

To read more in detail, check out Amazon's announcement.
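Since the feature is supported by AWS CloudFormation, a TLS listener on an NLB can be declared in a template along the following lines. This is a sketch, not a complete stack: the `!Ref` targets are placeholder resources you would define elsewhere in the template, and the security-policy name is just one example of a predefined policy.

```yaml
# Sketch of a TLS listener on a Network Load Balancer in CloudFormation.
# The referenced NLB, ACM certificate, and target group are placeholders.
TLSListener:
  Type: AWS::ElasticLoadBalancingV2::Listener
  Properties:
    LoadBalancerArn: !Ref MyNetworkLoadBalancer    # an NLB defined elsewhere
    Protocol: TLS
    Port: 443
    Certificates:
      - CertificateArn: !Ref MyACMCertificateArn   # central cert management via ACM
    SslPolicy: ELBSecurityPolicy-TLS-1-2-2017-01   # example predefined policy
    DefaultActions:
      - Type: forward
        TargetGroupArn: !Ref MyTargetGroup
```

The `SslPolicy` property is where the predefined security policies mentioned above come in: choosing a TLS-1.2-only policy is one way to satisfy compliance requirements without hand-tuning cipher suites.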
Amazon is reportedly building a video game streaming service, says Information Amazon’s Ring gave access to its employees to watch live footage of the customers, The Intercept reports AWS introduces Amazon DocumentDB featuring compatibility with MongoDB, scalability and much more