
Tech News - Cloud Computing

175 Articles
Oracle introduces Oracle Cloud Native Framework at KubeCon+CloudNativeCon 2018

Amrata Joshi
12 Dec 2018
3 min read
Yesterday, at the ongoing KubeCon + CloudNativeCon North America 2018, the Oracle team introduced the Oracle Cloud Native Framework. The framework gives developers a cloud native solution for public cloud, on-premises, and hybrid cloud deployments. It supports both modern cloud native applications and traditional workloads such as WebLogic, Java, and databases, and it comprises the recently announced Oracle Linux Cloud Native Environment and Oracle Cloud Infrastructure native services. Because the framework supports both dev and ops, it can be used by startups and enterprises alike.

What's new in the Oracle Cloud Native Framework?

Application definition and development

Oracle Functions: A serverless cloud service based on the open source Fn Project that can run on premises, in a data center, or on any cloud. With Oracle Functions, developers can seamlessly deploy and execute function-based applications without the hassle of managing compute infrastructure. It is Docker container-based and follows a pay-per-use model.

Streaming: A highly scalable, multi-tenant streaming platform that makes collecting and managing streaming data easy. It enables applications such as security, supply chain, and IoT, where large amounts of data are collected from various sources and processed in real time.

Provisioning

Resource Manager: A managed service that provisions Oracle Cloud Infrastructure resources and services. It reduces configuration errors and increases productivity by managing infrastructure as code.

Observability and analysis

Monitoring: An integrated service that reports metrics from all resources and services in Oracle Cloud Infrastructure. It uses predefined metrics and dashboards, or a service API, to give a holistic view of the performance, health, and capacity of the system. The monitoring service uses alarms to track metrics and take action when they vary from or exceed defined thresholds.

Notification Service: A scalable service that broadcasts messages to distributed components such as PagerDuty and email. The notification service helps users deliver messages about Oracle Cloud Infrastructure to large numbers of subscribers.

Events: Enables users to react to changes in the state of Oracle Cloud Infrastructure resources; it can store information in object storage and trigger functions to take action.

The Oracle Cloud Native Framework provides cloud-native capabilities and offerings to customers by using the open standards established by the CNCF. Don Johnson, executive vice president of product development, Oracle Cloud Infrastructure, said, "With the growing popularity of the CNCF as a unifying and organizing force in the cloud native ecosystem and organizations increasingly embracing multi cloud and hybrid cloud models, developers should have the flexibility to build and deploy their applications anywhere they choose without the threat of cloud vendor lock-in. Oracle is making this a reality."

To know more about this news, check out the press release.

Introducing 'Pivotal Function Service' (alpha): an open, Kubernetes based, multi-cloud serverless framework for developer workloads
Red Hat acquires Israeli multi-cloud storage software company, NooBaa
Cloud Native Application Bundle (CNAB): Docker, Microsoft partner on an open source cloud-agnostic all-in-one packaging format
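The pay-per-use model described above means a developer supplies only a handler; the platform owns the compute that runs it. A minimal sketch of that handler shape, with a local stand-in dispatcher in place of the real Fn FDK (the `ctx`/`data` signature and the `invoke` helper here are our illustrative assumptions, not the official API):

```python
import json

# A function in the Fn / Oracle Functions style: a small handler that
# receives an event payload and returns a response. The platform, not
# the developer, manages the compute that executes it.
def handler(ctx, data):
    """ctx: invocation metadata; data: raw request body (bytes)."""
    name = json.loads(data or b"{}").get("name", "world")
    return {"message": f"Hello, {name}!"}

# Local stand-in for the platform's invocation loop (the real Fn FDK
# would do this inside the function's Docker container).
def invoke(payload: dict) -> dict:
    return handler({"call_id": "local-test"}, json.dumps(payload).encode())

print(invoke({"name": "Fn"}))  # {'message': 'Hello, Fn!'}
```

Because the handler is plain code with no infrastructure concerns, the same function can run on premises, in a data center, or on any cloud, which is the portability claim the Fn Project makes.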


Introducing GitLab Serverless to deploy cloud-agnostic serverless functions and applications

Amrata Joshi
12 Dec 2018
2 min read
Yesterday, GitLab and TriggerMesh introduced GitLab Serverless, which helps enterprises run serverless workloads on any cloud using Knative, Google's Kubernetes-based platform for building, deploying, and managing serverless workloads. GitLab Serverless lets businesses deploy serverless functions and applications on any cloud or infrastructure from the GitLab UI using Knative. It is scheduled for public release on 22 December 2018 in GitLab 11.6, and it incorporates technology developed by TriggerMesh, a multi-cloud serverless platform, for running serverless workloads on Kubernetes.

Sid Sijbrandij, co-founder and CEO of GitLab, said, "We're pleased to offer cloud-agnostic serverless as a built-in part of GitLab's end-to-end DevOps experience, allowing organizations to go from planning to monitoring in a single application."

Functions as a Service (FaaS)

With GitLab Serverless, users can run their own Function-as-a-Service (FaaS) on any infrastructure without worrying about vendor lock-in. FaaS lets users write small, discrete units of code with event-based execution; when deploying the code, developers need not worry about the infrastructure it will run on. It also saves resources, since the code executes only when needed and no resources are consumed while the app is idle.

Kubernetes and Knative

Running serverless workloads on Kubernetes brings flexibility and portability. GitLab Serverless uses Knative to create a seamless experience across the entire DevOps lifecycle.

Deploy on any infrastructure

With GitLab Serverless, users can deploy to any cloud or on-premises infrastructure. GitLab can connect to any Kubernetes cluster, so users can run their serverless workloads anywhere Kubernetes runs.

Auto-scaling with "scale to zero"

The Kubernetes cluster automatically scales up and down based on load. "Scale to zero" stops resource consumption when there are no requests.

To know more about this news, check out the official announcement.

Haskell is moving to GitLab due to issues with Phabricator
GitLab 11.5 released with group security and operations-focused dashboard, control access to GitLab pages
GitLab 11.4 is here with merge request reviews and many more features
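Under the hood, "deploy a function with Knative" boils down to submitting a Knative Service object to the cluster. A sketch that builds such a manifest as a plain dict; the field names follow the Knative Serving API of the time, but treat the exact apiVersion and layout as illustrative rather than authoritative:

```python
import json

# Minimal Knative Service manifest builder. Knative scales the
# underlying deployment with request load, down to zero pods when no
# requests arrive, which is the "scale to zero" behavior above.
def knative_service(name: str, image: str, namespace: str = "default") -> dict:
    return {
        "apiVersion": "serving.knative.dev/v1alpha1",  # version of the era (assumed)
        "kind": "Service",
        "metadata": {"name": name, "namespace": namespace},
        "spec": {
            "template": {
                "spec": {"containers": [{"image": image}]}
            }
        },
    }

# Image name and registry here are made up for illustration.
manifest = knative_service("hello-fn", "registry.example.com/hello:v1")
print(json.dumps(manifest, indent=2))
```

Because the manifest targets any conforming Knative installation, the same artifact deploys to any Kubernetes cluster GitLab is connected to, which is where the cloud-agnostic claim comes from.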


‘Istio’ available in beta for Google Kubernetes Engine, will accelerate app delivery and improve microservice management

Melisha Dsouza
12 Dec 2018
3 min read
KubeCon + CloudNativeCon, happening in Seattle this week, has excited developers with a plethora of new announcements and releases. The conference, dedicated to Kubernetes and other cloud native technologies, brings together adopters and technologists from leading open source and cloud native communities to discuss new advancements on the cloud front. At this year's conference, Google Cloud announced the beta availability of Istio for Google Kubernetes Engine.

Istio was launched in mid-2017 as a collaboration between Google, IBM, and Lyft. According to Google, this open source "service mesh", used to connect, manage, and secure microservices on a variety of platforms such as Kubernetes, will play a vital role in helping developers make the most of their microservices. Yesterday, Google Cloud Director of Engineering Chen Goldberg and Director of Product Management Jennifer Lin said in a blog post that the availability of Istio on Google Kubernetes Engine will provide "more granular visibility, security and resilience for Kubernetes-based apps". The service will be made available through Google's Cloud Services Platform, which bundles together the tools and services developers need to get their container apps up and running on the company's cloud or in on-premises data centers.

In an interview with SiliconANGLE, Holger Mueller, principal analyst and vice president of Constellation Research Inc., compared software containers to cars: "Kubernetes has built the road but the cars have no idea where they are, how fast they are driving, how they interact with each other or what their final destination is. Enter Istio and enterprises get all of the above. Istio is a logical step for Google and a sign that the next level of deployments is about manageability, visibility and awareness of what enterprises are running."

Additional features of Istio

Istio lets developers and operators manage applications as services rather than as lots of different infrastructure components.

Istio allows users to encrypt all their network traffic, layering transparently onto any existing distributed application. Users need not embed any client libraries in their code to use this functionality.

Istio on GKE also comes with an integration into Stackdriver, Google Cloud's monitoring and logging service.

Istio securely authenticates and connects a developer's services to one another. It transparently adds mTLS to service communication, encrypting all information in transit, and it provides a service identity for each service, allowing developers to create service-level policies enforced for each individual application transaction while providing non-replayable identity protection.

Istio is yet another step for GKE toward making it easier to secure and efficiently manage containerized applications.

Head over to TechCrunch for more insights on this news.

Google's Cloud Robotics platform, to be launched in 2019, will combine the power of AI, robotics and the cloud
What's new in Google Cloud Functions serverless platform
Introducing 'Pivotal Function Service' (alpha): an open, Kubernetes based, multi-cloud serverless framework for developer workloads
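The "transparent mTLS, no client libraries" point above works because encryption is declared as mesh configuration, not written into application code. A sketch of such a policy object built as a dict; the API group and shape follow the Istio authentication Policy resource of roughly that era, and should be read as illustrative:

```python
import json

# An Istio authentication policy requiring mTLS for every workload in a
# namespace. Applications keep speaking plain HTTP; the sidecar proxies
# negotiate mutual TLS between themselves.
def mtls_policy(namespace: str) -> dict:
    return {
        "apiVersion": "authentication.istio.io/v1alpha1",  # Istio ~1.0-era group (assumed)
        "kind": "Policy",
        "metadata": {"name": "default", "namespace": namespace},
        "spec": {"peers": [{"mtls": {}}]},
    }

# "payments" is a made-up namespace for illustration.
print(json.dumps(mtls_policy("payments"), indent=2))
```

The design choice worth noticing: because the policy lives in the mesh layer, operators can tighten or relax encryption per namespace without redeploying a single application container.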


Introducing ‘Pivotal Function Service’ (alpha): an open, Kubernetes based, multi-cloud serverless framework for developer workloads

Melisha Dsouza
10 Dec 2018
3 min read
Last week, Pivotal announced the 'Pivotal Function Service' (PFS) in alpha. Until now, Pivotal has focused on making open source tools for enterprise developers but has lacked a serverless component in its suite of offerings; that changes with the launch of PFS.

PFS is designed to work both on-premises and in the cloud in a cloud-native fashion while being open source. It is a Kubernetes-based, multi-cloud function service offering customers a single platform for all their workloads on any cloud. Developers can deploy and operate databases, batch jobs, web APIs, legacy apps, event-driven functions, and many other workloads the same way everywhere, thanks to the Pivotal Cloud Foundry (PCF) platform, which comprises Pivotal Application Service (PAS), Pivotal Container Service (PKS), and now Pivotal Function Service (PFS). Providing the same developer and operator experience on every public or private cloud, PFS is event-oriented, with built-in components that make it easy to architect loosely coupled, streaming systems. Its buildpacks simplify packaging, and it is operator-friendly, providing a secure, low-touch experience running atop Kubernetes. The fact that PFS can work on any cloud as an open product sets it apart from cloud providers like Amazon, Google, and Microsoft, which provide similar services that run exclusively on their own clouds.

Features of PFS

PFS is built on Knative, an open source project led by Google that simplifies how developers deploy functions atop Kubernetes and Istio. PFS runs on Kubernetes and Istio and helps customers take advantage of both technologies while abstracting away their complexity.

PFS allows customers to use familiar, container-based workflows for serverless scenarios.

PFS Event Sources helps customers create feeds from external event sources such as GitHub webhooks, blob stores, and database services. PFS can be connected easily with popular message brokers such as Kafka, Google Pub/Sub, and RabbitMQ, which provide reliable backing services for messaging channels.

Pivotal has continued to develop the riff invoker model in PFS, to help developers deliver both streaming and non-streaming function code using simple, language-idiomatic interfaces.

The new package includes several key components for developers, including a native eventing ability that provides a way to build rich event triggers to call whatever functionality a developer requires within a Kubernetes-based environment. This is particularly important for companies with hybrid use cases that need to manage events across on-premises and cloud environments seamlessly.

Head over to Pivotal's official blog to know more about this announcement.

Google Kubernetes Engine was down last Friday, users left clueless of outage status and RCA
Introducing Alpha Support for Volume Snapshotting in Kubernetes 1.12
'AWS Service Operator' for Kubernetes now available allowing the creation of AWS resources using kubectl
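The riff invoker idea referenced above is that developers write a plain, language-idiomatic function and the invoker adapts it to both one-shot and streaming invocation. A toy sketch under that assumption (the `invoke`/`invoke_stream` names are ours, not riff's):

```python
from typing import Callable, Iterable, Iterator

# The user's function: ordinary Python, no framework types.
def uppercase(word: str) -> str:
    return word.upper()

# Non-streaming invocation: one event in, one result out.
def invoke(fn: Callable[[str], str], message: str) -> str:
    return fn(message)

# Streaming invocation: the same function applied to each event as it
# arrives from a messaging channel (e.g. a Kafka or RabbitMQ topic).
def invoke_stream(fn: Callable[[str], str],
                  messages: Iterable[str]) -> Iterator[str]:
    for m in messages:
        yield fn(m)

print(invoke(uppercase, "hello"))                 # HELLO
print(list(invoke_stream(uppercase, ["a", "b"])))  # ['A', 'B']
```

The appeal is that the function body never changes between the two modes; only the invoker's wiring to the broker does.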


Cloud Native Application Bundle (CNAB): Docker, Microsoft partner on an open source cloud-agnostic all-in-one packaging format

Savia Lobo
05 Dec 2018
3 min read
At DockerCon Europe 2018 in Barcelona, Microsoft, in collaboration with the Docker community, announced the Cloud Native Application Bundle (CNAB), an open source, cloud-agnostic specification for packaging and running distributed applications.

Cloud Native Application Bundle (CNAB)

CNAB is a combined effort by Microsoft and the Docker community to provide a single all-in-one packaging format that unifies the management of multi-service, distributed applications across different toolchains. Docker is the first to implement CNAB for containerized applications and plans to expand CNAB across the Docker platform to support new application development, deployment, and lifecycle management. CNAB lets users define resources that can be deployed to any combination of runtime environments and tooling, including Docker Engine, Kubernetes, Helm, automation tools, and cloud services.

Patrick Chanezon, a member of technical staff at Docker Inc., writes, "Initially CNAB support will be released as part of our docker-app experimental tool for building, packaging and managing cloud-native applications. Docker lets you package CNAB bundles as Docker images, so you can distribute and share through Docker registry tools including Docker Hub and Docker Trusted Registry." Docker also plans to enable organizations to deploy and manage CNAB-based applications in Docker Enterprise soon.

Scott Johnston, chief product officer at Docker, said, "this is not a Docker proprietary thing, this is not a Microsoft proprietary thing; it can take Compose files as inputs, it can take Helm charts as inputs, it can take Kubernetes YAML as inputs, it can take serverless artifacts as inputs."

According to Microsoft, it partnered with Docker to solve problems that ISVs (independent software vendors) and enterprises face, including:

Describing an application as a single artifact, even when it is composed of a variety of cloud technologies
Provisioning applications without having to master dozens of tools
Managing the lifecycle (particularly installation, upgrade, and deletion) of applications

Features that CNAB brings include:

Manage discrete resources as a single logical unit that comprises an app
Use and define operational verbs for lifecycle management of an app
Sign and digitally verify a bundle, even when the underlying technology doesn't natively support it
Attest and digitally verify that the bundle has achieved a given state, to control how the bundle can be used
Export a bundle and all its dependencies to reliably reproduce it in another environment, including offline environments (IoT edge, air-gapped environments)
Store bundles in repositories for remote installation

According to one review on a Hacker News thread, "The goal with CNAB is to be able to version your application with all of its components and then ship that as one logical unit making it reproducible. The package format is flexible enough to let you use the tooling that you're already using". Another user said, "CNAB makes reproducibility possible by providing unified lifecycle management, packaging, and distribution. Of course, if bundle authors don't take care to work around problems with imperative logic, that's a risk."

To know more about the Cloud Native Application Bundle (CNAB) in detail, visit the Microsoft blog.

Microsoft and Mastercard partner to build a universally-recognized digital identity
Creating a Continuous Integration commit pipeline using Docker [Tutorial]
Docker faces public outcry as Docker for Mac and Windows can be downloaded only via Docker Store login
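Concretely, a CNAB bundle is described by a bundle.json descriptor. A minimal sketch of such a descriptor built as a dict; the field names follow our reading of the CNAB specification draft, and the schema version string and image names are illustrative assumptions:

```python
import json

# Minimal CNAB-style bundle descriptor. The invocation image carries the
# installer logic (e.g. Helm, Compose, or Terraform steps) that a CNAB
# runtime executes for install/upgrade/uninstall.
def bundle(name: str, version: str, invocation_image: str) -> dict:
    return {
        "schemaVersion": "v1.0.0-WD",  # working-draft schema id (assumed)
        "name": name,
        "version": version,
        "invocationImages": [
            {"imageType": "docker", "image": invocation_image}
        ],
        # Operational verbs beyond the standard lifecycle can be declared.
        "actions": {"status": {"modifies": False}},
    }

print(json.dumps(bundle("helloworld", "0.1.0",
                        "example/helloworld-cnab:0.1.0"), indent=2))
```

Versioning the whole application as one descriptor like this is what makes the "ship it as one logical unit" claim in the Hacker News comment possible.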


Microsoft Connect(); 2018 Azure updates: Azure Pipelines extension for Visual Studio Code, GitHub releases and much more!

Melisha Dsouza
05 Dec 2018
4 min read
"I'm excited to share some of the latest things we're working on at Microsoft to help developers achieve more when building the applications of tomorrow, today."
- Scott Guthrie, Executive Vice President, Cloud and Enterprise Group, Microsoft

On 4 December, at the Microsoft Connect(); 2018 conference, the tech giant announced a series of updates in its Azure domain. Aiming to make it easy for operators and developers to adopt and use Kubernetes, Microsoft announced the public preview of Azure Kubernetes Service virtual nodes and Azure Container Instances GPU support. It also announced the Azure Pipelines extension for Visual Studio Code, GitHub Releases support, and much more.

#1 Azure Kubernetes Service virtual nodes and Azure Container Instances GPU support enter public preview

Azure Kubernetes Service (AKS) virtual nodes are powered by the open source Virtual Kubelet technology. This release lets customers fully experience serverless Kubernetes: they can extend the consistent, powerful Kubernetes API provided by AKS with the scalable, container-based compute capacity of Azure Container Instances (ACI). With AKS virtual nodes, customers can precisely allocate the number of additional containers needed rather than waiting for additional VM-based nodes to spin up. ACI is billed by the second, based on the resources a customer specifies, so costs match workloads. This, in turn, helps applications built on the API provided by Kubernetes reap the benefits of serverless platforms without having to manage additional compute resources. Adding GPU support to ACI enables a new class of compute-intensive applications through AKS virtual nodes. According to the blog, ACI will initially support Nvidia's K80, P100, and V100 GPUs, and users can specify the type and number of GPUs they would like for their container.

#2 Azure Pipelines extension for Visual Studio Code

The Azure Pipelines extension for Visual Studio Code brings syntax highlighting and IntelliSense that are aware of the Azure Pipelines YAML format. Previously, developers had to remember exactly which keys are legal, flipping back and forth to the documentation while keeping track of where they were in the file. With this extension, they are alerted in red "ink" if they write "tasks:" instead of "task:", and they can press Ctrl-Space (or Cmd-Space on macOS) to see what is accepted at that point in the file.

#3 GitHub Releases

Developers can now seamlessly manage GitHub Releases using Azure Pipelines: creating new releases, modifying drafts, or discarding older drafts. The new GitHub Releases task supports actions such as attaching binary files, publishing draft releases, marking a release as a pre-release, and much more.

#4 Azure IoT Edge support in Azure DevOps Projects

Azure DevOps Projects lets developers set up a fully functional DevOps pipeline straight from the Azure portal, customized to the programming language and application platform they want to use, along with the Azure functionality they want to leverage and deploy to. The community has shown growing interest in using Azure DevOps to build and deploy IoT-based solutions, and support for Azure IoT Edge in the Azure DevOps Projects workflow makes this easy: customers can deploy IoT Edge modules written in Node.js, Python, Java, .NET Core, or C, helping them develop, build, and deploy their IoT Edge applications. This support provides customers with:

A Git code repository with a sample IoT Edge application written in Node.js, Python, Java, .NET Core, or C
A build and a release pipeline set up for deployment to Azure
Easy provisioning of all Azure resources required for Azure IoT Edge

#5 ServiceNow integration with Azure Pipelines

Azure has joined forces with ServiceNow, an organization focused on automating routine activities, tasks, and processes at work to help enterprises gain efficiencies and increase the productivity of their workforce. Developers can now automate the deployment process using Azure Pipelines and use ServiceNow Change Management for risk assessment, scheduling, approvals, and oversight while updating production.

You can head over to Microsoft's official blog to know more about these announcements.

Microsoft and Mastercard partner to build a universally-recognized digital identity
Microsoft open sources Simple Encrypted Arithmetic Library (SEAL) 3.1.0, with aims to standardize homomorphic encryption
Microsoft reportedly ditching EdgeHTML for Chromium in the Windows 10 default browser
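The "tasks:" vs "task:" red-ink behavior described for the VS Code extension boils down to schema-aware key checking: flag any key that is not legal at a given point in the YAML. A toy checker with a hypothetical, much-reduced schema (the real Azure Pipelines schema is far larger):

```python
# A tiny subset of legal keys for one pipeline step; purely illustrative.
ALLOWED_STEP_KEYS = {"task", "script", "displayName", "inputs", "condition"}

def check_step(step: dict) -> list:
    """Return the unknown keys an editor would underline in red."""
    return sorted(set(step) - ALLOWED_STEP_KEYS)

good = {"task": "NodeTool@0", "inputs": {"versionSpec": "10.x"}}
bad = {"tasks": "NodeTool@0"}  # the common typo: 'tasks' instead of 'task'
print(check_step(good))  # []
print(check_step(bad))   # ['tasks']
```

IntelliSense completion is the same schema used in the other direction: instead of diffing against `ALLOWED_STEP_KEYS` after the fact, the editor offers those keys as you type.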

Kubernetes 1.13 released with new features and fixes to a major security flaw

Prasad Ramesh
04 Dec 2018
3 min read
A privilege escalation flaw in Kubernetes was discussed on GitHub last week, and Red Hat has since released patches for it. Yesterday, Kubernetes 1.13 was also released.

The security flaw

A recent GitHub issue outlines the problem. Designated CVE-2018-1002105, the flaw allowed unauthorized users to craft special requests and establish a connection to a backend server through the Kubernetes API, then send arbitrary requests over the same connection directly to that backend. Red Hat released patches for this vulnerability yesterday. All Kubernetes-based products are affected; Red Hat classifies the impact as critical, so a version upgrade is strongly recommended if you're running an affected product. You can find more details on the Red Hat website.

Let's now look at the new features in Kubernetes 1.13 beyond the security patch.

kubeadm is GA in Kubernetes 1.13

kubeadm is an essential tool for managing the lifecycle of a cluster, from creation through configuration to upgrade, and it is now officially GA. The tool handles bootstrapping of production clusters on current hardware and configuration of core Kubernetes components. With the GA release, advanced features around pluggability and configurability become available. kubeadm aims to be a toolbox for both admins and automated, higher-level systems.

Container Storage Interface (CSI) is also GA

The Container Storage Interface (CSI), introduced as alpha in Kubernetes 1.9 and beta in Kubernetes 1.10, is generally available in Kubernetes 1.13. CSI makes the Kubernetes volume layer truly extensible: third-party storage providers can write plugins that interoperate with Kubernetes without having to modify the core code.

CoreDNS replaces kube-dns as the default DNS server

CoreDNS is replacing kube-dns as the default DNS server for Kubernetes. CoreDNS is a general-purpose, authoritative DNS server that provides an extensible, backwards-compatible integration with Kubernetes. It is a single executable and a single process, supports flexible use cases through custom DNS entries, and is written in Go, making it memory-safe. kube-dns will be supported for at least one more release.

Beyond these, there are other feature updates, such as support for third-party monitoring and more features graduating to stable and beta. For more details on the Kubernetes release, visit the Kubernetes website.

Google Kubernetes Engine was down last Friday, users left clueless of outage status and RCA
Platform9 announces a new release of Fission.io, the open source, Kubernetes-native Serverless framework
Google Cloud hands over Kubernetes project operations to CNCF, grants $9M in GCP credits
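The "custom DNS entries" flexibility mentioned above comes from CoreDNS's plugin-based Corefile configuration. A tiny renderer for one Corefile stanza that answers a custom zone from a static hosts block; the `hosts` and `cache` plugin names follow CoreDNS's documentation, while the zone and addresses are made up for illustration:

```python
# Render a Corefile stanza: serve static A records for a zone, fall
# through to the next plugin for names not listed, cache for 30s.
def corefile_stanza(zone: str, records: dict) -> str:
    hosts = "\n".join(f"        {ip} {name}" for name, ip in records.items())
    return (
        f"{zone}:53 {{\n"
        f"    hosts {{\n{hosts}\n        fallthrough\n    }}\n"
        f"    cache 30\n"
        f"}}"
    )

print(corefile_stanza("corp.example", {"db.corp.example": "10.0.0.5"}))
```

Because the whole server is one process configured by one such file, swapping kube-dns's multi-container setup for CoreDNS also simplifies the DNS deployment itself.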


Amazon re:Invent Day 3: Lambda Layers, Lambda Runtime API and other exciting announcements!

Melisha Dsouza
30 Nov 2018
4 min read
The second-to-last day of Amazon re:Invent 2018 ended on a high note. AWS announced two new features, Lambda Layers and the Lambda Runtime API, that claim to "make serverless development even easier". In addition, AWS announced that Application Load Balancers can now invoke Lambda functions to serve HTTP(S) requests, as well as Ruby language support for Lambda.

#1 Lambda Layers

Lambda Layers let developers centrally manage code and data shared across multiple functions. Instead of packaging and deploying this shared code together with every function that uses it, developers can put common components in a ZIP file and upload it as a Lambda Layer. Layers can be used within an AWS account, shared between accounts, or shared publicly with the developer community. AWS is also publishing a public layer that includes NumPy and SciPy, prebuilt and optimized to help users get data processing and machine learning applications going quickly. Developers can include additional files or data for their functions, including binaries such as FFmpeg or ImageMagick, or dependencies such as NumPy for Python. Layers are added to a function's ZIP file when it is published. Layers can also be versioned to manage updates, and each version is immutable: when a version is deleted or its permissions are revoked, developers can no longer create new functions with it, but functions that already use it continue to work. Lambda Layers help keep function code smaller and more focused on what the application has to build, and besides faster deployments (since less code must be packaged and uploaded), code dependencies can be reused.

#2 Lambda Runtime API

This is a simple interface for using any programming language, or a specific language version, to develop functions. Runtimes can be shared as layers, which lets developers author Lambda functions in the programming language of their choice. Developers using the Runtime API must bundle it with their application artifact or as a Lambda layer that the application uses. When creating or updating a function, users can select a custom runtime. The function must include (in its code or in a layer) an executable file called bootstrap, which is responsible for the communication between the code and the Lambda environment. So far, AWS has made open source C++ and Rust runtimes available. Other open source runtimes that may become available soon include Erlang (Alert Logic), Elixir (Alert Logic), COBOL (Blu Age), Node.js (NodeSource N|Solid), and PHP (Stackery). The Runtime API is also how AWS will support new languages in Lambda going forward. Notably, the C++ runtime combines the simplicity and expressiveness of interpreted languages with good performance and a low memory footprint, while the Rust runtime makes it easy to write highly performant Lambda functions in Rust.

#3 Application Load Balancers can invoke Lambda functions to serve HTTP(S) requests

This new functionality lets users access serverless applications from any HTTP client, including web browsers, and route requests to different Lambda functions based on the requested content. An Application Load Balancer can serve as a common HTTP endpoint, simplifying operations and monitoring for applications that mix servers and serverless computing.

#4 Ruby is now a supported language for AWS Lambda

Developers can write Lambda functions as idiomatic Ruby code and run them on AWS. The AWS SDK for Ruby is included in the Lambda execution environment by default, making it quick and easy for functions to interact directly with AWS resources. Ruby on Lambda can be used through either the AWS Management Console or the AWS SAM CLI, giving developers the reduced operational overhead, scalability, availability, and pay-per-use pricing of Lambda.

Head over to What's new with AWS to stay updated on upcoming AWS announcements.

Day 1 at the Amazon re:Invent conference - AWS RoboMaker, Fully Managed SFTP Service for Amazon S3, and much more!
Amazon introduces Firecracker: Lightweight Virtualization for Running Multi-Tenant Container Workloads
AWS introduces 'AWS DataSync' for automated, simplified, and accelerated data transfer
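The custom-runtime contract behind the bootstrap executable is a simple loop: fetch the next invocation from the Runtime API, run the handler, post back the result. A sketch with the HTTP calls injected as plain functions so it runs locally; the endpoint paths follow the documented 2018-06-01 Runtime API, but the helper names and the fake calls are our own:

```python
import json

API = "2018-06-01/runtime/invocation"

def next_url(host: str) -> str:
    # GET here blocks until an invocation is available.
    return f"http://{host}/{API}/next"

def response_url(host: str, request_id: str) -> str:
    # POST the handler's result here to complete the invocation.
    return f"http://{host}/{API}/{request_id}/response"

def run_once(get_next, post_response, handler) -> None:
    request_id, event = get_next()                 # GET .../invocation/next
    result = handler(event)                        # run the user's function
    post_response(request_id, json.dumps(result))  # POST .../response

# Local simulation of one loop iteration with fake HTTP calls.
sent = {}
run_once(
    get_next=lambda: ("req-1", {"name": "runtime"}),
    post_response=lambda rid, body: sent.update({rid: body}),
    handler=lambda event: {"greeting": f"hi {event['name']}"},
)
print(sent)  # {'req-1': '{"greeting": "hi runtime"}'}
```

A real bootstrap wraps `run_once` in an infinite loop and reads the Runtime API host from the `AWS_LAMBDA_RUNTIME_API` environment variable; any language that can speak HTTP can implement this, which is the whole point of the feature.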


Red Hat acquires Israeli multi-cloud storage software company, NooBaa

Savia Lobo
29 Nov 2018
3 min read
On Tuesday, Red Hat announced that it has acquired an Israel-based multi-cloud storage software company NooBaa. This is Red Hat’s first acquisition since it was acquired by IBM in October. However, this acquisition is not subject to IBM’s approval as Red Hat's acquisition process by IBM stands incomplete. Early this month, Red Hat CEO Jim Whitehurst said, “Until the transaction closes, it is business as usual. For example, equity practices will continue until the close of the transaction, Red Hat M&A will continue as normal, and our product roadmap remains the same." NooBaa, founded in 2013, addresses the need for greater visibility and control over unstructured data spread throughout the distributed environments. The company also developed a data platform designed to serve as an abstraction layer over existing storage infrastructure. This abstraction not only enables data portability from one cloud to another but allows users to manage data stored in multiple locations as a single, coherent data set that an application can interact with. NooBaa's technologies complement and enhance Red Hat's portfolio of hybrid cloud technologies, including Red Hat OpenShift Container Platform, Red Hat OpenShift Container Storage and Red Hat Ceph Storage. Together, these technologies are designed to provide users with a set of powerful, consistent and cohesive capabilities for managing application, compute, storage and data resources across public and private infrastructures. Ranga Rangachari, VP and GM of Red Hat's storage and hyper-converged infrastructure said, “Data portability is a key imperative for organizations building and deploying cloud-native applications across private and multiple clouds. NooBaa’s technologies will augment our portfolio and strengthen our ability to meet the needs of developers in today’s hybrid and multi-cloud world. 
We are thrilled to welcome a technical team of nine to the Red Hat family as we work together to further solidify Red Hat as a leading provider of open hybrid cloud technologies." He further added, "By abstracting the underlying cloud storage infrastructure for developers, NooBaa provides a common set of interfaces and advanced data services for cloud-native applications. Developers can also read and write to a single consistent endpoint without worrying about the underlying storage infrastructure."

To know more about this news in detail, head over to Red Hat's official announcement.

Red Hat announces full support for Clang/LLVM, Go, and Rust
Red Hat releases Red Hat Enterprise Linux 8 beta; deprecates Btrfs filesystem
4 reasons IBM bought Red Hat for $34 billion

Amazon re:Invent announces Amazon DynamoDB Transactions, CloudWatch Logs Insights and cloud security conference, Amazon re:Inforce 2019

Melisha Dsouza
28 Nov 2018
4 min read
Day 2 of the Amazon AWS re:Invent 2018 conference kicked off with just as much enthusiasm as day one. With more announcements and releases scheduled for the day, the conference is proving to be a real treat for AWS developers. Alongside announcements like Amazon Comprehend Medical and new container products in the AWS Marketplace, Amazon also announced Amazon DynamoDB Transactions and Amazon CloudWatch Logs Insights. We will also take a look at Amazon re:Inforce 2019, a new conference dedicated solely to cloud security.

Amazon DynamoDB Transactions

Customers have used Amazon DynamoDB for multiple use cases, from building microservices and mobile backends to implementing gaming and Internet of Things (IoT) solutions. Amazon DynamoDB is a non-relational database delivering reliable performance at any scale. It offers built-in security, backup and restore, and in-memory caching, and is a fully managed, multi-region, multi-master database that provides consistent single-digit millisecond latency.

With native support for transactions, DynamoDB now helps developers easily implement business logic that requires multiple, all-or-nothing operations across one or more tables. With DynamoDB transactions, users can take advantage of atomicity, consistency, isolation, and durability (ACID) properties across one or more tables within a single AWS account and region. It is the only non-relational database that supports transactions across multiple partitions and tables.

Two new DynamoDB operations have been introduced for handling transactions:

TransactWriteItems, a batch operation that contains a write set, with one or more PutItem, UpdateItem, and DeleteItem operations. It can optionally check for prerequisite conditions that need to be satisfied before making updates.

TransactGetItems, a batch operation that contains a read set, with one or more GetItem operations.
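As a sketch of how these two operations compose, the snippet below builds a TransactWriteItems request and submits it with boto3. The table names, attribute names, and business logic are invented for illustration, not taken from the announcement:

```python
def build_transact_items(orders_table, customers_table, order_id, customer_id):
    """Build the TransactItems list: a conditional Put plus an Update.

    Table and attribute names here are illustrative placeholders.
    """
    return [
        {
            "Put": {
                "TableName": orders_table,
                "Item": {"order_id": {"S": order_id}, "status": {"S": "PLACED"}},
                # Prerequisite condition: fail the whole transaction
                # if an order with this id already exists.
                "ConditionExpression": "attribute_not_exists(order_id)",
            }
        },
        {
            "Update": {
                "TableName": customers_table,
                "Key": {"customer_id": {"S": customer_id}},
                "UpdateExpression": "ADD order_count :one",
                "ExpressionAttributeValues": {":one": {"N": "1"}},
            }
        },
    ]


def place_order(order_id, customer_id):
    """Run both writes atomically; either both succeed or neither is applied."""
    import boto3  # deferred so the sketch can be read without AWS credentials

    client = boto3.client("dynamodb")
    client.transact_write_items(
        TransactItems=build_transact_items("orders", "customers",
                                           order_id, customer_id)
    )
```

If any condition fails or any item is under contention, the entire transaction is rejected, which is exactly the all-or-nothing behavior described above.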
If a TransactGetItems request is issued on an item that is part of an active write transaction, the read transaction is canceled.

Amazon CloudWatch Logs Insights

Many AWS services create logs. Data points, patterns, trends, and insights embedded within these logs can be used to understand how applications and AWS resources are behaving, identify room for improvement, and address operational issues. However, raw logs are huge, making analysis difficult. Considering that individual AWS customers routinely generate 100 terabytes or more of log files each day, these operations become complex and time-consuming.

Enter CloudWatch Logs Insights, designed to work at cloud scale with no setup or maintenance required. It churns through massive logs in seconds and provides users with fast, interactive queries and visualizations. CloudWatch Logs Insights includes a sophisticated ad-hoc query language, with commands to perform complicated operations efficiently. It is a fully managed service, can handle any log format, and auto-discovers fields from JSON logs. What's more, users can visualize query results using line and stacked area charts, and add queries to a CloudWatch dashboard.

AWS re:Inforce 2019

In addition to these releases, Amazon announced that AWS is launching a conference dedicated to cloud security, called AWS re:Inforce, for the very first time. The inaugural AWS re:Inforce, a hands-on gathering of like-minded security professionals, will take place in Boston, MA on June 25th and 26th, 2019 at the Boston Exhibit and Conference Center. Here is what the AWS re:Inforce 2019 conference is expected to cover:

Deep dives into the latest approaches to security best practices and risk management utilizing AWS services, features, and tools.

Direct access for customers to the latest security research and trends from subject matter experts, along with the opportunity to participate in hands-on exercises with AWS services.
There are multiple learning tracks to be covered over this two-day conference, including a technical track and a business enablement track, designed to meet the needs of security and compliance professionals, from C-suite executives to security engineers, developers, risk and compliance officers, and more. The conference will also feature sessions on Identity & Access Management, Infrastructure Security, Detective Controls, Governance, Risk & Compliance, Data Protection & Privacy, Configuration & Vulnerability Management, and much more.

Head over to What's new with AWS to stay updated on upcoming AWS announcements.

Day 1 at the Amazon re:Invent conference – AWS RoboMaker, Fully Managed SFTP Service for Amazon S3, and much more!
Amazon introduces Firecracker: Lightweight Virtualization for Running Multi-Tenant Container Workloads
AWS introduces 'AWS DataSync' for automated, simplified, and accelerated data transfer
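Returning to CloudWatch Logs Insights: the ad-hoc query language described above can also be driven programmatically through the CloudWatch Logs API. A minimal boto3 sketch follows; the log group name and the query itself are assumptions for illustration, not from the announcement:

```python
import time

# Example Logs Insights query: the language supports commands such as
# fields, filter, stats, sort, and limit.
QUERY = """
fields @timestamp, @message
| filter @message like /ERROR/
| sort @timestamp desc
| limit 20
"""


def recent_errors(log_group="/aws/lambda/my-function", minutes=60):
    """Run the query over the last `minutes` of a log group and poll for results."""
    import boto3  # deferred so the sketch can be read without AWS credentials

    logs = boto3.client("logs")
    end = int(time.time())
    query_id = logs.start_query(
        logGroupName=log_group,
        startTime=end - minutes * 60,
        endTime=end,
        queryString=QUERY,
    )["queryId"]
    while True:
        result = logs.get_query_results(queryId=query_id)
        if result["status"] in ("Complete", "Failed", "Cancelled"):
            return result["results"]
        time.sleep(1)  # queries run asynchronously; poll until finished
```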

Amazon introduces Firecracker: Lightweight Virtualization for Running Multi-Tenant Container Workloads

Melisha Dsouza
27 Nov 2018
3 min read
The Amazon re:Invent 2018 conference has seen a surge of new announcements and releases. The five-day event, which commenced in Las Vegas yesterday, has already brought some exciting developments in AWS, like AWS RoboMaker, AWS Transfer for SFTP – a fully managed SFTP service for Amazon S3, EC2 A1 instances powered by Arm-based AWS Graviton processors, an improved AWS Snowball Edge, and much more. In this article, we will look at the latest release: Firecracker, a new virtualization technology and open source project for running multi-tenant container workloads.

Firecracker is open sourced under Apache 2.0 and enables service owners to operate secure, multi-tenant, container-based services. It combines the speed, resource efficiency, and performance enabled by containers with the security and isolation offered by traditional VMs. Firecracker implements a virtual machine monitor (VMM) based on Linux's Kernel-based Virtual Machine (KVM). Users can create and manage microVMs with any combination of vCPU and memory through a RESTful API. It offers fast startup times, a reduced memory footprint for each microVM, and a trusted sandboxed environment for each container.

Features of Firecracker

Firecracker uses multiple levels of isolation and protection, making it secure by design. The security model includes a very simple virtualized device model to minimize the attack surface, a process jail, and static linking.

It delivers high performance, allowing users to launch a microVM in as little as 125 ms.

It has low overhead and consumes about 5 MiB of memory per microVM. This means a user can run thousands of secure VMs with widely varying vCPU and memory configurations on the same instance.

Firecracker is written in Rust, which guarantees thread safety and prevents many types of buffer overrun errors that can lead to security vulnerabilities.
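As an illustration of the RESTful API mentioned above, here is a minimal Python sketch that configures and boots a microVM over Firecracker's Unix API socket. The endpoint names (/boot-source, /drives, /actions) follow Firecracker's published API; the socket path, kernel image, and rootfs file names are assumptions for the example:

```python
import json


def build_api_request(method, path, body=None):
    """Serialize one HTTP/1.1 request for Firecracker's Unix-socket REST API."""
    payload = json.dumps(body).encode() if body is not None else b""
    head = (
        f"{method} {path} HTTP/1.1\r\n"
        "Host: localhost\r\n"
        "Content-Type: application/json\r\n"
        f"Content-Length: {len(payload)}\r\n\r\n"
    )
    return head.encode() + payload


def start_microvm(socket_path="/tmp/firecracker.sock"):
    """Configure a kernel and root drive, then boot the microVM."""
    import socket  # deferred so the sketch can be inspected without a running VMM

    calls = [
        build_api_request("PUT", "/boot-source",
                          {"kernel_image_path": "vmlinux",
                           "boot_args": "console=ttyS0 reboot=k panic=1"}),
        build_api_request("PUT", "/drives/rootfs",
                          {"drive_id": "rootfs", "path_on_host": "rootfs.ext4",
                           "is_root_device": True, "is_read_only": False}),
        build_api_request("PUT", "/actions", {"action_type": "InstanceStart"}),
    ]
    for request in calls:
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as conn:
            conn.connect(socket_path)
            conn.sendall(request)
            conn.recv(4096)  # read (and discard) the API response
```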
The AWS community has shown a positive response to this release: https://twitter.com/abbyfuller/status/1067285030035046400

AWS Lambda uses Firecracker for provisioning and running the secure sandboxes that execute customer functions. These sandboxes can be quickly provisioned with a minimal footprint, enabling performance along with security. AWS Fargate tasks also execute on Firecracker microVMs, which allows the Fargate runtime layer to run faster and more efficiently on EC2 bare metal instances.

To learn more, head over to the Firecracker page. You can also read more on Jeff Barr's blog and the AWS Open Source blog.

AWS re:Invent 2018: Amazon announces a variety of AWS IoT releases
Amazon rolls out AWS Amplify Console, a deployment and hosting service for mobile web apps, at re:Invent 2018
Amazon re:Invent 2018: AWS Snowball Edge comes with a GPU option and more computing power


AWS IoT Greengrass extends functionality with third-party connectors, enhanced security, and more

Savia Lobo
27 Nov 2018
3 min read
At AWS re:Invent 2018, Amazon announced new features for AWS IoT Greengrass. These latest features extend the capabilities of AWS IoT Greengrass and its core configuration options, and include:

connectors to third-party applications and AWS services
hardware root of trust private key storage
isolation and permission settings

New features of AWS IoT Greengrass

AWS IoT Greengrass connectors

With the new AWS IoT Greengrass connectors, users can easily build complex workflows on AWS IoT Greengrass without having to understand device protocols, manage credentials, or interact with external APIs. These connectors allow users to connect to third-party applications, on-premises software, and AWS services without writing code.

Re-use common business logic

Users can now re-use common business logic from one AWS IoT Greengrass device to another through the ability to discover, import, configure, and deploy applications and services at the edge. They can also use AWS Secrets Manager at the edge to protect keys and credentials in the cloud and at the edge. Secrets can be attached and deployed from AWS Secrets Manager to groups via the AWS IoT Greengrass console.

Enhanced security

AWS IoT Greengrass now provides enhanced security with hardware-root-of-trust private key storage on hardware secure elements, including Trusted Platform Modules (TPMs) and Hardware Security Modules (HSMs). Storing private keys on a hardware secure element adds hardware-root-of-trust-level security to existing AWS IoT Greengrass security features, which include X.509 certificates for TLS mutual authentication and encryption of data both in transit and at rest. Users can also use the hardware secure element to protect secrets deployed to the AWS IoT Greengrass device using the AWS IoT Greengrass Secrets Manager.
Deploy AWS IoT Greengrass to another container environment

With the new configuration option, users can deploy AWS IoT Greengrass to another container environment and directly access device resources such as Bluetooth Low Energy (BLE) devices or low-power edge devices like sensors. They can even run AWS IoT Greengrass on devices without elevated privileges and without the AWS IoT Greengrass container, at a group or individual AWS Lambda level. Users can also change the identity associated with an individual AWS Lambda, providing more granular control over permissions.

To know more about the other updated features, head over to the AWS IoT Greengrass website.

AWS re:Invent 2018: Amazon announces a variety of AWS IoT releases
Amazon rolls out AWS Amplify Console, a deployment and hosting service for mobile web apps, at re:Invent 2018
Amazon re:Invent 2018: AWS Snowball Edge comes with a GPU option and more computing power


AWS introduces ‘AWS DataSync’ for automated, simplified, and accelerated data transfer 

Natasha Mathur
27 Nov 2018
3 min read
The AWS team introduced AWS DataSync, an online data transfer service for automating data movement, yesterday. AWS DataSync transfers data from on-premises storage to Amazon S3 or Amazon Elastic File System (Amazon EFS), and vice versa. Let's have a look at what's new in AWS DataSync.

Key functionalities

Move data 10x faster: AWS DataSync uses a purpose-built data transfer protocol along with a parallel, multi-threaded architecture that can run 10 times as fast as open source data transfer tools. This speeds up migrations as well as recurring data processing workflows for analytics, machine learning, and data protection.

Per-gigabyte fee: It is a managed service with a per-gigabyte fee: you pay only for the amount of data that you transfer. Other than that, there are no upfront costs and no minimum fees.

DataSync Agent: The AWS DataSync Agent is a crucial part of the service. It connects your existing storage to the in-cloud service to automate, scale, and validate transfers, so you don't have to write scripts or modify your applications.

Easy setup: It is very easy to set up and use (console and CLI access are available). All you need to do is deploy the DataSync Agent on premises, connect it to your file systems using the Network File System (NFS) protocol, select Amazon EFS or S3 as your AWS storage, and start moving data.

Secure data transfer: AWS DataSync offers secure data transfer over the Internet or AWS Direct Connect, with automatic encryption and data integrity validation. This minimizes the in-house development and management needed for fast and secure transfers.

Simplify and automate data transfer: With AWS DataSync, you can perform one-time data migrations, transfer on-premises data for timely in-cloud analysis, and automate replication to AWS for data protection and recovery.
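A rough boto3 sketch of the flow described above: register the NFS source and S3 destination as locations, create a task between them, and start an execution. All hostnames, paths, and ARNs below are placeholders, not values from the article:

```python
def nfs_location_params(server_hostname, subdirectory, agent_arn):
    """Parameters for DataSync's CreateLocationNfs call."""
    return {
        "ServerHostname": server_hostname,
        "Subdirectory": subdirectory,
        # The on-premises DataSync Agent that will read from this NFS share.
        "OnPremConfig": {"AgentArns": [agent_arn]},
    }


def run_transfer(nfs_params, s3_bucket_arn, s3_role_arn):
    """Create source/destination locations, a task, and kick off one execution."""
    import boto3  # deferred so the sketch can be read without AWS credentials

    datasync = boto3.client("datasync")
    source = datasync.create_location_nfs(**nfs_params)
    destination = datasync.create_location_s3(
        S3BucketArn=s3_bucket_arn,
        S3Config={"BucketAccessRoleArn": s3_role_arn},
    )
    task = datasync.create_task(
        SourceLocationArn=source["LocationArn"],
        DestinationLocationArn=destination["LocationArn"],
    )
    return datasync.start_task_execution(TaskArn=task["TaskArn"])
```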
AWS DataSync is available now in the US East, US West, Europe, and Asia Pacific regions. For more information, check out the official AWS DataSync blog post.

Amazon re:Invent 2018: AWS Key Management Service (KMS) Custom Key Store
Amazon rolls out AWS Amplify Console, a deployment and hosting service for mobile web apps, at re:Invent 2018
Day 1 at the Amazon re:Invent conference – AWS RoboMaker, Fully Managed SFTP Service for Amazon S3, and much more!

Day 1 at the Amazon re: Invent conference - AWS RoboMaker, Fully Managed SFTP Service for Amazon S3, and much more!

Melisha Dsouza
27 Nov 2018
6 min read
Looks like Christmas has come early this year for AWS developers! Following Microsoft's Surface devices and its own wide range of Alexa products, Amazon has once again made a series of big releases at the Amazon re:Invent 2018 conference. These announcements include AWS RoboMaker to help developers test and deploy robotics applications, AWS Transfer for SFTP – a fully managed SFTP service for Amazon S3, EC2 A1 instances powered by Arm-based AWS Graviton processors, Amazon EC2 C5n instances featuring 100 Gbps of network bandwidth, and much more! Let's take a look at what developers can expect from these releases.

#1 AWS RoboMaker helps developers develop, test, and deploy robotics applications at scale

AWS RoboMaker allows developers to develop, simulate, test, and deploy intelligent robotics applications at scale. Code can be developed inside a cloud-based development environment and tested in a Gazebo simulation. Finally, developers can deploy the finished code to a fleet of one or more robots. RoboMaker uses Robot Operating System (ROS), an open-source robotics software framework, with connectivity to cloud services. The service suite includes AWS machine learning, monitoring, and analytics services that enable a robot to stream data, navigate, communicate, comprehend, and learn. RoboMaker can work with robots of many different shapes and sizes running in many different physical environments. After a developer designs and codes an algorithm for the robot, they can also monitor how the algorithm performs in different conditions or environments. You can check out an interesting simulation of a robot using RoboMaker at the AWS site. To learn more about ROS, read The Open Source Robot Operating System (ROS) and AWS RoboMaker.
#2 AWS Transfer for SFTP – a fully managed SFTP service for Amazon S3

AWS Transfer for SFTP is a fully managed service that enables the direct transfer of files to and from Amazon S3 using the Secure File Transfer Protocol (SFTP). Users just have to create a server, set up user accounts, and associate the server with one or more Amazon Simple Storage Service (S3) buckets. AWS allows users to migrate their file transfer workflows to AWS Transfer for SFTP by integrating with existing authentication systems and providing DNS routing with Amazon Route 53. Along with other AWS services, a customer's data in S3 can be used for processing, analytics, machine learning, and archiving. Along with control over user identity, permissions, and keys, users have full access to the underlying S3 buckets and can make use of many S3 features, including lifecycle policies, multiple storage classes, several options for server-side encryption, and versioning. On the outbound side, users can generate reports, documents, manifests, and custom software builds using other AWS services, then store them in S3 for easy, controlled distribution to customers and partners.

#3 EC2 instances (A1) powered by Arm-based AWS Graviton processors

Amazon has launched EC2 instances powered by Arm-based AWS Graviton processors, built around Arm cores. The A1 instances are optimized for performance and cost, and are a great fit for scale-out workloads where the load can be shared across a group of smaller instances. This includes containerized microservices, web servers, development environments, and caching fleets. AWS Graviton processors are custom designed by AWS and deliver targeted power, performance, and cost optimizations. A1 instances are built on the AWS Nitro System, which maximizes resource efficiency for customers while still supporting familiar AWS and Amazon EC2 instance capabilities such as EBS, networking, and AMIs.
#4 Amazon EC2 C5n instances featuring 100 Gbps of network bandwidth

AWS announced the availability of C5n instances that can utilize up to 100 Gbps of network bandwidth, providing significantly higher network performance across all instance sizes, ranging from 25 Gbps of peak bandwidth on smaller instance sizes to 100 Gbps on the largest. They are powered by 3.0 GHz Intel Xeon Scalable processors (Skylake) and support the Intel Advanced Vector Extensions 512 (AVX-512) instruction set. These instances also feature a 33% higher memory footprint compared to C5 instances and are ideal for applications that can take advantage of improved network throughput and packet rate performance. Based on the next-generation AWS Nitro System, C5n instances make 100 Gbps networking available to network-bound workloads. Workloads on C5n instances take advantage of the security, scalability, and reliability of Amazon's Virtual Private Cloud (VPC). The improved network performance will accelerate data transfer to and from S3, reducing the data ingestion wait time for applications and speeding up delivery of results.

#5 Introducing AWS Global Accelerator

AWS Global Accelerator is a network layer service that enables organizations to seamlessly route traffic to multiple regions while improving availability and performance for their end users. It supports both TCP and UDP protocols, and performs health checks of a user's target endpoints, routing traffic away from unhealthy applications. AWS Global Accelerator uses AWS' global network to direct internet traffic from an organization's users to its applications running in AWS regions, based on a user's geographic location, application health, and configurable routing policies. You can head over to the AWS blog to get an in-depth view of how this service works.
#6 Amazon's 'Machine Learning University'

In addition to these announcements at re:Invent, Amazon also published a blog post introducing its 'Machine Learning University', announcing that the same machine learning courses used to train engineers at Amazon can now be taken by all developers through AWS. These courses, available as part of a new AWS Training and Certification Machine Learning offering, will help organizations accelerate the growth of machine learning skills amongst their employees. With more than 30 self-service, self-paced digital courses and over 45 hours of courses, videos, and labs, developers can rest assured that ML fundamentals, real-world examples, and labs will help them explore the domain. What's more, the digital courses are available at no charge; developers pay only for the services used in labs and exams during their training.

This announcement came right after Amazon Echo Auto was launched at Amazon's hardware event. In what Amazon describes as bringing 'Alexa to vehicles', the Amazon Echo Auto is a small dongle that plugs into the car's infotainment system, giving drivers the smart assistant and voice control for hands-free interactions. Users can ask for things like traffic reports, add products to shopping lists, and play music through Amazon's entertainment system.

Head over to What's new with AWS to stay updated on upcoming AWS announcements.

Amazon re:Invent 2018: AWS Snowball Edge comes with a GPU option and more computing power
Amazon re:Invent 2018: AWS Key Management Service (KMS) Custom Key Store
Amazon announces Corretto, an open source, production-ready distribution of OpenJDK backed by AWS


Introducing TigerGraph Cloud: A database as a service in the Cloud with AI and Machine Learning support

Savia Lobo
27 Nov 2018
3 min read
Today, TigerGraph, the world's fastest graph analytics platform for the enterprise, introduced TigerGraph Cloud, which it bills as the simplest, most robust, and most cost-effective way to run scalable graph analytics in the cloud. With TigerGraph Cloud, users can easily get their TigerGraph services up and running. They can also tap into TigerGraph's library of customizable graph algorithms to support key use cases, including AI and machine learning. It provides data scientists, business analysts, and developers with a cloud-based service for applying SQL-like queries for faster and deeper insights into data, and enables organizations to tap into the power of graph analytics within hours.

Features of TigerGraph Cloud

Simplicity: It forgoes the need to set up, configure, or manage servers, schedule backups or monitoring, or look for security vulnerabilities.

Robustness: TigerGraph Cloud relies on the same framework, providing point-in-time recovery, powerful configuration options, and the stability that has supported TigerGraph's own workloads over several years.

Application starter kits: It offers out-of-the-box starter kits for quicker application development for cases such as anti-fraud, anti-money laundering (AML), Customer 360, enterprise graph analytics, and more. These starter kits include graph schemas, sample data, preloaded queries, and a library of customizable graph algorithms (PageRank, Shortest Path, Community Detection, and others). TigerGraph makes it easy for organizations to tailor such algorithms for their own use cases.

Flexibility and elastic pricing: Users pay for exactly the hours they use and are billed on a monthly basis. They can spin up a cluster for a few hours at minimal cost, or run larger, mission-critical workloads with predictable pricing. The new cloud offering will be available for production on AWS, with availability on other clouds forthcoming.
Yu Xu, founder and CEO of TigerGraph, said, "TigerGraph Cloud addresses these needs, and enables anyone and everyone to take advantage of scalable graph analytics without cloud vendor lock-in. Organizations can tap into graph analytics to power explainable AI - AI whose actions can be easily understood by humans - a must-have in regulated industries. TigerGraph Cloud further provides users with access to our robust graph algorithm library to support PageRank, Community Detection and other queries for massive business advantage."

Philip Howard, research director at Bloor Research, said, "What is interesting about TigerGraph Cloud is not just that it provides scalable graph analytics, but that it does so without cloud vendor lock-in, enabling companies to start immediately on their graph analytics journey."

According to TigerGraph, "Compared to TigerGraph Cloud, other graph cloud solutions are up to 116x slower on two hop queries, while TigerGraph Cloud uses up to 9x less storage. This translates into direct savings for you."

TigerGraph also announces new marquee customers

TigerGraph also announced the addition of new customers, including Intuit, Zillow, and PingAn Technology, among other leading enterprises in cybersecurity, pharmaceuticals, and banking. To know more about TigerGraph Cloud in detail, visit its official website.

MongoDB switches to Server Side Public License (SSPL) to prevent cloud providers from exploiting its open source code
Google Cloud Storage Security gets an upgrade with Bucket Lock, Cloud KMS keys and more
OpenStack Foundation to tackle open source infrastructure problems, will conduct conferences under the name 'Open Infrastructure Summit'