Tech News - Cloud Computing

175 Articles

Amazon re:Invent 2018: AWS Key Management Service (KMS) Custom Key Store

Sugandha Lahoti
27 Nov 2018
3 min read
At the ongoing Amazon re:Invent 2018, Amazon announced that AWS Key Management Service (KMS) has been integrated with AWS CloudHSM. Users now have the option to create their own KMS custom key store. They can generate, store, and use their KMS keys in hardware security modules (HSMs) through KMS.

The KMS custom key store satisfies compliance obligations that would otherwise require the use of on-premises hardware security modules (HSMs), and it supports AWS services and encryption toolkits that are integrated with KMS.

Previously, AWS CloudHSM was not widely integrated with other AWS managed services. So, if someone required direct control of their HSMs but still wanted to use and store regulated data in AWS managed services, they had to choose between changing those requirements, not using a given AWS service, or building their own solution. With a custom key store, users can configure their own CloudHSM cluster and authorize KMS to use it as a dedicated key store for their keys rather than the default KMS key store. When a KMS CMK is used in a custom key store, the cryptographic operations under that key are performed exclusively in the developer's own CloudHSM cluster.

Master keys stored in a custom key store are managed in the same way as any other master key in KMS and can be used by any AWS service that encrypts data and supports KMS customer managed CMKs. The use of a custom key store does not affect KMS charges for storing and using a CMK. However, it does come with an increased cost and a potential impact on performance and availability.

Things to consider before using a custom key store:
- Each custom key store requires the CloudHSM cluster to contain at least two HSMs. CloudHSM charges vary by region, and the pricing comes to at least $1,000 per month per HSM if each device is permanently provisioned.
- The number of HSMs determines the rate at which keys can be used. Users should keep in mind the intended usage patterns for their keys and ensure appropriate provisioning of HSM resources.
- The number of HSMs and the use of availability zones (AZs) impacts the availability of a cluster.
- Configuration errors may result in a custom key store being disconnected, or key material being deleted. Users need to manually set up HSM clusters, configure HSM users, and potentially restore HSMs from backup. These are security-sensitive tasks for which users should have the appropriate resources and organizational controls in place.

Read more about KMS custom key stores on Amazon.

How Amazon is reinventing Speech Recognition and Machine Translation with AI
AWS updates the face detection, analysis and recognition capabilities in Amazon Rekognition
Introducing Automatic Dashboards by Amazon CloudWatch for monitoring all AWS Resources
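As a concrete sketch of the workflow described above, the AWS CLI commands below wire an existing CloudHSM cluster up as a custom key store and then create a CMK whose key material lives in that cluster. The names, IDs, file path, and password are illustrative placeholders, not values from the announcement; consult the AWS KMS documentation for exact parameters.

# Create a custom key store backed by an existing CloudHSM cluster
# (names, IDs, file path, and password are placeholders)
aws kms create-custom-key-store \
    --custom-key-store-name my-key-store \
    --cloud-hsm-cluster-id cluster-1a23b4cdefg \
    --trust-anchor-certificate file://customerCA.crt \
    --key-store-password kmsPassword1

# Connect the key store, then create a CMK whose key material is
# generated and stored in the CloudHSM cluster, not the default store
aws kms connect-custom-key-store --custom-key-store-id cks-1234567890abcdef0
aws kms create-key --origin AWS_CLOUDHSM \
    --custom-key-store-id cks-1234567890abcdef0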


Introducing Automatic Dashboards by Amazon CloudWatch for monitoring all AWS Resources

Savia Lobo
26 Nov 2018
1 min read
Last week, Amazon CloudWatch, a monitoring and management service, introduced Automatic Dashboards for monitoring all AWS resources. These Automatic Dashboards are available in AWS public regions at no additional charge.

Through CloudWatch Automatic Dashboards, users now get aggregated views of the health and performance of all their AWS resources. This allows users to quickly monitor, explore account- and resource-based views of metrics and alarms, and easily drill down to understand the root cause of performance issues. Once a cause is identified, users can act quickly by going directly to the affected AWS resource.

Features of these Automatic Dashboards:
- They are pre-built with AWS services' recommended best practices
- They remain resource aware
- They are dynamically updated to reflect the latest state of important performance metrics
- Users can filter and troubleshoot to a specific view, without additional code, to reflect the latest state of their AWS resources

To know more about Automatic Dashboards in detail, visit the official website.

AWS updates the face detection, analysis and recognition capabilities in Amazon Rekognition
Amazon announces Corretto, an open source, production-ready distribution of OpenJDK backed by AWS
AWS announces more flexibility in its Certification Exams, drops its exam prerequisites


Introducing Azure DevOps Server 2019 RC1 with better UI, Azure SQL support and more!

Melisha Dsouza
22 Nov 2018
3 min read
On 19th November, Microsoft announced the first release candidate (RC) of Azure DevOps Server 2019. Azure DevOps Server 2019 includes the new, fast, and clean Azure DevOps user interface and delivers the codebase of Microsoft Azure DevOps while being optimized for customers who prefer to self-host.

Features of Azure DevOps Server 2019

In addition to existing SQL Server support, Azure DevOps Server now also supports Azure SQL. This means customers can self-host Azure DevOps in their own datacenter using an on-premises SQL Server, or take advantage of Azure SQL capabilities and performance, like backup features and scaling options, while reducing the administrative overhead of running the service by self-hosting Azure DevOps in the cloud. Customers can also use the globally available Microsoft-hosted service to take advantage of automatic updates and automatic scaling of Azure DevOps Server.

Azure DevOps Server 2019 includes a new release management interface. Customers can easily understand how their deployment is taking place: it gives them better visibility of which bits are deployed to which environments and why. Customers can also mix and match agents self-hosted on-premises and in any cloud on Windows, Mac, or Linux, while easily deploying to IaaS or PaaS in Azure as well as on-premises infrastructure.

A new navigation and improved user experience in Azure DevOps

A newly introduced feature is the 'my work flyout'. This feature was developed after feedback that when customers are in one part of the product and want information from another part, they didn't want to lose the context of their current task. With this new feature, customers can access the flyout from anywhere in the product, giving them a quick glance at crucial information like work items, pull requests, and all favorites.

For teams that use pull requests (PRs) and branch policies, there may be occasions when members need to override and bypass those policies. So that teams can verify those policy overrides are being used in the right situations, a new notification filter has been added that allows users and teams to receive email alerts any time a policy is bypassed.

The Tests tab now gives rich, in-context test information for Pipelines. It provides an in-progress test view, a full-page debugging experience, in-context test history, reporting of aborted test executions, and a run-level summary.

The UI has undergone significant testing, and the team suggests that for self-hosting customers the new navigation model may require updates to internal documentation and training. A direct upgrade to Azure DevOps Server is supported from Team Foundation Server 2012 and newer. Previous versions of Team Foundation Server will stay on the old user interface.

Check the Azure DevOps Server requirements and compatibility page to understand the dependencies required for a self-hosted installation. Head over to Microsoft's blog for more information on this news. You can download Azure DevOps Server 2019 RC1 and check out the release notes for all the features and information for this release.

A multi-factor authentication outage strikes Microsoft Office 365 and Azure users
Microsoft announces container support for Azure Cognitive Services to build intelligent applications that span the cloud and the edge
Microsoft Azure reportedly chooses Xilinx chips over Intel Altera for AI co-processors, says Bloomberg report


Autodesk acquires PlanGrid for $875 million, to digitize and automate construction workflows

Savia Lobo
21 Nov 2018
3 min read
Yesterday, Autodesk, a software corporation serving the architecture, engineering, construction, and manufacturing industries, announced that it has acquired the leading provider of construction productivity software, PlanGrid, for $875 million net of cash. The transaction is expected to close during Autodesk's fourth quarter of fiscal 2019, ending January 31, 2019. With this acquisition of the San Francisco based startup, Autodesk will be able to offer a more comprehensive, cloud-based construction platform.

PlanGrid software, launched in 2011, gives builders real-time access to project plans, punch lists, project tasks, progress photos, daily field reports, submittals, and more. Autodesk's CEO, Andrew Anagnost, said, "There is a huge opportunity to streamline all aspects of construction through digitization and automation. The acquisition of PlanGrid will accelerate our efforts to improve construction workflows for every stakeholder in the construction process."

According to TechCrunch, "The company, which is a 2012 graduate of Y Combinator, raised just $69 million, so this appears to be a healthy exit for them." In a 2015 interview at TechCrunch Disrupt in San Francisco, CEO and co-founder Tracy Young had said the industry was ripe for change: "The heart of construction is just a lot of construction blueprints information. It's all tracked on paper right now and they're constantly, constantly changing." When Young started PlanGrid in 2011, her idea was to move all that paper to the cloud and display it on an iPad.

According to Young, "At PlanGrid, we have a relentless focus on empowering construction workers to build as productively as possible. One of the first steps to improving construction productivity is the adoption of digital workflows with centralized data. PlanGrid has excelled at building beautiful, simple field collaboration software, while Autodesk has focused on connecting design to construction. Together, we can drive greater productivity and predictability on the job site."

Jim Lynch, Construction General Manager at Autodesk, said, "We'll integrate workflows between PlanGrid's software and both Autodesk Revit software and the Autodesk BIM 360 construction management platform, for a seamless exchange of information between all project members."

Autodesk and PlanGrid have developed complementary construction integration ecosystems through which customers can connect other software applications. The acquisition is expected to expand the integration partner ecosystem, giving customers a customizable platform to test and scale new ways of working. To know more about this news in detail, visit Autodesk's official press release.

IBM acquired Red Hat for $34 billion making it the biggest open-source acquisition ever
Could Apple's latest acquisition yesterday of an AR lens maker signal its big plans for its secret Apple car?
Plotly releases Dash DAQ: a UI component library for data acquisition in Python


Oracle’s Thomas Kurian to replace Diane Greene as Google Cloud CEO; is this Google’s big enterprise Cloud market move?

Melisha Dsouza
19 Nov 2018
4 min read
On 16th November, the CEO of Google Cloud, Diane Greene, announced in a blog post that she will be stepping down from her post after 3 years of running Google Cloud. The position will be taken up by Thomas Kurian, who worked at Oracle for the past 22 years. Kurian will join Google Cloud on November 26th and transition into the Google Cloud leadership role in early 2019, while Greene continues as CEO until the end of January 2019. After that, she will continue as a Director on the Alphabet board.

Google Cloud led by Diane Greene

Diane Greene has been leading Google's cloud computing division since early 2016. She has been considered Google's best bet on making the cloud its second largest source of revenue while competing with Amazon and Microsoft in providing computing infrastructure for businesses. However, there are speculations that this decision indicates the project hasn't gone as well as planned. Although the cloud division has seen notable advances under Greene's leadership, Amazon and Microsoft have stayed a step ahead in their cloud businesses. According to Canalys, Amazon has roughly a third of the global cloud market, which contributes more to revenue than its sales on Amazon.com. Microsoft has roughly half of Amazon's market share, and currently owns 8 percent of the global market share of cloud infrastructure services. Maribel Lopez of Lopez Research states, "When Diane Greene came in they had a really solid chance of being the No. 2 provider. Microsoft has really closed the gap and is the No. 2 provider for most enterprise customers by a significant margin."

Greene acquired customers such as Twitter, Target, and HSBC for Google Cloud, and major Fortune 1000 enterprises now depend on it. Under her leadership, Google established a training and professional services organization and Google partner organizations. They have come up with ways to help enterprises adopt AI through their Advanced Solutions Lab. Google's industry verticals have achieved massive traction in health, financial services, retail, gaming and media, energy and manufacturing, and transportation. Along with the Cloud ML and Cloud IoT groups, they acquired Apigee, Kaggle, Qwiklabs, and several promising small startups. She oversaw projects like creating custom chips for machine learning, thus gaining traction for artificial intelligence used on the platform.

While the AI-centric approach brought Google into the limelight, Meaghan McGrath, who tracks Google and other cloud providers at Technology Business Research, says, "They've been making the right moves and saying the right things, but it just hasn't shown through in performance financially." She further stresses that Google is still hamstrung by a perception that it doesn't really know how to work with corporate IT departments, an area where Microsoft has made its mark.

Kurian to join Google

Thomas Kurian worked at Oracle for the past 22 years and had been its president of product development since 2015. On September 5th, Kurian told employees in an email that he was taking "extended time off from Oracle". The company said in a statement at the time that "we expect him to return soon." Twenty-three days later, Oracle put out a filing saying that Kurian had resigned "to pursue other opportunities."

Google and Oracle do not have a pleasant history together. The two companies are involved in an eight-year legal battle related to Google's use of the Java programming language, without a license, in developing its Android operating system for smartphones. Oracle owns the intellectual property behind Java. In March, the Federal Circuit reversed a district court's ruling that had favored Google, sending the case back to the lower court to determine the damages Google must now pay Oracle.

CNBC reports that one former Google employee, who asked not to be named because of the sensitivity of the matter, is not optimistic that Kurian will be well received, since Kurian still has to figure out how to work with Googlers. It will be interesting to see how the face of Google Cloud changes under Kurian's leadership. You can head over to Google's blog to read more about this announcement.

Intel Optane DC Persistent Memory available first on Google Cloud
Google Cloud Storage Security gets an upgrade with Bucket Lock, Cloud KMS keys and more
10 useful Google Cloud AI services for your next machine learning project [Tutorial]


OpenStack Foundation to tackle open source infrastructure problems, will conduct conferences under the name ‘Open Infrastructure Summit’

Melisha Dsouza
16 Nov 2018
3 min read
At the OpenStack Summit in Berlin this week, the OpenStack Foundation announced that from now on, all its biannual conferences will be conducted under the name 'Open Infrastructure Summit'. According to TechCrunch, the Foundation itself won't be rebranding its name, but there will be a change in the nature of what the Foundation is doing. The board will now be adopting new projects outside of the core OpenStack project, and there will be a process for adding "pilot projects" and fostering them for a minimum of 18 months. The focus for these projects will be on continuous integration and continuous delivery (CI/CD), container infrastructure, edge computing, data center, and artificial intelligence and machine learning. OpenStack currently has these pilot projects in development: Airship, Kata Containers, StarlingX, and Zuul.

OpenStack says that the idea is not for the Foundation to simply manage multiple projects, or to increase the Foundation's revenue; rather, the scope is focused on people who run or manage infrastructure. There are no new boards of directors or foundations for each project, and the team assures its members that the actual OpenStack technology isn't going anywhere. OpenStack Foundation CTO Mark Collier said, "We said very clearly this week that open infrastructure starts with OpenStack, so it's not separate from it. OpenStack is the anchor tenant of the whole concept." He added, "All that we are doing is actually meant to make OpenStack better."

Adding his insights on the decision, Canonical founder Mark Shuttleworth worries that the focus on multiple projects will "confuse people about OpenStack." He further adds, "I would really like to see the Foundation employ the key contributors to OpenStack so that the heart of OpenStack had long-term stability that wasn't subject to a popularity contest every six months."

Boris Renski, co-founder of Mirantis, stated that as of today a number of companies are back to doubling down on OpenStack as their core focus. He attributes this to the Foundation's focus on edge computing, with the highest interest in OpenStack being shown by China.

The OpenStack Foundation's decision to tackle open source infrastructure problems, while keeping the core of the actual OpenStack project intact, is refreshing. The only possible competition it can face is from the Linux Foundation-backed Cloud Native Computing Foundation.

Read Next

OpenStack Rocky released to meet AI, machine learning, NFV and edge computing demands for infrastructure
Introducing VMware Integrated OpenStack (VIO) 5.0, a new Infrastructure-as-a-Service (IaaS) cloud
Introducing OpenStack Foundation's Kata Containers 1.0

Amazon announces Corretto, an open source, production-ready distribution of OpenJDK backed by AWS

Melisha Dsouza
15 Nov 2018
3 min read
Yesterday, at Devoxx Belgium, Amazon announced the preview of Amazon Corretto, a free distribution of OpenJDK that offers long-term support. With Corretto, users can develop and run Java applications on popular operating systems. The team further mentioned that Corretto is multiplatform and production-ready, with long-term support that will include performance enhancements and security fixes. They also plan to make Corretto the default OpenJDK on Amazon Linux 2 in 2019. The preview currently supports Amazon Linux, Windows, macOS, and Docker, with additional support planned for general availability. Corretto is run internally by Amazon on thousands of production services and is certified as compatible with the Java SE standard.

Features and benefits of Corretto

1. Amazon Corretto lets a user run the same environment in the cloud, on premises, and on their local machine. During the preview, Corretto allows users to develop and run Java applications on popular operating systems like Amazon Linux 2, Windows, and macOS.
2. Users can upgrade versions only when they feel the need to do so.
3. Since it is certified to meet the Java SE standard, Corretto can be used as a drop-in replacement for many Java SE distributions.
4. Corretto is available free of cost, and there are no additional paid features or restrictions.
5. Corretto is backed by Amazon, and the patches and improvements in Corretto enable Amazon to address high-scale, real-world service concerns. Corretto can meet heavy performance and scalability demands.
6. Customers will obtain long-term support, with quarterly updates including bug fixes and security patches. AWS will provide urgent fixes to customers outside of the quarterly schedule.

On Hacker News, users are discussing how the product's documentation could be better formulated, and some feel that "Amazon's JVM is quite complex". Users are also talking about Oracle offering the same service at a price, and one user has pointed out the differences between Oracle's service and Amazon's. The most notable feature of this release is apparently the LTS offered by Amazon.

Head over to Amazon's blog to read more about this release. You can also find the source code for Corretto at GitHub.

Amazon tried to sell its facial recognition technology to ICE in June, emails reveal
Amazon addresses employees dissent regarding the company's law enforcement policies at an all-staff meeting, in a first
Amazon splits HQ2 between New York and Washington, D.C. after making 200+ states compete over a year; public sentiments largely negative
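Since the preview ships Docker support, trying Corretto out takes a single command; a minimal sketch, assuming Docker is installed and using the amazoncorretto image Amazon publishes on Docker Hub (the image name and tag are assumptions based on the preview-era listing, so check the Corretto downloads page for current instructions):

# Pull the Corretto 8 image and print its Java version
# (image name/tag assumed from the preview-era Docker Hub listing)
docker run --rm amazoncorretto:8 java -version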


The Ceph Foundation has been launched by the Linux Foundation to support the open source storage project

Melisha Dsouza
13 Nov 2018
3 min read
At Ceph Day Berlin yesterday (November 12), the Linux Foundation announced the launch of the Ceph Foundation. A total of 31 organizations have come together to launch the Ceph Foundation, including ARM, Intel, Harvard, and many more. The foundation aims to bring industry members together to support the Ceph open source community.

What is Ceph?

Ceph is an open source distributed storage technology that provides storage services for many of the world's largest container and OpenStack deployments. The range of organizations using Ceph is vast. They include financial institutions like Bloomberg and Fidelity, cloud service providers like Rackspace and Linode, car manufacturers like BMW, and software firms like SAP and Salesforce.

The main aim of the Ceph Foundation

The main focus of the foundation is to raise money via annual membership fees from industry members. The combined pool of funds will then be spent in support of the Ceph community. The team has already raised around half a million dollars for its first year, which will be used to support the Ceph project infrastructure, cloud infrastructure services, internships, and community events. The new foundation will provide a forum for community members and industry stakeholders to meet and discuss project status, development and promotional activities, community events, and strategic direction. The Ceph Foundation replaces the Ceph Advisory Board formed back in 2015. According to a Linux Foundation statement, the Ceph Foundation will "organize and distribute financial contributions in a coordinated, vendor-neutral fashion for immediate community benefit".

Ceph has an ambitious plan for new initiatives once the foundation gets properly functional. Some of these include:

- Expansion of and improvements to the hardware lab used to develop and test Ceph
- An events team to help plan various programs and targeted regional or local events
- Investment in strategic integrations with other projects and ecosystems
- Programs around interoperability between Ceph-based products and services
- Internships, training materials, and much more

The Ceph Foundation will provide an open, collaborative, and neutral home for project stakeholders to coordinate their development and community investments in the Ceph ecosystem. You can head over to their blog to know more about this news.

Facebook's GraphQL moved to a new GraphQL Foundation, backed by The Linux Foundation
NIPS Foundation decides against name change as poll finds it an unpopular superficial move; instead increases 'focus on diversity and inclusivity initiatives'
Node.js and JS Foundation announce intent to merge; developers have mixed feelings


Cloudflare’s Workers enable containerless cloud computing powered by V8 Isolates and WebAssembly

Melisha Dsouza
12 Nov 2018
5 min read
Cloudflare’s cloud computing platform Workers doesn’t use containers or virtual machines to deploy computing. Workers allows users to build serverless applications on Cloudflare's data centers. It provides a lightweight JavaScript execution environment that lets developers augment existing applications or create entirely new ones without having to configure or maintain infrastructure.

Why did Cloudflare create Workers?

Cloudflare previously offered a limited set of features and options, with little flexibility for customers to build features themselves. To let users write code on its servers deployed around the world, Cloudflare had to allow untrusted code to run with low overhead, processing millions of requests per second at very high speed; customers couldn’t write their own code without the team’s supervision. Using traditional virtualization and container technologies like Kubernetes would be expensive, and running thousands of Kubernetes pods at Cloudflare's 155 data centers would be resource intensive. Enter Cloudflare’s ‘Workers’ to solve these issues.

Features of Workers

#1 ‘Isolates’: Run code from multiple customers

‘Isolates’ is a technology built by the Google Chrome team to power V8, the JavaScript engine in that browser. Isolates are lightweight contexts that group variables with the code allowed to mutate them. A single process can run hundreds or thousands of Isolates while easily switching between them. Thus, Isolates make it possible to run untrusted code from different customers within a single operating system process. They start very quickly (a given Isolate can start around a hundred times faster than a Node process on a machine) and do not allow one Isolate to access the memory of another.

#2 Cold starts

Workers rethink the ‘cold start’ that occurs when a new copy of code has to be started on a machine. In the Lambda world, a cold start means spinning up a new containerized process, which can delay requests for as much as ten seconds and result in a terrible user experience. A Lambda can only process one single request at a time, so a new Lambda has to be cold-started every time an additional concurrent request is received, and if a Lambda doesn’t get a request soon enough, it is shut down and it all starts again. Since Workers don’t have to start a process, Isolates start in 5 milliseconds. Workers scale and deploy quickly, entirely upgrading existing serverless technologies.

#3 Context switching

A normal context switch performed by an OS can take as much as 100 microseconds. Multiplied across all the Node, Python, or Go processes running on average Lambda servers, this leads to heavy overhead: the CPU’s power is split between running the customer’s code and switching between processes. An Isolate-based system runs all of the code in a single process, which means there are no expensive context switches; the machine can invest virtually all of its time running your code.

#4 Memory

V8 was designed to be multi-tenant: it runs the code from the many tabs in a user’s browser in isolated environments within a single process. Since memory is often the highest cost of running a customer’s code, V8 lowers it and dramatically changes the cost economics.

#5 Security

It is not safe to run code from multiple customers within the same process. Testing, fuzzing, penetration testing, and bounties are required to build a truly secure system of that complexity. The open-source nature of V8 helps in creating an isolation layer that lets Cloudflare take care of the security aspect.

Cloudflare’s Workers also allows users to build responses from multiple background service requests, whether to the Cloudflare cache, the application origin, or third-party APIs. They can build conditional responses for inbound requests to assess and subsequently block or reroute malicious or unauthorized requests. All of this at just a third of what AWS costs, remarked an astute Twitter observer. https://twitter.com/seldo/status/1061461318765555713

Running code through WebAssembly

One of the disadvantages of using Workers is that, since it is an Isolate-based system, it cannot run arbitrary compiled code. Users have to either write their code in JavaScript, or in a language which targets WebAssembly (e.g. Go or Rust). Also, if a user cannot recompile their processes, they won’t be able to run them in an Isolate. This has been nicely summarised in the above-mentioned tweet. The author notes that WebAssembly modules are already in the npm registry, which creates the potential for npm to become the dependency management solution for every programming language. He mentions that the “availability of open source libraries to achieve the task at hand is the primary reason people pick a programming language”. This leads us to the question: how does software development change when you can use any library anytime?

You can head over to the Cloudflare blog to understand more about containerless cloud computing.

Cloudflare Workers KV, a distributed native key-value store for Cloudflare Workers
Cloudflare’s decentralized vision of the web: InterPlanetary File System (IPFS) Gateway to create distributed websites
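To make the programming model above concrete, here is a minimal sketch of a Worker using the fetch-event API from Cloudflare's documentation; the response body and header values are illustrative:

// Minimal Cloudflare Worker sketch: handle a request entirely at the edge,
// inside a V8 Isolate, with no container or VM involved
addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
})

async function handleRequest(request) {
  // Build a response directly; real Workers often rewrite, cache,
  // or conditionally block requests before contacting the origin
  return new Response('Hello from an Isolate!', {
    headers: { 'content-type': 'text/plain' }
  })
}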


Google Kubernetes Engine was down last Friday, users left clueless of outage status and RCA

Melisha Dsouza
12 Nov 2018
3 min read
On the 9th of November, at 4:30 am US/Pacific time, Google Kubernetes Engine faced a service disruption: it was uncertain whether users would be able to launch a node pool through the Cloud Console UI. The team responded to the issue saying that they would get back to users with more information by Friday, 9th November, 04:45 am US/Pacific time. However, the issue was not solved by the given time. Another status update was posted assuring users that mitigation work was underway by the engineering team, and that another update would follow by 06:00 pm US/Pacific with current details. In the meantime, affected customers were advised to use the gcloud command to create new node pools (see the sketch below).

An update declaring the issue finally resolved was posted on Sunday, the 11th of November, stating that services had been restored on Friday at 14:30 US/Pacific time. However, no proper explanation has been provided regarding what led to the service disruption. The team did mention that an internal investigation will be done and appropriate improvements to the systems will be implemented to help prevent or minimize future recurrence of the issue.

According to a user's summary on Hacker News: "Some users here are reporting that other GCP services not mentioned by Google's blog are experiencing problems. Some users here are reporting that they have received no response from GCP support, even over a time span of 40+ hours since the support request was submitted."

According to another user, "When everything works, GCP is the best. Stable, fast, simple, reliable. When things stop working, GCP is the worst. They require way too much work before escalating issues or attempting to find a solution". We can't help but agree, looking at the timeline of the service downtime. Users have also expressed disappointment over how the outage was managed.

Source: Hacker News

With users demanding a root cause analysis of the situation, it is only fitting that Google provides one so users can trust the company better. You can check out Google Cloud's blog post detailing the timeline of the downtime.

Machine Learning as a Service (MLaaS): How Google Cloud Platform, Microsoft Azure, and AWS are democratizing Artificial Intelligence
Google's Cloud Robotics platform, to be launched in 2019, will combine the power of AI, robotics and the cloud
Build Hadoop clusters using Google Cloud Platform [Tutorial]
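For reference, the command-line workaround for creating a node pool looks roughly like this; the cluster, pool, and zone names are placeholders:

# Create a GKE node pool from the CLI instead of the Cloud Console UI
# (cluster, pool, and zone names are placeholders)
gcloud container node-pools create my-pool \
    --cluster my-cluster \
    --zone us-central1-a \
    --num-nodes 3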

Kubeflow 0.3 released with simpler setup and improved machine learning development

Melisha Dsouza
02 Nov 2018
3 min read
Early this week, the Kubeflow project launched its latest version, Kubeflow 0.3, just 3 months after version 0.2 was out. This release comes with easier deployment and customization of components along with better multi-framework support.

Kubeflow is the machine learning toolkit for Kubernetes. It is an open source project dedicated to making deployments of machine learning (ML) workflows on Kubernetes simple, portable, and scalable. Users are provided with an easy-to-use ML stack anywhere that Kubernetes is already running, and this stack can self-configure based on the cluster it deploys into.

Features of Kubeflow 0.3

1. Declarative and Extensible Deployment

Kubeflow 0.3 comes with a deployment command-line script, kfctl.sh (a rough sketch of its workflow follows below). This tool allows consistent configuration and deployment of Kubernetes resources and non-K8s resources (e.g., clusters, filesystems, etc.). Minikube deployment provides a single-command, shell-script-based deployment, and users can also use MicroK8s to easily run Kubeflow on their laptop.

2. Better Inference Capabilities

Version 0.3 makes it possible to do batch inference with GPUs (but non-distributed) for TensorFlow using Apache Beam. Batch and streaming data processing jobs that run on a variety of execution engines can be easily written with Apache Beam. Running TFServing in production is now easier because of the liveness probe that has been added, and fluentd is used to log requests and responses to enable model retraining. The release also takes advantage of the NVIDIA TensorRT Inference Server to offer more options for online prediction using both CPUs and GPUs. This server is a containerized, production-ready AI inference server that maximizes utilization of GPU servers by running multiple models concurrently on the GPU, and it supports all the top AI frameworks.

3. Hyperparameter tuning

Kubeflow 0.3 introduces a new K8s custom controller, StudyJob, which allows a hyperparameter search to be defined using YAML, thus making it easy to use hyperparameter tuning without writing any code.

4. Miscellaneous updates

The upgrade includes a release of a K8s custom controller for Chainer (docs). Cisco has created a v1alpha2 API for PyTorch that brings parity and consistency with the TFJob operator, and new features make it easier to handle production workloads for both PyTorch and TFJob. There is also support for gang-scheduling using Kube Arbitrator to avoid stranding resources and deadlocking in clusters under heavy load. The 0.3 Kubeflow Jupyter images ship with TF Data-Validation, a library used to explore and validate machine learning data.

You can check the examples added by the team to understand how to leverage Kubeflow:

- The XGBoost example indicates how to use non-DL frameworks with Kubeflow
- The object detection example illustrates leveraging GPUs for online and batch inference
- The financial time series prediction example shows how to leverage Kubeflow for time series analysis

The team has said that the next major release, 0.4, will be coming by the end of this year. They will focus on ease of use, so that common ML tasks can be performed without having to learn Kubernetes, and they plan to make it easier to track models by providing a simple API and database for tracking models. Finally, they intend to upgrade the PyTorch and TFJob operators to beta.

For a complete list of updates, visit the 0.3 Change Log on GitHub.
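As a rough sketch of the new deployment flow, the commands below follow the 0.3-era kfctl.sh workflow as described in the Kubeflow getting-started docs of that period; the app name is a placeholder and the exact flags may have differed between point releases, so treat this as illustrative rather than authoritative:

# Illustrative 0.3-era kfctl.sh workflow (app name is a placeholder;
# flags are assumptions based on the docs of that period)
export KFAPP=my-kubeflow
kfctl.sh init ${KFAPP} --platform none   # scaffold an app directory
cd ${KFAPP}
kfctl.sh generate k8s                    # render Kubernetes manifests
kfctl.sh apply k8s                       # deploy to the current cluster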
Platform9 announces a new release of Fission.io, the open source, Kubernetes-native Serverless framework
Introducing Alpha Support for Volume Snapshotting in Kubernetes 1.12
‘AWS Service Operator’ for Kubernetes now available allowing the creation of AWS resources using kubectl

Microsoft Azure reportedly chooses Xilinx chips over Intel Altera for AI co-processors, says Bloomberg report

Melisha Dsouza
31 Oct 2018
3 min read
Xilinx Inc. has reportedly won orders from Microsoft Corp.'s Azure cloud unit to account for half of the co-processors currently used on Azure servers to handle machine-learning workloads. This will replace chips made by Intel Corp., according to people familiar with Microsoft's plans, as reported by Bloomberg.

Microsoft's decision adds another chip supplier in order to serve more customers interested in machine learning. To date, this domain was run by Intel's Altera division. Now that Xilinx has bagged the deal, does this mean Intel will no longer serve Microsoft? Bloomberg reported Microsoft's confirmation that it will continue its relationship with Intel in its current offerings. A Microsoft spokesperson added that "There has been no change of sourcing for existing infrastructure and offerings". Sources familiar with the arrangement also commented that Xilinx chips will have to achieve performance goals to determine the scope of their deployment.

Cloud vendors these days are investing heavily in research and development centering on the machine learning field. The past few years have seen an increased need for flexible chips that can be configured to run machine-learning services. Companies like Microsoft, Google, and Amazon are massive buyers of server chips and are always looking for alternatives to standard processors to increase the efficiency of their data centers.

Holger Mueller, an analyst with Constellation Research Inc., told SiliconANGLE that "Programmable chips are key to the success of infrastructure-as-a-service providers as they allow them to utilize existing CPU capacity better. They're also key enablers for next-generation application technologies like machine learning and artificial intelligence."

Earlier this year, Xilinx CEO Victor Peng made clear his plans to focus on data center customers, saying "data center is an area of rapid technology adoption where customers can quickly take advantage of the orders of magnitude performance and performance per-watt improvement that Xilinx technology enables in applications like artificial intelligence (AI) inference, video and image processing, and genomics".

Last month, Xilinx made headlines with the announcement of a new breed of computer chips designed specifically for AI inference. These chips combine FPGAs with two higher-performance Arm processors, plus a dedicated AI compute engine, and relate to the application of deep learning models in consumer and cloud environments. The chips promise higher throughput, lower latency, and greater power efficiency than existing hardware. It looks like Xilinx is taking noticeable steps to make itself seen in the AI market.

Head over to Bloomberg for the complete coverage of this news.

Microsoft Ignite 2018: New Azure announcements you need to know
Azure DevOps outage root cause analysis starring greedy threads and rogue scale units
Microsoft invests in Grab; together aim to conquer the Southeast Asian on-demand services market with Azure's Intelligent Cloud

Introducing numpywren, a system for linear algebra built on a serverless architecture

Sugandha Lahoti
29 Oct 2018
3 min read
Last week, researchers from UC Berkeley and UW Madison published a research paper highlighting a system for linear algebra built on a serverless framework. numpywren is a scientific computing framework built on top of the serverless execution framework pywren. Pywren is a stateless computation framework that leverages AWS Lambda to execute Python functions remotely in parallel (a short sketch of that model appears at the end of this article).

What is numpywren?

Basically, numpywren is a distributed system for executing large-scale dense linear algebra programs via stateless function executions. numpywren runs computations as stateless functions while storing intermediate state in a distributed object store. Instead of dealing with individual machines, hostnames, and processor grids, numpywren works with the abstractions of "cores" and "memory". numpywren currently uses Amazon EC2 and Lambda services for computation and Amazon S3 as a distributed memory abstraction.

numpywren can scale to run Cholesky decomposition (a linear algebra algorithm) on a 1Mx1M matrix within 36% of the completion time of ScaLAPACK running on dedicated instances, and can be tuned to use 33% fewer CPU-hours. The researchers also introduced LAmbdaPACK, a domain-specific language designed to implement highly parallel linear algebra algorithms in a serverless setting.

Why serverless for numpywren?

Per their research, the serverless computing model can be used for computationally intensive programs while providing ease of use and seamless fault tolerance. The elasticity provided by serverless computing also allows the numpywren system to dynamically adapt to the inherent parallelism of common linear algebra algorithms.

What's next for numpywren?

One of the main drawbacks of the serverless model is the high communication cost due to the lack of locality and efficient broadcast primitives. The researchers want to incorporate coarser serverless executions (e.g., 8 cores instead of 1) that process larger portions of the input data, and to develop services that provide efficient collective communication primitives like broadcast to help address this problem. They want modern convex optimization solvers such as CVXOPT to use numpywren to scale to much larger problems, and they are working on automatically translating numpy code directly into LAmbdaPACK instructions that can be executed in parallel. As data centers continue their push towards disaggregation, the researchers point out that platforms like numpywren open up a fruitful area of research.

For further explanation, go through the research paper.

Platform9 announces a new release of Fission.io, the open source, Kubernetes-native Serverless framework
Azure Functions 2.0 launches with better workload support for serverless
How Serverless computing is making AI development easier
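For context on the execution model numpywren builds on, here is a minimal pywren sketch that maps a Python function over inputs as parallel Lambda invocations; it assumes pywren is installed and configured with AWS credentials:

# Minimal pywren sketch: each call to square() runs as its own
# AWS Lambda invocation (assumes pywren is installed and configured)
import pywren

def square(x):
    return x * x

pwex = pywren.default_executor()
futures = pwex.map(square, range(10))
print([f.result() for f in futures])  # gather results back from S3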

Google Cloud Storage Security gets an upgrade with Bucket Lock, Cloud KMS keys and more

Melisha Dsouza
24 Oct 2018
3 min read
Earlier this month, the team at Google Cloud Storage announced new capabilities for improving the reliability and performance of users' data. They have now rolled out updates for storage security that cater to the privacy of data and compliance with financial services regulations. With these new security upgrades, including the general availability of Cloud Storage Bucket Lock, UI changes for privacy management, and Cloud KMS integration with Cloud Storage, users will be able to build reliable applications as well as ensure the safety of their data.

Storage security features on Google Cloud Storage:

#1 General availability of Cloud Storage Bucket Lock

Cloud Storage Bucket Lock is now generally available. This feature is especially useful for users that need Write Once Read Many (WORM) storage, as it prevents deletion or modification of content for a specified period of time. To help organizations meet compliance, legal, and regulatory requirements for retaining data for specific lengths of time, Bucket Lock provides retention lock capabilities, as well as event holds for content. Bucket Lock works with all tiers of Cloud Storage, so both primary and archive data can use the same storage setup. Users can automatically move locked data into colder storage tiers and delete data once the retention period expires. Bucket Lock has been used in a diverse range of applications, from financial records compliance and healthcare records retention to media content archives and much more. A short gsutil sketch of the retention workflow appears below; you can also head over to the Bucket Lock documentation to learn more about this feature.

#2 New UI features for secure sharing of data

The new UI features in the Cloud Storage console enable users to securely share their data and gain insight into which data, buckets, and objects are publicly visible across their Cloud Storage environment. The public sharing option in the UI has been replaced with an Identity and Access Management (IAM) panel. This mechanism prevents users from publicly sharing their objects through a mistaken mouse click, lets administrators clearly understand which content is publicly available, and shows users how their data is being shared publicly.

#3 Use Cloud KMS keys with Cloud Storage data

Cloud Key Management Service (KMS) provides users with sophisticated encryption key management capabilities. Users can manage and control encryption keys for their Cloud Storage datasets through the Cloud Storage–KMS integration. This integration helps users manage active keys, authorize users or applications to use certain keys, monitor key use, and more. Cloud Storage users can also perform key rotation, revocation, and deletion. Head over to the Google Cloud Storage blog to learn more about Cloud KMS integration.

#4 Access Transparency for Cloud Storage and Persistent Disk

This new transparency mechanism shows users who, when, where, and why Google support or engineering teams have accessed their Cloud Storage and Persistent Disk environment. Users can use Stackdriver APIs to monitor logs related to Cloud Storage actions programmatically and also archive their logs if required for future auditing. This gives complete visibility into administrative actions for monitoring and compliance purposes. You can learn more about Access Transparency (AXT) on Google's blog post.

Head over to the Google Cloud Storage blog to understand how these new upgrades will add to the security and control of cloud resources.
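As a quick illustration of Bucket Lock's retention workflow, the gsutil commands below set and then permanently lock a one-year retention policy; the bucket name is a placeholder, and locking is irreversible, so treat this as a sketch to adapt rather than run as-is:

# Set a 1-year retention policy on a bucket (bucket name is a placeholder)
gsutil retention set 1y gs://my-example-bucket

# Permanently lock the policy; once locked, it cannot be reduced or removed
gsutil retention lock gs://my-example-bucket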
What’s new in Google Cloud Functions serverless platform
Google Cloud announces new Go 1.11 runtime for App Engine
Cloud Filestore: A new high performance storage option by Google Cloud Platform

Atlassian overhauls its Jira software with customizable workflows, new tech stack, and roadmaps tool

Sugandha Lahoti
19 Oct 2018
3 min read
Atlassian has completely revamped its traditional Jira software, adding a simplified user experience, new third-party integrations, and a new product roadmaps tool. Announced yesterday in the official blog post, the company says it has "rolled out an entirely new project experience for the next generation with a focus on making Jira Simply Powerful."

Sean Regan, head of growth for Software Teams at Atlassian, said that with a more streamlined and simplified application, Atlassian hopes to appeal to a wider range of business execs involved in the software-creation process.

What's new in the revamped Jira software?

Powerful tech stack: Jira Software has been transformed into a modern cloud app. It now includes an updated tech stack, permissions, and UX. Developers have more autonomy, administrators have more flexibility, and advanced users have more power. "Additionally, we've made Jira simpler to use across the board. Now, anyone who works with development teams can collaborate more easily."

Customizable workflow: To upgrade the user experience, Atlassian has introduced a new feature called build-your-own-boards. Users can customize their own workflow, issue types, and fields for the board, without administrator access and without jeopardizing other projects' customizations.

Source: Jira blog

This customizable workflow was inspired by Trello, the task management app acquired by Atlassian for $425 million in 2017. "What we tried to do in this new experience is mirror the power that people know and love about Jira, with the simplicity of an experience like Trello," said Regan.

Third-party integrations: The new Jira comes with almost 600 third-party integrations. These third-party applications, Atlassian said, should help appeal to a broader range of job roles that interact with developers. Integrations include Adobe, Sketch, and Invision, as well as Facebook's Workplace and updated integrations for Gmail and Slack.

Jira Cloud Mobile: Jira Cloud mobile helps developers access their projects from their smartphones. Developers can create, read, update, and delete issues and columns; groom their backlog; start and complete sprints; and respond to comments and tag relevant stakeholders, all from their mobile.

Roadmapping tool: Jira now features a brand new roadmaps tool that makes it easier for teams to see the big picture. "When you have multiple teams coordinating on multiple projects at the same time, shipping different features at different percentage releases, it's pretty easy for nobody to know what is going on," said Regan. "Roadmaps helps bring order to the chaos of software development."

Source: Jira blog

Pricing for the Jira software varies by the number of users. It costs $10 per user per month for teams of up to 10 people; $7 per user per month for teams of between 11 and 100 users; and varying prices for teams larger than 100. The company also offers a free 7-day trial.

Read more about the release on the Jira Blog. You can also have a look at their public roadmap.

Atlassian acquires OpsGenie, launches Jira Ops to make the incident response more powerful
GitHub's new integration for Jira Software Cloud aims to provide teams with a seamless project management experience
Atlassian open sources Escalator, a Kubernetes autoscaler project