Tech News - Cloud Computing

175 Articles

Google announces the Beta version of Cloud Source Repositories

Melisha Dsouza
21 Sep 2018
3 min read
Yesterday, Google launched the beta version of its Cloud Source Repositories. Claiming to provide a better search experience, Cloud Source Repositories is a Git-based source code repository built on Google Cloud. It introduces a powerful code search feature that uses document indexing and retrieval methods similar to Google Search. Cloud Source Repositories could mark a major comeback for Google in this space after Google Code began shutting down in 2015. It is also a strategic move, as many coders have been looking for an alternative to GitHub since its acquisition by Microsoft.

How does Google code search work?

Code search in Cloud Source Repositories optimizes the indexing, algorithms, and result types for searching code. When a query is submitted, it is sent to a root machine and sharded across hundreds of secondary machines. The machines look for matches by file names, classes, functions, and other symbols, and match the context and namespace of the symbols. A single query can search across thousands of different repositories. Cloud Source Repositories also has a semantic understanding of the code. For Java, JavaScript, Go, C++, Python, TypeScript, and Proto files, the tool will also return information on whether the match is a class, method, enum, or field.

Solutions to common code search challenges

#1 Executing searches across all the code at one's company
If a company has repositories storing different versions of the code, executing searches across all of it is exhausting and time-consuming. With Cloud Source Repositories, the default branches of all repositories are always indexed and up to date, so searching across all the code is faster.

#2 Searching for code that performs a common operation
Cloud Source Repositories enables users to perform quick searches. Users can also save time by discovering and reusing an existing solution while avoiding bugs in their code.

#3 A developer cannot remember the right way to use a common code component
Developers can enter a query and search across all of their company's code for examples of how the common piece of code has been used successfully by other developers.

#4 Issues with a production application
If a developer encounters a specific error message in the server logs that reads 'User ID 123 not found in PaymentDatabase', they can perform a regular expression search for 'User ID .* not found in PaymentDatabase' and instantly find the location in the code where this error was triggered.

All repositories that are either mirrored or added to Cloud Source Repositories can be searched in a single query. Cloud Source Repositories has a limited free tier that supports projects up to 50GB with a maximum of 5 users. You can read more about Cloud Source Repositories in the official documentation.

Google announces Flutter Release Preview 2 with extended support for Cupertino themed controls and more!
Google to allegedly launch a new Smart home device
Google's prototype Chinese search engine 'Dragonfly' reportedly links searches to phone numbers
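To make the regular expression example in #4 concrete, here is a minimal Python sketch of the same pattern applied to log lines. Only the error text and pattern come from the article; the sample log lines and scan loop are illustrative and are not part of the Cloud Source Repositories API.

```python
import re

# Pattern from the article's example: match any user ID in this error message.
pattern = re.compile(r"User ID .* not found in PaymentDatabase")

log_lines = [
    "2018-09-20 12:01:07 INFO  payment ok for user 99",
    "2018-09-20 12:01:09 ERROR User ID 123 not found in PaymentDatabase",
]

for line in log_lines:
    if pattern.search(line):
        # In Cloud Source Repositories, the analogous query would point you at
        # the source location that emits this message.
        print("matched:", line)
```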


Adobe set to acquire Marketo putting Adobe Experience Cloud at the heart of all marketing

Bhagyashree R
21 Sep 2018
3 min read
Yesterday, Adobe Systems confirmed their plans to acquire Marketo Inc for $4.75 billion from Vista Equity Partners Management. The deal is expected to close in the fourth quarter of Adobe's fiscal year 2018, in December. With this acquisition, Adobe aims to combine Adobe Experience Cloud and Marketo Commerce Cloud to provide a unified platform for all marketers.

Marketo is a US-based software company that develops marketing software providing inbound marketing, social marketing, CRM, and other related services. The industries it serves include healthcare, technology, financial services, manufacturing, and media, among others.

What does acquiring Marketo mean for Adobe?

A single platform to serve both B2B and B2C customers
The integration of Marketo Commerce Cloud into the Adobe Experience Cloud will help Adobe deliver a single platform that serves both B2B and B2C customers globally. This acquisition will bring together Marketo's lead and account-based marketing technology and Adobe's Experience Cloud analytics, advertising, and commerce capabilities, enabling B2B companies to create, manage, and execute marketing engagements at scale.

Access to Marketo's huge customer base
Enterprises from various industries are using Marketo's marketing applications to drive engagement and customer loyalty. Marketo will bring its huge ecosystem, which consists of nearly 5,000 customers and over 500 partners, to Adobe. Brad Rencher, Executive Vice President and General Manager, Digital Experience at Adobe, said: “The acquisition of Marketo widens Adobe’s lead in customer experience across B2C and B2B and puts Adobe Experience Cloud at the heart of all marketing.”

What's in it for Marketo?

Signaling the next phase of Marketo's growth, its acquisition by Adobe will further accelerate its product roadmap and go-to-market execution. With Adobe, Marketo's products will get a new level of global operational scale and the ability to penetrate new verticals and geographies. The CEO of Marketo, Steve Lucas, believes that with Adobe they will be able to rapidly innovate and provide their customers a definitive system of engagement: “Adobe and Marketo both share an unwavering belief in the power of content and data to drive business results. Marketo delivers the leading B2B marketing engagement platform for the modern marketer, and there is no better home for Marketo to continue to rapidly innovate than Adobe.”

To know more about Adobe acquiring Marketo, read the official announcement on Adobe's website.

Adobe to spot fake images using Artificial Intelligence
Adobe is going to acquire Magento for $1.68 Billion
Adobe glides into Augmented Reality with Adobe Aero


Cortex, an open source, horizontally scalable, multi-tenant Prometheus-as-a-service becomes a CNCF Sandbox project

Bhagyashree R
21 Sep 2018
3 min read
Yesterday, the Cloud Native Computing Foundation (CNCF) accepted Cortex as a CNCF Sandbox project. Cortex is an open source, horizontally scalable, multi-tenant Prometheus-as-a-service. It provides long-term storage for Prometheus metrics when used as a remote write destination, and it comes with a horizontally scalable, Prometheus-compatible query API. Its use cases include:

Service providers, enabling them to manage a large number of Prometheus instances and provide long-term storage.
Enterprises, to centralize management of large-scale Prometheus deployments and ensure long-term durability of Prometheus data.

Originally developed by Weaveworks, it is now being used in production by organizations like Grafana Labs, FreshTracks, and EA.

How does it work?

The architecture, shown in a diagram in the announcement (source: CNCF), works as follows:

1. Scraping samples: First, a Prometheus instance scrapes all of the users' services and then forwards the samples to a Cortex deployment. It does this using the remote_write API, which was added to Prometheus to support Cortex and other integrations.

2. The distributor distributes the samples: The instance sends all these samples to the distributor, a stateless service that consults the ring to figure out which ingesters should ingest each sample. The ingesters are arranged in a consistent hash ring, keyed on the fingerprint of the time series, and stored in a consistent data store such as Consul. The distributor finds the owner ingester and forwards the sample to it, and also to the two ingesters after it in the ring. This means that if an ingester goes down, two others still have its data.

3. Ingesters make chunks of samples: Ingesters continuously receive a stream of samples and group them together into chunks. These chunks are then stored in a backend database such as DynamoDB, BigTable, or Cassandra. Ingesters facilitate this chunking process so that Cortex isn't constantly writing to its backend database.

Alexis Richardson, CEO of Weaveworks, believes that being a CNCF Sandbox project will help grow the Prometheus ecosystem: “By joining CNCF, Cortex will have a neutral home for collaboration between contributor companies, while allowing the Prometheus ecosystem to grow a more robust set of integrations and solutions. Cortex already has a strong affinity with several CNCF technologies, including Kubernetes, gRPC, OpenTracing and Jaeger, so it’s a natural fit for us to continue building on these interoperabilities as part of CNCF.”

To know more in detail, check out the official announcement by CNCF and also read What is Cortex?, a blog post published on the Weaveworks Blog.

Google Cloud hands over Kubernetes project operations to CNCF, grants $9M in GCP credits
CNCF Sandbox, the home for evolving cloud native projects, accepts Google's OpenMetrics Project
Modern Cloud Native architectures: Microservices, Containers, and Serverless – Part 1
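To illustrate step 2, here is a minimal Python sketch of a consistent hash ring with the replicate-to-the-next-two-ingesters behaviour described above. It is illustrative only; the ingester names, hash function, and ring layout are assumptions for the example, not Cortex's actual implementation.

```python
import bisect
import hashlib

def hash_key(key: str) -> int:
    # Stable hash onto a 32-bit ring; Cortex uses its own fingerprinting scheme.
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % (2 ** 32)

class Ring:
    def __init__(self, ingesters):
        # Place each ingester at a point on the ring, sorted by hash value.
        self.points = sorted((hash_key(name), name) for name in ingesters)

    def owners(self, series_fingerprint: str, replicas: int = 3):
        """Return the owner ingester plus the next (replicas - 1) ingesters on the ring."""
        h = hash_key(series_fingerprint)
        idx = bisect.bisect_left(self.points, (h, ""))
        return [self.points[(idx + i) % len(self.points)][1] for i in range(replicas)]

# Hypothetical ingester names; a distributor would forward the sample to all three owners.
ring = Ring(["ingester-1", "ingester-2", "ingester-3", "ingester-4", "ingester-5"])
print(ring.owners('http_requests_total{job="api"}'))
```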


Microsoft’s Immutable storage for Azure Storage Blobs, now generally available

Melisha Dsouza
21 Sep 2018
3 min read
Microsoft's new "immutable storage" feature for Azure Blobs is now generally available. Financial services organizations regulated by the Securities and Exchange Commission (SEC), Commodity Futures Trading Commission (CFTC), Financial Industry Regulatory Authority (FINRA), and others are required to retain business-related communications in a Write-Once-Read-Many (WORM), or immutable, state. This ensures that the data is non-erasable and non-modifiable for a specific retention interval. The healthcare, insurance, media, public safety, and legal services industries will also benefit a great deal from this feature.

Through configurable policies, users can only create and read blobs, not modify or delete them. There is no additional charge for using this feature; immutable data is priced in the same way as mutable data.

Read also: Microsoft introduces 'Immutable Blob Storage', a highly protected object storage for Azure

The upgrades that accompany this feature are:

#1 Regulatory compliance
Immutable storage for Azure Blobs will help financial institutions and related industries store data immutably. Microsoft will soon release a technical white paper with details on how the feature addresses regulatory requirements. Head over to the Azure Trust Center for detailed information about compliance certifications.

#2 Secure document retention
The immutable storage feature for the Azure Blobs service ensures that data cannot be modified or deleted by any user, even one with administrative privileges.

#3 Better legal hold
Users can now store sensitive information related to litigation, criminal investigations, and more in a tamper-proof state for the desired duration.

#4 Time-based retention policy support
Users can set policies to store data immutably for a specified interval of time.

#5 Legal hold policy support
When users do not know the data retention time, they can set legal holds to store data until the legal hold is cleared.

#6 Support for all blob tiers
WORM policies are independent of the Azure Blob Storage tier and apply to all tiers, so customers can store their data immutably in the most cost-optimized tier for their workloads.

#7 Blob container-level configuration
Users can configure time-based retention policies and legal hold tags at the container level. Simple container-level settings can create time-based retention policies, lock policies, extend retention intervals, set legal holds, clear legal holds, and so on.

17a-4 LLC, Commvault, HubStor, and Archive2Azure are among the Microsoft partners that support Azure Blob immutable storage. To know how to get started with this feature, head over to the Microsoft blog.

Why did last week's Azure cloud outage happen? Here's Microsoft's Root Cause Analysis Summary
Microsoft announces Azure DevOps, makes Azure pipelines available on GitHub Marketplace
Machine Learning as a Service (MLaaS): How Google Cloud Platform, Microsoft Azure, and AWS are democratizing Artificial Intelligence
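To make the semantics of #4 and #5 concrete, here is a small Python sketch of how a WORM container would evaluate write and delete requests. This is purely illustrative of the behaviour described above; it does not use or represent the Azure Storage API, and all class and blob names are invented for the example.

```python
from datetime import datetime, timedelta, timezone

class WormContainer:
    """Toy model of a container with a time-based retention policy and legal holds."""

    def __init__(self, retention_days: int):
        self.retention = timedelta(days=retention_days)
        self.legal_holds = set()   # active legal hold tags
        self.blobs = {}            # blob name -> creation time

    def put(self, name: str, now=None):
        if name in self.blobs:
            raise PermissionError("blob already exists and cannot be overwritten")
        self.blobs[name] = now or datetime.now(timezone.utc)

    def delete(self, name: str, now=None):
        now = now or datetime.now(timezone.utc)
        if self.legal_holds:
            raise PermissionError("legal hold active; deletion blocked")
        if now - self.blobs[name] < self.retention:
            raise PermissionError("retention interval not elapsed; deletion blocked")
        del self.blobs[name]

c = WormContainer(retention_days=365)
c.put("trade-report-2018-09-21.csv")
# c.delete("trade-report-2018-09-21.csv")  # would raise until the retention year has passed
```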


Oracle releases open source and commercial licenses for Java 11 and later

Savia Lobo
13 Sep 2018
3 min read
Oracle announced that it will provide JDK releases under two combinations of licenses, an open source license and a commercial license:

Under the open source GNU General Public License v2, with the Classpath Exception (GPLv2+CPE)
Under a commercial license for those using the Oracle JDK as part of an Oracle product or service, or who do not wish to use open source software

These combinations will replace the historical BCL (Binary Code License for Oracle Java SE technologies), which had a combination of free and paid commercial terms. The BCL has been the primary license for Oracle Java SE technologies for well over a decade. It historically contained 'commercial features' that were not available in OpenJDK builds. However, over the past year, Oracle has contributed these features to the OpenJDK Community, including Java Flight Recorder, Java Mission Control, Application Class-Data Sharing, and ZGC. From Java 11 onwards, therefore, Oracle JDK builds and OpenJDK builds will be essentially identical.

Minute differences between Oracle JDK 11 and OpenJDK

Oracle JDK 11 emits a warning when using the -XX:+UnlockCommercialFeatures option, whereas in OpenJDK builds this option results in an error. This difference remains in order to make it easier for users of Oracle JDK 10 and earlier releases to migrate to Oracle JDK 11 and later.

The javac --release command behaves differently for the Java 9 and Java 10 targets, because in those releases the Oracle JDK contained some additional modules that were not part of the corresponding OpenJDK releases. Some of them are:

javafx.base
javafx.controls
javafx.fxml
javafx.graphics
javafx.media
javafx.web

This difference remains in order to provide a consistent experience for specific kinds of legacy use. These modules are either now available separately as part of OpenJFX, are now in both OpenJDK and the Oracle JDK because they were commercial features that Oracle contributed to OpenJDK (e.g., Flight Recorder), or were removed from Oracle JDK 11 (e.g., JNLP).

The Oracle JDK has always required third party cryptographic providers to be signed by a known certificate, while the cryptography framework in OpenJDK has an open cryptographic interface, meaning it does not restrict which providers can be used. Oracle JDK 11 will continue to require a valid signature, and Oracle OpenJDK builds will continue to allow the use of either a valid signature or an unsigned third party crypto provider.

Read more about this news in detail on the Oracle blog.

State of OpenJDK: Past, Present and Future with Oracle
Oracle announces a new pricing structure for Java
Oracle reveals issues in Object Serialization. Plans to drop it from core Java


Why did last week’s Azure cloud outage happen? Here’s Microsoft’s Root Cause Analysis Summary.

Prasad Ramesh
12 Sep 2018
3 min read
Earlier this month, Microsoft Azure was experiencing problems that left users unable to access its cloud services. The outage in South Central US affected several Azure services and caused them to go offline for U.S. users. The reason for the outage was stated as "severe weather", and Microsoft has been conducting a root cause analysis (RCA) to find out the exact cause. Many services went offline due to a cooling system failure, which caused the servers to overheat and turn themselves off.

What did the RCA reveal about the Azure outage?

High energy storms associated with Hurricane Gordon hit the southern area of Texas near Microsoft Azure's data centers for South Central US. Many data centers were affected and experienced voltage fluctuations. Lightning-induced electrical activity caused significant voltage swells, which in turn caused a portion of one data center to switch to generator power. The power swells also shut down the mechanical cooling systems despite surge suppressors being in place. With the cooling systems offline, temperatures exceeded the thermal buffer within the cooling system. The safe operational temperature threshold was exceeded, which initiated an automated shutdown of devices. The shutdown mechanism is installed to preserve infrastructure and data integrity, but in this incident temperatures increased so quickly in some areas of the datacenter that hardware was damaged before a shutdown could be initiated. Many storage servers and some network devices and power units were damaged.

Microsoft is taking steps to prevent further damage as the storms are still active in the area. They are switching the remaining data centers to generator power to stabilize the power supply. For recovery of damaged units, the first step was to recover the Azure Software Load Balancers (SLBs) for storage scale units. The next step was to recover the storage servers and the data on them by replacing failed components and migrating data to healthy storage units while validating that no data was corrupted. The Azure website also states that “Impacted customers will receive a credit pursuant to the Microsoft Azure Service Level Agreement, in their October billing statement.”

A detailed analysis will be available in the coming weeks. For more details on the RCA and customer impact, visit the Azure website.

Real clouds take out Microsoft's Azure Cloud; users, developers suffer indefinite Azure outage
Microsoft Azure's new governance DApp: An enterprise blockchain without mining
Microsoft Azure now supports NVIDIA GPU Cloud (NGC)

Amazon announces AWS Lambda Support for PowerShell Core 6.0

Melisha Dsouza
12 Sep 2018
2 min read
In a post yesterday, the AWS Developer team announced that AWS Lambda support is now available for PowerShell Core 6.0. Users can now execute PowerShell scripts and functions in response to Lambda events.

Why should developers look forward to this upgrade?

The AWS Tools for PowerShell allow developers and administrators to manage their AWS services and resources in the PowerShell scripting environment. Users can manage their AWS resources with the same PowerShell tools they use to manage Windows, Linux, and macOS environments. These tools let them perform many of the same actions available in the AWS SDK for .NET, and they can be accessed from the command line for quick tasks, for example controlling Amazon EC2 instances. The PowerShell scripting language composes scripts to automate AWS service management, and with direct access to AWS services from PowerShell, management scripts can take advantage of everything the AWS cloud has to offer. The AWS Tools for Windows PowerShell and AWS Tools for PowerShell Core are flexible in handling credentials, including support for the AWS Identity and Access Management (IAM) infrastructure.

To use this support, it is necessary to set up the appropriate development environment as shown below.

Set up the development environment

This can be done in a few simple steps:
1. Set up the correct version of PowerShell.
2. Ensure Visual Studio Code is configured for PowerShell Core 6.0.
3. PowerShell Core is built on top of .NET Core, so install the .NET Core 2.1 SDK.
4. Head over to the PowerShell Gallery and install the AWSLambdaPSCore module.

The module provides users with cmdlets to author and publish PowerShell-based Lambda functions (the full list is shown in the AWS blog).

You can head over to the AWS blog for detailed steps on how to use the Lambda support for PowerShell. The blog gives readers a simple example of how to execute a PowerShell script that ensures the Remote Desktop (RDP) port is not left open on any of the EC2 security groups.

How to Run Code in the Cloud with AWS Lambda
Amazon hits $1 trillion market value milestone yesterday, joining Apple Inc
Getting started with Amazon Machine Learning workflow [Tutorial]
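Once a PowerShell-based function is published, it can be invoked like any other Lambda function. The following is a minimal sketch using the boto3 Python SDK; the function name and payload are hypothetical, and the PowerShell-specific authoring and publishing steps (via the AWSLambdaPSCore module) are covered in the AWS blog.

```python
import json

import boto3

# Hypothetical function name; the function itself would have been authored in
# PowerShell and published with the AWSLambdaPSCore module.
FUNCTION_NAME = "CheckRdpSecurityGroups"

client = boto3.client("lambda", region_name="us-east-1")

response = client.invoke(
    FunctionName=FUNCTION_NAME,
    InvocationType="RequestResponse",              # synchronous invocation
    Payload=json.dumps({"port": 3389}).encode(),   # example event payload
)

print(json.loads(response["Payload"].read()))
```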


Dr. Fei-Fei Li, Google's AI Cloud head, steps down amidst speculations; Dr. Andrew Moore to take her place

Melisha Dsouza
11 Sep 2018
4 min read
Yesterday, Diane Greene, the CEO of Google Cloud, announced in a blog post that Chief Artificial Intelligence Scientist Dr. Fei-Fei Li will be replaced by Dr. Andrew Moore, dean of the School of Computer Science at Carnegie Mellon University, at the end of this year. The blog further mentions that, as originally planned, Dr. Fei-Fei Li will be returning to her professorship at Stanford and, in the meantime, will transition to being an AI/ML Advisor for Google Cloud. The timing of the transition, following the controversies surrounding Google and the Pentagon's Project Maven, is not lost on many.

Flashback on the 'Project Maven' protest and its outcry

In March 2017 it was revealed that Google Cloud, headed by Greene, had signed a secret $9m contract with the United States Department of Defense called 'Project Maven'. The project aimed to develop an AI system that could help recognize people and objects captured in military drone footage. The contract was crucial to the Google Cloud Platform gaining a key US government FedRAMP authorization, and the project was expected to help Google find future government work worth potentially billions of dollars. Planned for use for non-offensive purposes only, Project Maven also had the potential to expand to a $250m deal. Google provided the Department of Defense with its TensorFlow APIs to assist in object recognition, which the Pentagon believed would eventually turn its stores of video into "actionable intelligence".

In September 2017, in a leaked email reviewed by The New York Times, Scott Frohman, Google's head of defense and intelligence sales, asked Dr. Li, Google Cloud AI's leader and Chief Scientist, for directions on the “burning question” of how to publicize this news to the masses. She replied:

“Avoid at ALL COSTS any mention or implication of AI. Weaponized AI is probably one of the most sensitized topics of AI — if not THE most. This is red meat to the media to find all ways to damage Google.”

As predicted by Dr. Li, the project was met with outrage by more than 3,000 Google employees who believed that Google shouldn't be involved in any military work and that algorithms have no place in identifying potential targets. This caused a rift in Google's workforce, fueled heated staff meetings and internal exchanges, and prompted some employees to resign. Many employees were "deeply concerned" that the data collected by Google could be integrated with military surveillance data for targeted killing. Fast forward to June 2018, when Google stated that it would not renew its contract (due to expire in 2019) with the Pentagon.

Dr. Li's timeline at Google

During her two-year tenure, Dr. Li oversaw some remarkable work in accelerating the adoption of AI and ML by developers and Google Cloud customers. Considered one of the most talented machine learning researchers in the world, Dr. Li has published more than 150 scientific articles in top-tier journals and conferences, including Nature, the Journal of Neuroscience, and the New England Journal of Medicine. She is the inventor of ImageNet and the ImageNet Challenge, a large-scale effort contributing to the latest developments in computer vision and deep learning in AI. She has been a keynote or invited speaker at many conferences, has received prestigious awards for innovation and technology, and has been featured in many magazines.

In addition to her contributions to the world of tech, Dr. Li is also a co-founder of Stanford's renowned SAILORS outreach program for high school girls and the national non-profit AI4ALL.

The controversial email from Dr. Li may lead one to wonder whether the transition was made as a result of the events of 2017. However, no official statement has been released by Google or Dr. Li on why she is moving on. Head over to Google's blog for the official announcement of this news.

Google CEO Sundar Pichai won't be testifying to Senate on election interference
Bloomberg says Google, Mastercard covertly track customers' offline retail habits via a secret million dollar ad deal
Epic games CEO calls Google "irresponsible" for disclosing the security flaw in Fortnite Android Installer before patch was ready


OpenSky is now a part of the Alibaba family

Bhagyashree R
06 Sep 2018
2 min read
Yesterday, Chris Keane, the General Manager of OpenSky, announced that OpenSky has been acquired by the Alibaba Group. OpenSky is a network of businesses that empower modern global trade for SMBs and help people discover, buy, and share unique goods that match their individual taste.

OpenSky will join Alibaba Group in two capacities: one part of OpenSky's team will become part of Alibaba.com's North America B2B business to serve US-based buyers and suppliers, while the other will become a wholly-owned subsidiary of Alibaba Group consisting of OpenSky's marketplace and SaaS businesses.

In 2015, Alibaba Group acquired a minority stake in OpenSky. In 2017, OpenSky collaborated with Alibaba's B2B leadership team to solve the challenges faced by small businesses. According to Chris, both companies share a common interest, which is to help small businesses: “It was thrilling to discover that our counterparts at Alibaba share our obsession with helping SMBs. We’ve quickly aligned on a global vision to provide access to markets and resources for businesses and entrepreneurs, opening new doors and knocking down obstacles.”

In this announcement, Chris also mentioned that they will be coming up with powerful concepts to serve small businesses everywhere in the near future. To know more, read the official announcement on LinkedIn.

Alibaba Cloud partners with SAP to provide a versatile, one-stop cloud computing environment
Digitizing the offline: How Alibaba's FashionAI can revive the waning retail industry
Why Alibaba cloud could be the dark horse in the public cloud race


Yandex launched an intelligent public cloud platform, Yandex.Cloud

Savia Lobo
06 Sep 2018
2 min read
Yesterday, Russia's largest search engine, Yandex, launched its intelligent public cloud platform, Yandex.Cloud. The platform has been tested by more than 50 Russian and international companies since April. Yandex.Cloud is easy to use and offers flexible pricing with a pay-per-use model. It also has easy access to all of Yandex's technologies, which makes it easy for companies to complement an existing IT infrastructure or even use the platform as an alternative to it. Yandex.Cloud will help companies and industries of different sizes boost their efficiency or expand their business without large-scale investment.

Yandex plans to roll out the Yandex.Cloud platform gradually, first to users of Yandex services for business, and then to everyone by the end of 2018. It enables companies to store and use databases containing personal data in Russia, as required by law.

Features of the Yandex.Cloud public cloud platform

A scalable virtual infrastructure
The new intelligent public cloud platform includes a scalable virtual infrastructure with multiple management options: users can manage it from a graphical interface or the command line. It also includes developer tools for popular programming languages such as Python and Go.

Automated services
Labour-intensive management tasks for popular database systems such as PostgreSQL, ClickHouse (Yandex's open source high-performance database management system), and MongoDB have been automated.

AI-based Yandex services
Yandex.Cloud includes AI-based services such as SpeechKit speech recognition and synthesis and Yandex.Translate machine translation.

Yan Leshinsky, Head of Yandex.Cloud, said: “Yandex has an entire ecosystem of successful products and services that are used by millions of people on a daily basis. Yandex.Cloud provides access to the same infrastructure and technologies that we use to power Yandex services, creating unique opportunities for any business to develop their products and services based on this platform.”

To know more about Yandex.Cloud, visit its official website.

Ansible 2 for automating networking tasks on Google Cloud Platform [Tutorial]
Machine learning APIs for Google Cloud Platform
Cloud Filestore: A new high-performance storage option by Google Cloud Platform

Microsoft Azure now supports NVIDIA GPU Cloud (NGC)

Vijin Boricha
31 Aug 2018
2 min read
Yesterday, Microsoft announced NVIDIA GPU Cloud (NGC) support on its Azure platform. With this, data scientists, researchers, and developers can build, test, and deploy GPU computing projects on Azure. Users can run containers from NGC on Azure, giving them access to on-demand GPU computing that scales as per their requirements and eliminating the complexity of software integration and testing.

The need for NVIDIA GPU Cloud (NGC)

It is challenging and time-consuming to build and test reliable software stacks to run popular deep learning software such as TensorFlow, Microsoft Cognitive Toolkit, PyTorch, and NVIDIA TensorRT. This is due to operating system-level and frequently changing framework dependencies. Finding, installing, and testing the correct dependencies is quite a hassle, as it has to be done in a multi-tenant environment and across many systems. NGC eliminates these complexities by offering pre-configured containers with GPU-accelerated software.

Users can now access 35 GPU-accelerated containers for deep learning software, high-performance computing applications, high-performance visualization tools, and much more, enabled to run on the following Microsoft Azure instance types with NVIDIA GPUs:

NCv3 (1, 2 or 4 NVIDIA Tesla V100 GPUs)
NCv2 (1, 2 or 4 NVIDIA Tesla P100 GPUs)
ND (1, 2 or 4 NVIDIA Tesla P40 GPUs)

According to NVIDIA, these same NGC containers also work across Azure instance types with different types or quantities of GPUs. Using NGC containers with Azure is quite easy: users just have to sign up for a free NGC account, then visit the Microsoft Azure Marketplace to find the pre-configured NVIDIA GPU Cloud Image for Deep Learning and high-performance computing. Once the NVIDIA GPU instance is launched on Azure, the desired containers can be pulled from the NGC registry into the running instance. You can find detailed steps for setting up NGC in the Using NGC with Microsoft Azure documentation.

Microsoft Azure's new governance DApp: An enterprise blockchain without mining
NVIDIA leads the AI hardware race. But which of its GPUs should you use for deep learning?
NVIDIA announces pre-orders for the Jetson Xavier Developer Kit, an AI chip for autonomous machines, at $2,499


Storj Labs’ new Open Source Partner Program: to generate revenue opportunities for open source companies

Melisha Dsouza
30 Aug 2018
3 min read
At the Linux Foundation's Open Source Summit in Vancouver, Storj Labs, a leader in decentralized cloud storage, launched its Open Source Partner Program. The program will enable open-source projects to generate revenue when their users store data in the cloud. It was launched with the aim of bridging the "major economic disconnect between the 24-million total open-source developers and the $180 billion cloud market", as stated by Ben Golub, Storj's executive chairman and interim CEO.

How does the Open Source Partner Program work?

Open-source projects simply need to integrate Storj into their existing cloud application infrastructure. Since Storj exposes an Amazon Web Services (AWS) S3 compliant interface, this integration should be easy. Storj provides blockchain-encrypted, distributed cloud storage, which facilitates data security, improves reliability, and enhances performance compared to traditional cloud storage approaches. Client-side encryption ensures that data can only be accessed by the data owners. While harvesting all these benefits, open-source projects that use the Storj network will be provided with a continuous revenue stream: 60% of gross revenue will be given to storage farmers and 40% will be split amongst open-source developers. Through simple Storj data connectors integrated with their platforms, Storj can track data storage usage. Partners will be given help desk support and tools to test the network's performance and capabilities.

What's in it for open source companies?

Monetization has always been a challenge for open source companies; they ultimately require revenue to sustain themselves. Open source drives a sizable majority of the $200 billion-plus cloud computing market, yet only a small share of that revenue currently makes its way directly back to open source projects and companies. The Open Source Partner Program aims to help open source companies, even the ones that only provide free products, grow and meet their financial goals.

What's in it for Storj?

While this revenue generation program will benefit open source companies, it can also be viewed as an effective marketing strategy for Storj. Open source projects are all the rage these days, and the more these companies turn to Storj for decentralized cloud-based solutions, the more popularity and recognition Storj gets. Storj, as well as open source companies, recognizes the importance of openness, decentralization, and broad-based individual empowerment, which is why this program strikes a balance that supports open source projects.

Storj Labs has already won over ten major open-source partners, including Confluent, Couchbase, FileZilla, MariaDB, MongoDB, and Nextcloud, to join its Open Source Partner Program. These partners will be given early, immediate access to the V3 network private alpha. You can get a complete overview of the program in Storj's blog post.

5 reasons why your business should adopt cloud computing
Demystifying Clouds: Private, Public, and Hybrid clouds
Google's second innings in China: Exploring cloud partnerships with Tencent and others
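Because the program builds on Storj's S3-compatible interface, an integration can look like ordinary S3 client code pointed at a different endpoint. Here is a minimal Python sketch using boto3; the endpoint URL, bucket name, and credentials are placeholders for illustration, not real Storj values.

```python
import boto3

# Placeholder endpoint and credentials; a real integration would use the
# gateway address and access keys issued for the Storj network.
s3 = boto3.client(
    "s3",
    endpoint_url="https://gateway.example-storj-host.io",
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# Standard S3 calls work unchanged against an S3-compatible backend.
s3.create_bucket(Bucket="backups")
s3.put_object(Bucket="backups", Key="db-dump.sql", Body=b"-- dump contents --")

for obj in s3.list_objects_v2(Bucket="backups").get("Contents", []):
    print(obj["Key"], obj["Size"])
```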


Google introduces Cloud HSM beta hardware security module for crypto key security

Prasad Ramesh
23 Aug 2018
2 min read
Google has rolled out a beta of Cloud HSM, a hosted hardware security module aimed at cryptographic key security. Cloud HSM allows better security for customers without them having to worry about operational overhead.

Cloud HSM is a cloud-hosted hardware security module that allows customers to store encryption keys. It uses Federal Information Processing Standard (FIPS) 140-2 Level 3 security, a U.S. government security standard for cryptographic modules for non-military use, certified for use in financial and healthcare institutions. An HSM is a specialized hardware component designed to encrypt small data blocks, as opposed to the larger blocks that are managed with the Key Management Service (KMS).

Cloud HSM is available now and is fully managed by Google, meaning all the patching, scaling, cluster management, and upgrades are done automatically with no downtime. The customer has full control of the Cloud HSM service via the Cloud KMS APIs. Il-Sung Lee, Product Manager at Google, stated: “And because the Cloud HSM service is tightly integrated with Cloud KMS, you can now protect your data in customer-managed encryption key-enabled services, such as BigQuery, Google Compute Engine, Google Cloud Storage and DataProc, with a hardware-protected key.”

In addition to Cloud HSM, Google has also released betas of asymmetric key support for both Cloud KMS and Cloud HSM. Users can now create a variety of asymmetric keys for decryption or signing operations, which means they can store keys used for PKI or code signing in a Google Cloud managed keystore. “Specifically, RSA 2048, RSA 3072, RSA 4096, EC P256, and EC P384 keys will be available for signing operations, while RSA 2048, RSA 3072, and RSA 4096 keys will also have the ability to decrypt blocks of data.”

For more information visit the Google Cloud blog, and for HSM pricing visit the Cloud HSM page.

Google Cloud Next: Fei-Fei Li reveals new AI tools for developers
Machine learning APIs for Google Cloud Platform
Top 5 cloud security threats to look out for in 2018
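Since Cloud HSM is driven through the Cloud KMS APIs, creating an HSM-protected asymmetric signing key looks like a normal KMS key creation with an HSM protection level. The following is a minimal sketch assuming the google-cloud-kms Python client in its v2-style request form; the project, location, key ring, and key names are placeholders, and exact request shapes may differ across client library versions.

```python
from google.cloud import kms

client = kms.KeyManagementServiceClient()

# Placeholder project, location, and key ring names.
parent = client.key_ring_path("my-project", "us-east1", "my-key-ring")

# Ask for an EC P-256 signing key backed by an HSM rather than software.
crypto_key = {
    "purpose": kms.CryptoKey.CryptoKeyPurpose.ASYMMETRIC_SIGN,
    "version_template": {
        "algorithm": kms.CryptoKeyVersion.CryptoKeyVersionAlgorithm.EC_SIGN_P256_SHA256,
        "protection_level": kms.ProtectionLevel.HSM,
    },
}

created = client.create_crypto_key(
    request={"parent": parent, "crypto_key_id": "signing-key", "crypto_key": crypto_key}
)
print("created key:", created.name)
```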

Zeit releases Serverless Docker in beta

Richard Gall
15 Aug 2018
3 min read
Zeit, the organization behind the cloud deployment software Now, yesterday launched Serverless Docker in beta. The concept was first discussed by the Zeit team at Zeit Day 2018 back in April, but it's now available to use and promises to radically speed up deployments for engineers. In a post published on the Zeit website yesterday, the team listed some of the key features of this new capability, including:

An impressive 10x-20x improvement in cold boot performance (in practice this means cold boots can happen in less than a second).
A new slot configuration property that defines resource allocation in terms of CPU and memory, allowing you to fit an application within the set of constraints that are most appropriate for it.
Support for HTTP/2.0 and WebSocket connections to deployments, which means you no longer need to rewrite applications as functions.

The key point to remember with this release, according to Zeit, is that "Serverless can be a very general computing model. One that does not require new protocols, new APIs and can support every programming language and framework without large rewrites."

Read next: Modern Cloud Native architectures: Microservices, Containers, and Serverless – Part 1

What's so great about Serverless Docker?

Clearly, speed is one of the most exciting things about Serverless Docker. But there's more to it than that: it also offers a great developer experience. Johannes Schickling, co-founder and CEO of Prisma (a GraphQL data abstraction layer), said that, with Serverless Docker, Zeit "is making compute more accessible. Serverless Docker is exactly the abstraction I want for applications."

https://twitter.com/schickling/status/1029372602178039810

Others on Twitter were also complimentary about Serverless Docker's developer experience, with one person comparing it favourably with AWS: "their developer experience just makes me SO MAD at AWS in comparison."

https://twitter.com/simonw/status/1029452011236777985

Combining serverless and containers

One of the reasons people are excited about Zeit's release is that it provides the next step in serverless. But it also brings containers into the picture. Typically, much of the conversation around software infrastructure over the last year or so has viewed serverless and containers as two options to choose from, rather than two things that can be used together. It's worth remembering that Zeit's product has largely been developed alongside its customers that use Now: "This beta contains the lessons and the experiences of a massively distributed and diverse user base, that has completed millions of deployments, over the past two years."

Eager to demonstrate how Serverless Docker works for a wide range of use cases, Zeit has put together a long list of examples of Serverless Docker in action on GitHub. You can find them here.

Read next:
A serverless online store on AWS could save you money. Build one.
Serverless computing wars: AWS Lambdas vs Azure Functions


CNCF Sandbox, the home for evolving cloud native projects, accepts Google’s OpenMetrics Project

Fatema Patrawala
14 Aug 2018
3 min read
The Cloud Native Computing Foundation (CNCF) has accepted OpenMetrics, an open source specification for metrics exposition, into the CNCF Sandbox, a home for early stage and evolving cloud native projects. Google cloud engineers and other vendors had been working on this for the past several months, and it has finally been accepted by CNCF. Engineers are also working on ways to support OpenMetrics in OpenCensus, a set of uniform tracing and stats libraries that work with multi-vendor services.

OpenMetrics will bring together the maturity and adoption of Prometheus and Google's background in working with stats at extreme scale. It will also bring in the experience and needs of a variety of projects, vendors, and end-users who are aiming to move away from the hierarchical way of monitoring and to enable users to transmit metrics at scale. The open source initiative, focused on creating a neutral metrics exposition format, will provide a sound data model for current and future needs of users. It will embed into a standard that is an evolution of the widely-adopted Prometheus exposition format. While there are numerous monitoring solutions available today, many do not focus on metrics and are based on old technologies with proprietary, hard-to-implement, and hierarchical data models.

“The key benefit of OpenMetrics is that it opens up the de facto model for cloud native metric monitoring to numerous industry leading implementations and new adopters. Prometheus has changed the way the world does monitoring and OpenMetrics aims to take this organically grown ecosystem and transform it into a basis for a deliberate, industry-wide consensus, thus bridging the gap to other monitoring solutions like InfluxData, Sysdig, Weave Cortex, and OpenCensus. It goes without saying that Prometheus will be at the forefront of implementing OpenMetrics in its server and all client libraries. CNCF has been instrumental in bringing together cloud native communities. We look forward to working with this community to further cloud native monitoring and continue building our community of users and upstream contributors,” says Richard Hartmann, Technical Architect at SpaceNet, Prometheus team member, and founder of OpenMetrics.

OpenMetrics contributors include AppOptics, Cortex, Datadog, Google, InfluxData, OpenCensus, Prometheus, Sysdig, and Uber, among others.

“Google has a history of innovation in the metric monitoring space, from its early success with Borgmon, which has been continued in Monarch and Stackdriver. OpenMetrics embodies our understanding of what users need for simple, reliable and scalable monitoring, and shows our commitment to offering standards-based solutions. In addition to our contributions to the spec, we’ll be enabling OpenMetrics support in OpenCensus,” says Sumeer Bhola, Lead Engineer on Monarch and Stackdriver at Google.

For more information about OpenMetrics, please visit openmetrics.io. To quickly enable trace and metrics collection from your application, please visit opencensus.io.

5 reasons why your business should adopt cloud computing
Alibaba Cloud partners with SAP to provide a versatile, one-stop cloud computing environment
Modern Cloud Native architectures: Microservices, Containers, and Serverless – Part 1
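For a sense of what a metrics exposition endpoint looks like in practice, here is a minimal sketch using the Python prometheus_client library, whose text format is the widely-adopted starting point that OpenMetrics evolves. The metric name, label, and port are arbitrary choices for the example.

```python
import random
import time

from prometheus_client import Counter, start_http_server

# Arbitrary example metric; the exposition served at /metrics is the Prometheus
# text format that the OpenMetrics specification builds on.
REQUESTS = Counter("demo_requests_total", "Total demo requests handled", ["status"])

if __name__ == "__main__":
    start_http_server(8000)  # metrics exposed at http://localhost:8000/metrics
    while True:
        REQUESTS.labels(status=random.choice(["ok", "error"])).inc()
        time.sleep(1)
```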