
Tech News - Cloud & Networking

376 Articles

Introducing Platform9 Managed Kubernetes Service

Amrata Joshi
04 Feb 2019
3 min read
Today, the team at Platform9, a company known for its SaaS-managed hybrid cloud, introduced Platform9 Managed Kubernetes (PMK), a fully managed, enterprise-grade Kubernetes service that runs on VMware with a full SLA guarantee. It enables enterprises to deploy and run Kubernetes easily, without management overhead or advanced Kubernetes expertise. It offers enterprise-grade capabilities including multi-cluster operations, zero-touch upgrades, high availability, monitoring, and more, all handled automatically and backed by an SLA.

PMK is part of Platform9's hybrid cloud solution, which helps organizations centrally manage VMs, containers, and serverless functions in any environment. Enterprises can support Kubernetes at scale alongside their traditional VMs, legacy applications, and serverless functions.

Features of Platform9 Managed Kubernetes

Self-service, cloud experience
IT operations and VMware administrators can now offer developers a simple, self-service provisioning and automated management experience. Multiple Kubernetes clusters can be deployed at the click of a button and operated under strict SLAs.

Run Kubernetes anywhere
PMK lets organizations run Kubernetes instantly, anywhere, and delivers centralized visibility and management across all Kubernetes environments, whether on-premises, in the public cloud, or at the edge. This helps organizations rein in shadow IT and VM/container sprawl, ensure compliance, improve utilization, and reduce costs across all infrastructure.

Speed
PMK lets enterprises get Kubernetes running on VMware in less than an hour and eliminates the operational complexity of Kubernetes at scale. It helps enterprises modernize their VMware environments without any hardware or configuration changes.

Open ecosystem
By delivering open source Kubernetes on VMware without code forks, enterprises can benefit from the open source community and the full range of Kubernetes-related services and applications, with portability across environments.

Sirish Raghuram, co-founder and CEO of Platform9, said, “Kubernetes is the #1 enabler for cloud-native applications and is critical to the competitive advantage for software-driven organizations today. VMware was never designed to run containerized workloads, and integrated offerings in the market today are extremely clunky, hard to implement and even harder to manage. We’re proud to take the pain out of Kubernetes on VMware, delivering a pure open source-based, Kubernetes-as-a-Service solution that is fully managed, just works out of the box, and with an SLA guarantee in your own environment.”

To learn more about delivering Kubernetes on VMware, check out the demo video.

Platform9 announces a new release of Fission.io, the open source, Kubernetes-native Serverless framework
CNCF releases 9 security best practices for Kubernetes, to protect a customer’s infrastructure
GitLab 11.7 releases with multi-level child epics, API integration with Kubernetes, search filter box and more


OpenWrt 18.06.2 released with major bug fixes, updated Linux kernel and more!

Amrata Joshi
04 Feb 2019
3 min read
Last week the team at OpenWrt announced OpenWrt 18.06.2, the second service release of the stable OpenWrt 18.06 series. OpenWrt is a Linux operating system targeting embedded devices that provides a fully writable filesystem with optional package management. It serves as a complete replacement for the vendor-supplied firmware of a wide range of wireless routers and non-network devices.

What’s new in OpenWrt 18.06.2?

OpenWrt 18.06.2 brings bug fixes in the network stack and the build system, plus updates to the kernel and base packages:

- The Linux kernel has been updated to versions 4.9.152/4.14.95 (from 4.9.120/4.14.63 in 18.06.1).
- The GNU time dependency has been removed.
- Support for the bpf match has been added.
- A blank line is now inserted after the KernelPackage template to allow chaining calls.
- An INSTALL_SUID macro has been added.
- Support for enabling the rootfs/boot partition size option via tar has been added.
- Building of artifacts has been introduced.
- A package URL has been updated, and an uninitialized return value has been fixed.

Major bug fixes

- The docbook2man error has been fixed.
- Issues with the libressl build on x32 (amd64ilp32) hosts have been fixed.
- The build has been fixed without modifying Makefile.am.
- A Fedora patch has been added for crashing git-style patches.
- A syntax error has been fixed.
- Security fixes have landed for the Linux kernel, GNU patch, Glibc, BZip2, Grub, OpenSSL, and MbedTLS.
- IPv6 and network service fixes are included.

Some users are happy with this release and note that, despite small teams and budgets, the OpenWrt team has done a wonderful job powering so many routers. One comment reads, “The new release still works fine on a TP-Link TL-WR1043N/ND v1 (32MB RAM, 8MB Flash). This is an old router I got from the local reuse center for $10 a few years ago. It can handle a 100 Mbps fiber connection fine and has 5 gigabit ports. Thanks Openwrt!”

An open question is whether cheap routers limit internet speed. One user commented on Hacker News, “My internet is too fast (150 mbps) for a cheap router to effectively manage the connection, meaning that unless I pay 250€ for a router, I will just slow down my Internet needlessly.”

Read more about this news on OpenWrt's official blog post.

Mapzen, an open-source mapping platform, joins the Linux Foundation project
Remote Code Execution Flaw in APT Linux Package Manager allows man-in-the-middle attack
The Haiku operating system has released R1/beta1


Microsoft Cloud services’ DNS outage results in deleting several Microsoft Azure database records

Bhagyashree R
04 Feb 2019
2 min read
On January 29, Microsoft Cloud services including Microsoft Azure, Office 365, and Dynamics 365 suffered a major outage. Customers experienced intermittent access to Office 365, and several database records were deleted. This comes just after a major outage that prevented Microsoft 365 users in Europe from accessing their emails for an entire day.

https://twitter.com/AzureSupport/status/1090359445241061376

Users who were already logged into Microsoft services weren’t affected; however, those trying to start new sessions were unable to log in.

How did this Microsoft Azure outage happen?

According to Microsoft, the preliminary cause of the outage was a DNS issue with CenturyLink, an external DNS provider. Microsoft Azure’s status page read, “Engineers identified a DNS issue with an external DNS provider”. CenturyLink said in a statement that its DNS services experienced disruption due to a software defect, which affected connectivity to a customer’s cloud resources.

Along with the authentication issues, the outage also caused the deletion of users’ live data stored in Transparent Data Encryption (TDE) databases in Microsoft Azure. TDE databases encrypt information dynamically and decrypt it when customers access it; because the data is stored in encrypted form, intruders cannot read the database. For encryption, many Azure users store their own encryption keys in Microsoft’s Key Vault key management system. The deletion was triggered by a script that automatically drops TDE database tables when their corresponding keys can no longer be accessed in the Key Vault.

Microsoft was able to restore the tables from a five-minute snapshot backup, but customers whose transactions were processed within five minutes of the table drop were expected to raise a support ticket asking for a copy of the database.

Read more about Microsoft’s Azure outage in detail on ZDNet.
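The failure mode described here, a cleanup script that cannot tell "key deleted" apart from "key vault unreachable", can be sketched in a few lines. Everything below (the KeyVault class, sweep_tde_tables, the table layout) is invented for illustration and is not Azure's actual implementation:

```python
import copy

class KeyVault:
    """Hypothetical stand-in for an external key management service."""
    def __init__(self):
        self._keys = {}
        self.reachable = True  # flips to False during a DNS outage

    def put(self, key_id, key):
        self._keys[key_id] = key

    def get(self, key_id):
        if not self.reachable:
            raise ConnectionError("key vault unreachable (DNS outage)")
        return self._keys[key_id]

def sweep_tde_tables(tables, vault):
    """Drop any encrypted table whose key can no longer be fetched.

    The bug: an outage makes the vault unreachable, the sweep misreads
    that as 'key deleted', and live tables are dropped.
    """
    for name in list(tables):
        try:
            vault.get(tables[name]["key_id"])
        except ConnectionError:
            del tables[name]  # the destructive step

vault = KeyVault()
vault.put("k1", b"secret-key")
tables = {"orders": {"key_id": "k1", "rows": [1, 2, 3]}}
snapshot = copy.deepcopy(tables)   # the five-minute snapshot backup

vault.reachable = False            # DNS outage begins
sweep_tde_tables(tables, vault)
assert "orders" not in tables      # live data gone

tables = copy.deepcopy(snapshot)   # restore from snapshot
```

A sweep that only dropped tables on a confirmed key deletion (a lookup that succeeds in reaching the vault but finds no key) would have survived the outage; treating any access failure as deletion is what made the DNS issue destructive.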
Microsoft announces Internet Explorer 10 will reach end-of-life by January 2020
Outage in the Microsoft 365 and Gmail made users unable to log into their accounts
Microsoft Office 365 now available on the Mac App Store


Former Google Cloud CEO joins Stripe board just as Stripe joins the global Unicorn Club

Bhagyashree R
31 Jan 2019
2 min read
Stripe, the payments infrastructure company, has received a whopping $100 million in funding from Tiger Global Management, putting its valuation at $22.5 billion, as reported by The Information on Tuesday. Last September, it secured $245 million in a funding round also led by Tiger Global Management.

Founded in 2010 by the Irish brothers Patrick and John Collison, Stripe has become one of the most valuable U.S. “unicorns”, a term used for private firms worth more than $1 billion. The company also boasts an impressive list of clients, recently adding Google and Uber to its stable of users.

The company is now planning to expand its platform by launching a point-of-sale payments terminal package targeted at online retailers making the jump to offline. A Stripe spokesperson told CNBC, “Stripe is rapidly scaling internationally, as well as extending our platform into issuing, global fraud prevention, and physical stores with Stripe Terminal. The follow-on funding gives us more leverage in these strategic areas.”

The company is also expanding its team. On Tuesday, Patrick Collison announced that Diane Greene, a member of Alphabet’s board of directors, will be joining Stripe’s board of directors. Joining alongside Greene are Michael Moritz, a partner at Sequoia Capital, Michelle Wilson, former general counsel at Amazon, and Jonathan Chadwick, former CFO of VMware, McAfee, and Skype.

https://twitter.com/patrickc/status/1090386301642141696

In addition to Tiger Global Management, the startup is also backed by various other investors including Sequoia Capital, Khosla Ventures, Andreessen Horowitz, and PayPal co-founders Peter Thiel, Max Levchin, and Elon Musk.

For more details, read the full story on The Information website.
PayPal replaces Flow with TypeScript as their type checker for every new web app
After BitPay, Coinbase bans Gab accounts and its founder, Andrew Torba
Social media platforms, Twitter and Gab.com, accused of facilitating recent domestic terrorism in the U.S.


Dropbox purchases workflow and eSignature startup ‘HelloSign’ for $230M

Melisha Dsouza
29 Jan 2019
2 min read
Dropbox has purchased HelloSign, a San Francisco-based private company that provides lightweight document workflow and eSignature services. Dropbox is paying $230 million for the deal, which is expected to close in Q1.

Dropbox co-founder and CEO Drew Houston said in a statement, “HelloSign has built a thriving business focused on eSignature and document workflow products that their users love. Together, we can deliver an even better experience to Dropbox users, simplify their workflows, and expand the market we serve”.

Dropbox’s SVP of engineering, Quentin Clark, told TechCrunch that HelloSign’s workflow capabilities, added in 2017, were key to the purchase. He called HelloSign’s investment in APIs ‘unique’ and said its workflow products are aligned with the ‘broader vision’ and long-term direction Dropbox will pursue. This could possibly mean extending Dropbox’s storage capabilities in the long run.

The deal extends a partnership Dropbox established with HelloSign last year to use two HelloSign technologies to offer eSignature and electronic fax solutions to Dropbox users.

HelloSign CEO Joseph Walla says being part of Dropbox gives HelloSign access to the resources of a much larger public company, allowing it to reach a broader market than it could on a standalone basis. He stated, “Together with Dropbox, we can bring more seamless document workflows to even more customers and dramatically accelerate our impact.”

HelloSign COO Whitney Bouck said that the company will remain an independent entity and will continue to operate with its current management structure as part of the Dropbox family. She also added that all HelloSign employees will be offered employment at Dropbox as part of the deal.

You can head over to TechCrunch to know more about this announcement.
How Dropbox uses automated data center operations to reduce server outage and downtime
NGINX Hybrid Application Delivery Controller Platform improves API management, manages microservices and much more!
Microsoft acquires Citus Data with plans to create a ‘Best Postgres Experience’


Amazon launches TLS Termination support for Network Load Balancer

Bhagyashree R
25 Jan 2019
2 min read
Starting yesterday, AWS Network Load Balancers (NLB) support TLS/SSL. This new feature simplifies building secure web applications by allowing users to terminate TLS connections at an NLB. The support is fully integrated with AWS PrivateLink and is also supported by AWS CloudFormation.

https://twitter.com/colmmacc/status/1088510453767000064

Here are some of the features and benefits it comes with:

Simplified management
Using TLS at scale normally requires extra management work, such as distributing the server certificate to each backend server, and it increases the attack surface because multiple copies of the certificate exist. This TLS/SSL support provides a central management point for certificates by integrating with AWS Certificate Manager (ACM) and Identity and Access Management (IAM).

Improved compliance
The feature ships with predefined security policies. Developers can use these built-in policies to specify the cipher suites and protocol versions acceptable to their application. This helps if you are pursuing PCI or FedRAMP compliance and also allows you to achieve a perfect TLS score.

Classic upgrade
Users currently using a Classic Load Balancer for TLS termination can switch to NLB, which helps them scale quickly under increased load. Users can also assign a static IP address to their NLB and log the source IP address of requests.

Access logs
Users can enable access logs for their NLBs and direct them to the S3 bucket of their choice. These logs document the TLS protocol version, cipher suite, connection time, handshake time, and more.

To read more in detail, check out Amazon's announcement.
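What a "predefined security policy" amounts to in practice is a pinned minimum protocol version plus an allowed cipher-suite family, enforced at the TLS termination point. A minimal sketch using Python's standard ssl module (the policy name and its contents are invented for the example, not an actual AWS policy):

```python
import ssl

# Hypothetical "predefined security policy", in the spirit of the
# built-in policies described above.
POLICY_TLS12_STRICT = {
    "minimum_version": ssl.TLSVersion.TLSv1_2,
    "ciphers": "ECDHE+AESGCM",   # forward-secret, AEAD suites only
}

def make_server_context(policy):
    """Build a server-side TLS context that enforces the given policy."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.minimum_version = policy["minimum_version"]
    ctx.set_ciphers(policy["ciphers"])
    return ctx

ctx = make_server_context(POLICY_TLS12_STRICT)
assert ctx.minimum_version == ssl.TLSVersion.TLSv1_2
assert len(ctx.get_ciphers()) > 0   # the filter left usable suites
```

Centralizing this at the load balancer means one policy object governs every listener, instead of each backend server carrying its own TLS configuration and certificate copy. (Note that OpenSSL always keeps the TLS 1.3 suites alongside whatever `set_ciphers` selects.)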
Amazon is reportedly building a video game streaming service, says Information
Amazon’s Ring gave access to its employees to watch live footage of the customers, The Intercept reports
AWS introduces Amazon DocumentDB featuring compatibility with MongoDB, scalability and much more

Kata Containers 1.5 released with Firecracker support, integration improvements and IBM Z series support

Melisha Dsouza
24 Jan 2019
3 min read
Yesterday, Kata Containers 1.5 was released with a host of updates, including preliminary support for the Firecracker hypervisor, s390x architecture support, and significant integration improvements. Kata Containers is an open source project and community building a standard implementation of lightweight virtual machines (VMs) that perform like containers while providing the workload isolation and security advantages of VMs. The project is managed by the OpenStack Foundation and combines technology from Intel Clear Containers and Hyper runV.

Features of Kata Containers 1.5

#1 Firecracker support
Eric Ernst, an architecture committee member for the Kata Containers project, states that Kata Containers was designed “to support multiple hypervisor solutions”, and the new Firecracker support aims to do just that. At the AWS re:Invent conference in 2018, the AWS team released Firecracker, described as a new virtualization technology and open source project for running multi-tenant container workloads. Firecracker enables service owners to operate secure multi-tenant container-based services, combining the speed, resource efficiency, and performance of containers with the security and isolation of traditional VMs.

In Kata Containers 1.5, Firecracker can be used for feature-constrained workloads, while QEMU remains available for more advanced workloads. The blog also mentions a small limitation in Kubernetes functionality when using Kata with Firecracker: a pod's memory and CPU definitions cannot be adjusted dynamically, and because Firecracker supports only block-based storage drivers and volumes, the devicemapper storage driver is required. This is available in Kubernetes + CRI-O and Docker version 18.06. Users can expect more storage driver options soon.

Check out this screencast for an example of Kata configured in CRI-O + Kubernetes, utilizing both QEMU and Firecracker. You can head over to GitHub to understand how to get started quickly with Kata + runtimeClass in Kubernetes.

#2 s390x architecture support
Kata Containers 1.5 adds IBM Z series support. According to CIO, the IBM Z platform includes notable security features: it has a proprietary on-chip ASIC dedicated specifically to cryptographic processes, enabling all-encompassing encryption. Data stays encrypted except while it is being processed; it is decrypted only for computation and then encrypted again.

#3 containerd integration
The 1.5 release simplifies how Kata Containers integrates with containerd. Following last year's discussion about adding a shim API to containerd, the 1.5 release includes an initial implementation of this shim API. Eric Ernst says the API will result in a better interface to Kata Containers and provide the ability to directly access container-level statistics from the Kata runtime.

The Kata team plans to have several presentations on this topic at the Open Infrastructure Summit in Denver, April 29-May 1, 2019. You can head over to Eric's blog for more insights on this announcement, or to the AWS blog to know more about Firecracker support in Kata 1.5.

CNCF releases 9 security best practices for Kubernetes, to protect a customer’s infrastructure
Tumblr open sources its Kubernetes tools for better workflow integration
Implementing Azure-Managed Kubernetes and Azure Container Service [Tutorial]
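The QEMU-versus-Firecracker trade-off above reduces to a simple rule: pick Firecracker for feature-constrained workloads and fall back to QEMU when a pod needs something Firecracker lacks. A toy sketch of that decision (the feature names below are invented for illustration, not Kata configuration keys):

```python
# Capabilities the article notes Firecracker lacks in this setup:
# dynamic memory/CPU adjustment and non-block storage volumes.
FIRECRACKER_UNSUPPORTED = {"resize_memory", "resize_cpu", "non_block_volumes"}

def pick_hypervisor(required_features):
    """Return the hypervisor a Kata-1.5-style runtime might choose,
    given the set of features a workload requires."""
    if required_features & FIRECRACKER_UNSUPPORTED:
        return "qemu"
    return "firecracker"

assert pick_hypervisor(set()) == "firecracker"          # constrained workload
assert pick_hypervisor({"resize_memory"}) == "qemu"     # needs hot-resize
```

In a real cluster this choice is expressed per pod via Kubernetes runtimeClass rather than in application code; the sketch only makes the selection logic concrete.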


Idera acquires Travis CI, the open source Continuous Integration solution

Sugandha Lahoti
24 Jan 2019
2 min read
Travis CI, the popular open source continuous integration service, has been acquired by Idera. Idera offers a number of B2B software solutions ranging from database administration to application development to test management. Travis CI will join Idera’s Testing Tools division, which also includes TestRail, Ranorex, and Kiuwan.

Travis CI assured its users that the product will continue to be open source and a stand-alone solution under an MIT license. “We will continue to offer the same services to our hosted and on-premises users. With the support from our new partners, we will be able to invest in expanding and improving our core product”, said Konstantin Haase, a founder of Travis CI, in a blog post. Idera will also keep the Travis Foundation running, which runs projects like Rails Girls Summer of Code, Diversity Tickets, Speakerinnen, and Prompt.

Travis CI also brings its 700,000 users to Idera, along with high-profile customers like IBM and Zendesk. Users are quick to note that this acquisition comes at a time when Travis CI’s competitors, like CircleCI, seem to be taking market share away from it. A comment on Hacker News reads, “In a past few month I started to see Circle CI badges popping here and there for opensource repositories and anecdotally many internal projects at companies are moving to GitLab and their built-in CI offering. Probably a good time to sell Travis CI, though I'd prefer if they would find a better buyer.” Another user says, “Honestly, for enterprise users that is a good thing. In the hands of a company like Idera we can be reasonably confident that Travis will not disappear anytime soon”

Announcing Cloud Build, Google’s new continuous integration and delivery (CI/CD) platform
Creating a Continuous Integration commit pipeline using Docker [Tutorial]
How to master Continuous Integration: Tools and Strategies


GeoServer 2.14.2 rolled out with accessible WMTS binding, improved style editor and more

Amrata Joshi
21 Jan 2019
2 min read
Last week, GeoServer 2.14.2 was released. GeoServer is an open source, Java-based software server for sharing geospatial data. It allows users to display their spatial information to the world. It is free and can display data on popular mapping applications such as Google Earth, Google Maps, Microsoft Virtual Earth, and Yahoo Maps.

Improvements in GeoServer 2.14.2

- The WMTS RESTful binding is now accessible to all users and works with workspace-specific services; initially it was limited to administrators.
- gs:DownloadEstimator now returns a true value when estimating full raster downloads at native resolution.
- A bug where KML ignored the sortBy parameter while querying records has been fixed.
- A NullPointerException thrown when using the env() function with the LIKE operator in CSS filters has been fixed.
- It is now possible to modify an existing GWC blobstore via the UI without renaming it, which was not possible before.
- For GetLegendGraphic, this release allows expressions in ColorMapEntry labels.
- An issue where the OpenLayers2 preview was not automatically triggered on IE8 has been addressed.
- A new MongoDB extension has been added in GeoServer 2.14.2.
- The style editor has been improved and now includes side-by-side editing.
- Nearest-match support has been added for Web Map Service (WMS) dimension handling.

Major fixes

- A rendering issue with JAI-EXT and the Input/Output TransparentColor options has been resolved.
- Complex MongoDB generated properties are now handled.

Check out the official blog post by GeoServer for the full release notes.

Getting Started with GeoServer
ArangoDB 3.4 releases with a native search engine, full GeoJSON support, and more
Uber’s kepler.gl, an open source toolbox for GeoSpatial Analysis


Microsoft announces Azure DevOps bounty program

Prasad Ramesh
18 Jan 2019
2 min read
Yesterday, the Microsoft Security Response Center (MSRC) announced the launch of the Azure DevOps Bounty program, launched to strengthen the security provided to Azure DevOps customers. Microsoft is offering rewards of up to US$20,000 for eligible vulnerabilities found in Azure DevOps online and Azure DevOps Server. Bounty rewards range from $500 to $20,000, at Microsoft’s discretion, based on the severity and impact of the vulnerability and on the quality of the submission, subject to the bounty terms and conditions.

The products in the scope of this program are Azure DevOps Services, previously known as Visual Studio Team Services, and the latest versions of Azure DevOps Server and Team Foundation Server. The goal of the program is to find eligible vulnerabilities that have a direct security impact on the customer base.

To be eligible, a submission should fulfil the following criteria:

- It identifies a previously unreported vulnerability in one of the services or products.
- For web application vulnerabilities, the issue must impact supported browsers for Azure DevOps server, services, or plug-ins.
- It includes clear, reproducible, documented steps, as text or video. Any information that helps Microsoft quickly reproduce and understand the issue can result in a faster response and higher rewards.

Submissions that Microsoft deems ineligible under these criteria may be rejected. You can send your submissions to secure@microsoft.com, following the bug submission guidelines. Participants are requested to follow Coordinated Vulnerability Disclosure when reporting vulnerabilities. Note that there are no restrictions on how many vulnerabilities you can report or on the rewards for them; when there are duplicate submissions, the first one received will be chosen for the reward.

For more details about the eligible vulnerabilities and the Microsoft Azure DevOps bounty program, visit the Microsoft website.

8 ways Artificial Intelligence can improve DevOps
Azure DevOps outage root cause analysis starring greedy threads and rogue scale units
Microsoft open sources Trill, a streaming engine that employs algorithms to process “a trillion events per day”

Google and Waze share their best practices for canary deployment using Spinnaker

Bhagyashree R
18 Jan 2019
3 min read
On Monday, Eran Davidovich, a systems operations engineer at Waze, and Théo Chamley, a solutions architect at Google Cloud, shared their experience using Spinnaker for canary deployments. Waze estimates that canary deployment helped it prevent a quarter of all incidents on its services.

What is Spinnaker?

Developed at Netflix, Spinnaker is an open source, multi-cloud continuous delivery platform that helps developers manage app deployments on different computing platforms, including Google App Engine, Google Kubernetes Engine, AWS, Azure, and more. The platform also enables advanced deployment methods like canary deployment, in which developers roll out changes to a subset of users to analyze whether the code release produces the desired outcome. If the new code poses any risks, you can mitigate them before releasing the update to all users.

In April 2018, Google and Netflix introduced a new feature for Spinnaker called Kayenta, with which you can run automated canary analysis for your project. Though you can build your own canary deployment or other advanced deployment patterns, Spinnaker and Kayenta together aim to make this much easier and more reliable. The tasks Kayenta automates include fetching user-configured metrics from their sources, running statistical tests, and producing an aggregate score for the canary. Based on the aggregate score and the configured limits for success, Kayenta automatically promotes or fails the canary, or triggers a human approval path.

Canary best practices

Check out the following best practices to ensure that your canary analyses are reliable and relevant:

- Compare the canary against a baseline, not against production. Many differences can otherwise skew the results of the analysis, such as cache warmup time, heap size, and load-balancing algorithms.
- Run the canary for enough time, collecting at least 50 pieces of time-series data per metric, to ensure the statistical analysis is relevant.
- Choose metrics that represent different aspects of your application’s health. Three aspects are critical per the SRE book: latency, errors, and saturation.
- Put a standard set of reusable canary configs in place. This gives anyone on your team a starting point and keeps the canary configurations maintainable.

Thunderbird welcomes the new year with better UI, Gmail support and more
Introducing Azure DevOps Server 2019 RC1 with better UI, Azure SQL support and more!
AIOps – Trick or Treat?
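The scoring flow Kayenta automates (per-metric statistical checks, an aggregate score, then promote/fail/human-approval) can be sketched in a few lines. This is a deliberately simplified stand-in: Kayenta runs real statistical tests, whereas this sketch just compares means within a tolerance; the thresholds and metric names are invented:

```python
from statistics import mean

MIN_POINTS = 50  # at least 50 time-series points per metric, as above

def metric_ok(baseline, canary, tolerance=0.10):
    """Pass a metric if the canary's mean is within `tolerance`
    of the baseline's mean (note: baseline, not production)."""
    if len(baseline) < MIN_POINTS or len(canary) < MIN_POINTS:
        raise ValueError("not enough time-series data")
    return abs(mean(canary) - mean(baseline)) <= tolerance * abs(mean(baseline))

def judge(metrics, promote_at=0.95, fail_at=0.75):
    """Aggregate per-metric pass/fail into a score, then promote,
    fail, or trigger the human approval path."""
    score = sum(metric_ok(b, c) for b, c in metrics.values()) / len(metrics)
    if score >= promote_at:
        return "promote"
    if score < fail_at:
        return "fail"
    return "human-approval"

baseline = [100.0] * 60                      # e.g. latency in ms
metrics = {
    "latency":    (baseline, [103.0] * 60),  # within 10% of baseline: pass
    "error_rate": ([1.0] * 60, [5.0] * 60),  # far off baseline: fail
}
assert judge(metrics) == "fail"  # 1 of 2 metrics passed, score 0.5
```

The three-way outcome is the important part: a middling score does not silently ship or silently roll back; it hands the decision to a human, exactly as described above.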


How Dropbox uses automated data center operations to reduce server outage and downtime

Melisha Dsouza
17 Jan 2019
3 min read
Today, in a blog post, Dropbox explained how the Prilo system used by the team has automated most of the processes of the company, that were previously manually attended to by Dropbox personnel. Pirlo is used by Dropbox in two main areas- validate and configure network switches and ensure the reliability of servers before entering production. This has, in turn, helped Dropbox to safely manage their physical infrastructure operations with ease. Pirlo consists of a distributed MySQL-backed job queue built by Dropbox itself, using primitives like gRPC, service discovery, and our managed MySQL clusters. Switch provisioning at Dropbox is handled by the TOR STarter which is a Pirlo component. The TOR Starter validates and configures switches in Dropbox datacenter server racks, PoP server racks, and at the different layers of the data center fabric; responsible to connect racks in the same facility together. Server provisioning and repair validation is handled by Pirlo Server Validation. All new servers arriving at the company are validated using this component. Repaired servers are also validated before they are transitioned back into production. Pirlo has automated these manual processes at Dropbox and has led to a reduction in downtime, outages, and inefficiencies associated with the incomplete or erroneous fixing of the systems. By reducing manual work, employees can now focus their attention to more value adding jobs. Before using Pirlo, the above tasks had to be performed by operations engineers and subject matter experts who used various server error logs to take appropriate actions to fix failed hardware. After applying the remediation actions, the engineer would send the machine back into production by sending the server to Dropbox re-imaging system. If the remediation actions didn’t fix the system or properly prepare it for re-imaging, the server would be sent back to the operations engineer for additional fixing. 
This would end up consuming a lot of the operations engineers' time as well as company resources. Operations engineers who used the Pirlo system steadily increased their output by more than 40%: the automation of manual tasks allowed engineers to address more issues in the same amount of time. You can head over to Dropbox's official blog to explore the workings of Pirlo and how it benefited the organization.

How to navigate files in a Vue app using the Dropbox API
Tech jobs dominate LinkedIn's most promising jobs in 2019
NGINX Hybrid Application Delivery Controller Platform improves API management, manages microservices and much more!
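Dropbox describes Pirlo as being built around a distributed MySQL-backed job queue. The sketch below illustrates the claim/complete pattern such a queue relies on, using an in-memory store rather than MySQL; the names and structure are illustrative assumptions, not Dropbox's actual implementation.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// JobState mirrors a typical provisioning workflow: a job is queued,
// claimed by a worker, then marked done.
type JobState int

const (
	Queued JobState = iota
	Running
	Done
)

type Job struct {
	ID    int
	Task  string // e.g. "validate-switch" or "validate-server"
	State JobState
}

// JobQueue is an in-memory stand-in for a MySQL-backed queue.
type JobQueue struct {
	mu     sync.Mutex
	nextID int
	jobs   map[int]*Job
}

func NewJobQueue() *JobQueue {
	return &JobQueue{jobs: make(map[int]*Job)}
}

// Enqueue adds a new job and returns its ID.
func (q *JobQueue) Enqueue(task string) int {
	q.mu.Lock()
	defer q.mu.Unlock()
	q.nextID++
	q.jobs[q.nextID] = &Job{ID: q.nextID, Task: task, State: Queued}
	return q.nextID
}

// Claim atomically hands one queued job to a worker, so two workers
// never validate the same switch or server at once. In the real system
// this atomicity would come from a MySQL transaction.
func (q *JobQueue) Claim() (*Job, error) {
	q.mu.Lock()
	defer q.mu.Unlock()
	for _, j := range q.jobs {
		if j.State == Queued {
			j.State = Running
			return j, nil
		}
	}
	return nil, errors.New("no queued jobs")
}

// Complete marks a claimed job as finished.
func (q *JobQueue) Complete(id int) {
	q.mu.Lock()
	defer q.mu.Unlock()
	if j, ok := q.jobs[id]; ok && j.State == Running {
		j.State = Done
	}
}

func main() {
	q := NewJobQueue()
	id := q.Enqueue("validate-server")
	job, _ := q.Claim()
	q.Complete(job.ID)
	fmt.Println(q.jobs[id].State == Done)
}
```

The key property is that claiming a job and marking it running happen under one lock (one transaction, in the MySQL case), which is what lets many validation workers share a single queue safely.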

Go 1.11 support announced for Google Cloud Functions!

Melisha Dsouza
17 Jan 2019
2 min read
Yesterday, Google Cloud announced support for Go 1.11 (in beta) on Cloud Functions. Developers can now write Go functions that scale dynamically and integrate seamlessly with Google Cloud events. Go follows suit after Node.js and Python, which were already supported languages for Google Cloud Functions. Cloud Functions frees developers from worrying about server management and scaling: functions scale automatically, and developers only pay for the time a function runs. Using the familiar building blocks of Go functions, developers can build a variety of applications, such as:

serverless application backends
real-time data processing pipelines
chatbots
video or image analysis tools
and much more!

The two types of Go functions that developers can use with Cloud Functions are HTTP and background functions. HTTP functions are invoked by HTTP requests, while background functions are triggered by events. The runtime provides support for multiple Go packages via Go modules: Go 1.11 modules allow the integration of third-party dependencies into an application's code. Go developers and Google Cloud users have taken the news well, with a host of positive comments on Reddit and YouTube. Users have commented that Go is a good fit for cloud functions and makes the process of adopting Cloud Functions much easier.
https://www.reddit.com/r/golang/comments/agne4o/get_going_with_cloud_functions_go_111_is_now_a/ee7sd35
https://www.reddit.com/r/golang/comments/agne4o/get_going_with_cloud_functions_go_111_is_now_a/ee84cej
It is easy and efficient to deploy a Go function on Google Cloud. Check out the examples on Google Cloud's official blog page, or watch this video to learn more about the announcement.
Google Cloud releases a beta version of SparkR job types in Cloud Dataproc
Oracle's Thomas Kurian to replace Diane Greene as Google Cloud CEO; is this Google's big enterprise Cloud market move?
Google Cloud Storage Security gets an upgrade with Bucket Lock, Cloud KMS keys and more

Baidu open sources ‘OpenEdge’ to create a ‘lightweight, secure, reliable and scalable edge computing community’

Melisha Dsouza
16 Jan 2019
2 min read
On 9th January, at CES 2019, Chinese technology giant Baidu Inc. announced the open sourcing of its edge computing platform, 'OpenEdge', which developers can use to extend cloud computing to their edge devices.

"Edge computing is a critical component of Baidu's ABC (AI, Big Data and Cloud Computing) strategy. By moving the compute closer to the source of the data, it greatly reduces the latency, lowers the bandwidth usage and ultimately brings real-time and immersive experiences to end users. And by providing an open source platform, we have also greatly simplified the process for developers to create their own edge computing applications," said Watson Yin, Baidu VP and GM of Baidu Cloud.

Baidu said that systems built using OpenEdge will automatically be enabled with features like artificial intelligence, cloud synchronization, data collection, function compute, and message distribution. OpenEdge is a component of the Baidu Intelligent Edge (BIE) platform. BIE offers tools to manage edge nodes and resources such as certificates, passwords, and program code, among other functions. BIE is designed to run on the Baidu cloud and supports common AI frameworks such as the Baidu-developed PaddlePaddle and TensorFlow. Developers can therefore use Baidu's cloud to train AI models and then deploy them to systems built with OpenEdge. According to TechRepublic, OpenEdge also gives developers the ability to exchange data with Baidu ABC Intelligent Cloud, perform filtering calculations on sensitive data, and provide real-time feedback control when a network connection is unstable. A company spokesperson told TechCrunch that the open-source platform will include features like data collection, message distribution, and AI inference, as well as tools for syncing with the cloud. You can head over to GitHub to learn more about this release.
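The filtering capability mentioned above is a general edge-computing pattern: sensitive fields are redacted on the device before any payload is synced to the cloud. The sketch below illustrates that pattern only; the type and function names are invented for illustration and are not part of OpenEdge's actual API.

```go
package main

import "fmt"

// Reading is a hypothetical record from an edge sensor; here we treat
// DeviceID as the sensitive field that must not leave the device.
type Reading struct {
	DeviceID string
	Temp     float64
}

// filterForCloud redacts sensitive fields at the edge, so only the
// anonymized payload is exchanged with the cloud.
func filterForCloud(r Reading) Reading {
	r.DeviceID = "redacted"
	return r
}

func main() {
	local := Reading{DeviceID: "sensor-42", Temp: 21.5}
	cloudCopy := filterForCloud(local)
	fmt.Println(cloudCopy.DeviceID, cloudCopy.Temp) // prints "redacted 21.5"
}
```

Doing this on the edge node rather than in the cloud is the point: the sensitive value never crosses the network, which also saves bandwidth when the connection is unstable.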
Unity and Baidu collaborate for simulating the development of autonomous vehicles
Baidu releases EZDL – a platform that lets you build AI and machine learning models without any coding knowledge
Baidu Apollo autonomous driving vehicles gets machine learning based auto-calibration system


Introducing OrgKit: An all in one tool to start a company on Microsoft tools

Prasad Ramesh
15 Jan 2019
2 min read
Last week, security expert SwiftOnSecurity introduced OrgKit on Twitter. It is a new way to run a complete, configured company or business across Microsoft Active Directory, Group Policy, Azure Active Directory, and Office 365.

Why was OrgKit created?

The Microsoft ecosystem was designed to be customized per organization, as needed. That is why a complete repository of Microsoft product configuration guidance and documentation for organizations is so rare, and why the majority of organizations are unequipped to understand what this really means. Companies that use these Microsoft services have diverse configuration histories, and with this comes the need to support many different configuration types. This has prevented Microsoft from providing generic defaults and guidance for setting things up.

What is OrgKit for?

It is designed to provide users with a series of templates that set up a new, well-documented IT environment, ideally for a mid-size organization. It can serve as a public example of what's possible and allow companies to make informed decisions, particularly companies that lack security knowledge or are unaware of what other businesses are doing. It is also meant for a company that has to start over after a full network compromise, or one that is creating a new subsidiary business.

Usage of PowerShell DSC

PowerShell DSC is the ideal tool to build and maintain a Windows environment with a centralized design that supports all the necessary tooling. It provides a good set of capabilities and will most likely be part of future versions of OrgKit. Currently, however, OrgKit aims to help Windows administrators who are already coping with many new technologies and concepts, and who need to run the system long term alongside other employees. PowerShell DSC is considered a specialized skill: it can revert actions performed outside its own central control, and using it requires buy-in from the whole organization.
Hence, OrgKit's intended use-cases cannot depend on it. To check out the repository, head over to GitHub.

How 3 glitches in Azure Active Directory MFA caused a 14-hour long multi-factor authentication outage in Office 365, Azure and Dynamics services
Microsoft announces official support for Windows 10 to build 64-bit ARM apps
A Microsoft Windows bug deactivates Windows 10 Pro licenses and downgrades to Windows 10 Home, users report