
Tech News - Cloud & Networking

376 Articles

Google Cloud security launches three new services for better threat detection and protection in enterprises

Melisha Dsouza
08 Mar 2019
2 min read
This week, Google Cloud Security announced a host of new services to give customers advanced security functionality that is easy to deploy and use. The announcements cover the Web Risk API, Cloud Armor, and Cloud HSM.

#1 Web Risk API
The Web Risk API has been released in beta to help keep users safe on the web. It includes data on more than a million unsafe URLs, and billions of URLs are examined each day to keep that data up to date. Client applications can use a simple API call to check URLs against Google's lists of unsafe web resources (a hedged example call appears at the end of this article). The lists cover social engineering sites, deceptive sites, and sites that host malware or unwanted software.

#2 Cloud Armor
Cloud Armor is a Distributed Denial of Service (DDoS) defense and Web Application Firewall (WAF) service for Google Cloud Platform (GCP), based on the technologies used to protect services like Search, Gmail and YouTube. Cloud Armor is now generally available, offering L3/L4 DDoS defense as well as IP allow/deny capabilities for applications or services behind the Cloud HTTP(S) Load Balancer. Users can permit or block incoming traffic based on IP addresses or ranges using allow lists and deny lists, and can customize their defenses and mitigate multivector attacks through Cloud Armor's flexible rules language.

#3 HSM keys to protect data in the cloud
Cloud HSM is now generally available. It lets customers protect encryption keys and perform cryptographic operations in FIPS 140-2 Level 3 certified HSMs, without the operational overhead of HSM cluster management, scaling and patching. The Cloud HSM service is fully integrated with Cloud Key Management Service (KMS), allowing users to create and use customer-managed encryption keys (CMEK) that are generated and protected by a FIPS 140-2 Level 3 hardware device.

You can head over to Google Cloud Platform's official blog to learn more about these releases.

Machine Learning as a Service (MLaaS): How Google Cloud Platform, Microsoft Azure, and AWS are democratizing Artificial Intelligence
Google's Cloud Robotics platform, to be launched in 2019, will combine the power of AI, robotics and the cloud
Build Hadoop clusters using Google Cloud Platform [Tutorial]
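The "simple API call" mentioned in the Web Risk section can be sketched as a plain HTTP request. This is a hedged example rather than code from the announcement: the v1beta1 uris:search endpoint, the query parameters and the threat-type names are assumptions based on the beta documentation of the time, and YOUR_API_KEY and the URL being checked are placeholders.

```sh
# Hedged sketch: ask the Web Risk API (beta) whether a URL appears on
# Google's unsafe-URL lists. Endpoint, parameters and threat-type names
# are assumptions; YOUR_API_KEY and the target URL are placeholders.
curl -s -G "https://webrisk.googleapis.com/v1beta1/uris:search" \
  --data-urlencode "uri=http://example.com/some/page" \
  --data-urlencode "threatTypes=MALWARE" \
  --data-urlencode "threatTypes=SOCIAL_ENGINEERING" \
  --data-urlencode "key=YOUR_API_KEY"
# An empty JSON object ({}) means no match; otherwise the response lists
# the matching threat type(s) and how long the verdict may be cached.
```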


LXD 3.11 releases with configurable snapshot expiry, progress reporting, and more

Natasha Mathur
08 Mar 2019
2 min read
The LXD team released version 3.11 of LXD, its open source container management extension for Linux Containers (LXC), earlier this week. LXD 3.11 brings new features, minor improvements, and bug fixes. LXD is a system container manager that provides users with an experience similar to virtual machines; it is written in Go and builds on LXC to make it easier to create and manage Linux containers.

New Features in LXD 3.11
- Configurable snapshot expiry at creation time: LXD 3.11 allows users to set an expiry when a snapshot is created. Previously it was a hassle to create snapshots manually and then edit them to modify their expiry. At the API level you can set an exact expiry timestamp, or set it to null to create a persistent snapshot that ignores any configured auto-expiry. (A hedged CLI sketch follows this article.)
- Progress reporting for publish operations: Progress information is now displayed when running lxc publish against a container or snapshot, similar to image transfers and container migrations.

Improvements
Minor improvements have been made to how the Candid authentication feature is handled by the CLI in LXD 3.11.
- Per-remote authentication cookies: Every remote now has its own "cookie jar". In prior releases a shared cookie jar was used for all remotes, which could lead to inconsistent behavior; LXD's behavior when adding remotes is now always identical.
- Candid preferred over TLS for new remotes: When using lxc remote add to add a new remote, Candid is now preferred over TLS authentication if the remote supports it. The authentication type can always be overridden using --auth-type.
- Remote list can now show the Candid domain: The remote list now indicates which Candid domain is used.

Bug Fixes
- A goroutine leak in ExecContainer has been fixed ("client: fix goroutine leak in ExecContainer" was subsequently reverted).
- rest-api.md formatting has been updated.
- Translations from Weblate have been updated.
- Error handling in execIfAliases has been improved.
- Duplicate scheduled snapshots have been fixed.
- A failing backup import has been fixed.
- The test case covering the image sync scenario for a joined node has been updated.

For a complete list of changes, check out the official LXD 3.11 release notes.

LXD 3.8 released with automated container snapshots, ZFS compression support and more!
Linux 4.20 kernel slower than its previous stable releases, Spectre flaw to be blamed, according to Phoronix
An update on Bcachefs- the "next generation Linux filesystem"
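A minimal sketch of the snapshot-expiry and publish workflow described above. The lxc commands and the snapshots.expiry configuration key are drawn from LXD's documented CLI rather than quoted from the release notes, and the container and image alias names are placeholders.

```sh
# Hedged sketch of the workflow described above; container and alias
# names are placeholders, and snapshots.expiry is assumed from LXD docs.

# Give the container a default snapshot expiry so snapshots created from
# now on pick up an expiry at creation time (here: one week).
lxc config set mycontainer snapshots.expiry 1w

# Create a snapshot; it inherits the expiry instead of needing a
# manual edit afterwards.
lxc snapshot mycontainer before-upgrade

# Publish the snapshot as an image; LXD 3.11 now reports progress while
# this runs, as it already did for image transfers and migrations.
lxc publish mycontainer/before-upgrade --alias before-upgrade-image
```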


Alphabet’s Chronicle launches ‘Backstory’ for business network security management

Melisha Dsouza
05 Mar 2019
3 min read
Alphabet's 'Chronicle', launched last year, announced its first product, 'Backstory', at the ongoing RSA 2019. Backstory is a security data platform that stores huge amounts of a business' network data--including information from domain name servers, employee laptops and phones--on a Chronicle-installed collection of servers on the customer's premises. This data is quickly indexed and organized. According to Forbes, customers can then run searches over the data, such as "Are any of my computers sending data to Russian government servers?", and cybersecurity investigators can follow up with questions such as what kind of information is being taken, when and how. The way it works is often compared to Google Photos. Backstory gives security analysts the ability to quickly understand their real vulnerabilities.

According to the Backstory blog, "Backstory is a global security telemetry platform for investigation and threat hunting within your enterprise network. It is a specialized, cloud-native security analytics system, built on the core infrastructure that powers Google itself. Making security analytics instant, easy, and cost-effective." The company states that the service requires zero customer hardware, maintenance, tuning, or ongoing management, and can support security analytics against the largest customer networks with ease.

Features of Backstory
- Real-time and retroactive instant indicator matching across all logs; for example, if a domain flips from good to bad, Backstory shows all devices that have ever communicated with that domain.
- Prebuilt search results and smart filters designed for security-specific use cases.
- Data displayed in real time to support security investigations and hunts.
- Intelligent analytics that derive insights to support security investigations.
- The ability to work automatically with petabytes of data.

Chronicle's CEO Stephen Gillett told CNBC that the pricing model will not be based on volume; instead, licenses will be based on the size of the company rather than the size of the customer's data. Backstory also intends to partner with other cybersecurity companies rather than compete with them. Considering that Alphabet already has a history of collecting sensitive customer information, it will be interesting to see whether Backstory can operate without following that same approach.

To know more about this news in detail, read Backstory's official blog.

Liz Fong Jones, prominent ex-Googler shares her experience at Google and 'grave concerns' for the company
Google finally ends Forced arbitration for all its employees
Shareholders sue Alphabet's board members for protecting senior execs accused of sexual harassment


‘2019 Upskilling: Enterprise DevOps Skills’ report gives an insight into the DevOps skill set required for enterprise growth

Melisha Dsouza
05 Mar 2019
3 min read
DevOps Institute has announced the results of the "2019 Upskilling: Enterprise DevOps Skills Report". The research and analysis for the report were conducted by Eveline Oehrlich, former vice president and research director at Forrester Research, and the project was supported by founding Platinum Sponsor Electric Cloud, Gold Sponsor CloudBees and Silver Sponsor Lenovo. The report outlines the most valued and in-demand skills needed to achieve DevOps transformation within enterprise IT organizations of all sizes. It also gives an insight into the skills a DevOps professional should develop to help build the right mindset and culture for organizations and individuals.

According to Jayne Groll, CEO of DevOps Institute, "DevOps Institute is thrilled to share the research findings that will help businesses and the IT community understand the requisite skills IT practitioners need to meet the growing demand for T-shaped professionals. By identifying skill sets needed to advance the human side of DevOps, we can nurture the development of the T-shaped professional that is being driven by the requirement for speed, agility and quality software from the business."

Key findings from the report
- 55% of respondents said they first look for internal candidates when searching for DevOps team members and turn to external candidates only if no internal candidate is identified.
- Respondents agreed that automation skills (57%), process skills (55%) and soft skills (53%) are the most important must-have skills.
- Asked which job titles their companies recently hired (or are planning to hire), respondents reported: DevOps Engineer/Manager, 39%; Software Engineer, 29%; DevOps Consultant, 22%; Test Engineer, 18%; Automation Architect, 17%; and Infrastructure Engineer, 17%. Other recruits included CI/CD Engineers, 16%; System Administrators, 15%; Release Engineers/Managers, 13%; and Site Reliability Engineers, 10%.
- Functional skills and key technical skills, when combined, complement the soft skills required to create qualified DevOps engineers.
- Automation, process and soft skills are the "must-have" skills for a DevOps engineer; process skills are needed for intelligent automation.
- IT operations is another key functional skill, with security coming in second.
- Business skills are most important to leaders, but less so to individual contributors.
- Cloud and analytical knowledge are the top technical skills.
- Recruiting for DevOps is on the rise.

[Source: Press release, DevOps Institute's '2019 Upskilling: Enterprise DevOps Skills Report']

[Figure: priorities across the top skill categories relative to the key roles surveyed. Source: Press release, DevOps Institute's '2019 Upskilling: Enterprise DevOps Skills Report']

Oehrlich also said in a statement that hiring managers see a DevOps professional as a creative, knowledge-sharing, eager-to-learn individual with shapeable skill sets. Andre Pino, vice president of marketing at CloudBees, said in a statement that "The survey results show the importance for developers and managers to have the right skills that empower them to meet business objectives and have a rewarding career in our fast-paced industry."

You can check out the entire report for more insights on this news.

Introducing Azure DevOps Server 2019 RC1 with better UI, Azure SQL support and more!
Microsoft announces Azure DevOps, makes Azure pipelines available on GitHub Marketplace
JFrog acquires DevOps startup 'Shippable' for an end-to-end DevOps solution


VMware Essential PKS: Use upstream Kubernetes to build a flexible, cost-effective cloud-native platform

Melisha Dsouza
04 Mar 2019
3 min read
Last week, Paul Fazzone, GM of Cloud Native Applications at VMware, announced the launch of VMware Essential PKS "as a modular approach to cloud-native operation". VMware Essential PKS includes upstream Kubernetes, reference architectures to help guide design decisions, and expert support to guide users through upgrades and maintenance and to troubleshoot reactively when needed. Fazzone notes that more than 80% of containers run on virtual machines (VMs), with the percentage growing every year. The launch supports VMware's main objective of establishing itself as the leading enabler of Kubernetes and cloud-native operation.

Features of Essential PKS

#1 Modular approach
Customers who have specific technological requirements for networking, monitoring, storage and so on can build a more modular architecture on upstream Kubernetes. VMware Essential PKS gives these customers access to upstream Kubernetes with proactive support. The only condition is that these organizations either have the in-house expertise to work with those components, intend to grow that capability, or are willing to use an expert team.

#2 Application portability
Customers can use the latest version of upstream Kubernetes, ensuring that they are never locked into a vendor-specific distribution.

#3 Flexibility
The service allows customers to implement a multi-cloud strategy, choosing the tools and clouds they prefer in order to build a flexible platform on upstream Kubernetes for their workloads.

#4 Open-source community support
VMware contributes to multiple SIGs and open-source projects that strengthen key technologies and fill gaps in the Kubernetes ecosystem.

#5 Cloud-native ecosystem support and guidance
Customers get access to 24x7, SLA-driven support for Kubernetes and key open-source tooling. VMware experts will partner with customers on architecture design reviews and help them evaluate networking, monitoring, backup, and other solutions to build a production-grade open source Kubernetes platform.

The Kubernetes community has received the news with enthusiasm.
https://twitter.com/cmcluck/status/1100506616124719104
https://twitter.com/edhoppitt/status/1100444712794615808

In November, VMware announced at VMworld that it was buying Heptio, whose products work with upstream Kubernetes and help enterprises realize the impact of Kubernetes on their business. According to FierceTelecom, "PKS Essentials takes the Heptio approach of building a more modular, customized architecture for deploying software containers on upstream Kubernetes but with VMware support."

Rancher Labs announces 'K3s': A lightweight distribution of Kubernetes to manage clusters in edge computing environments
CNCF releases 9 security best practices for Kubernetes, to protect a customer's infrastructure
Tumblr open sources its Kubernetes tools for better workflow integration


Red Hat's OperatorHub.io makes it easier for Kubernetes developers and admins to find pre-tested 'Operators' for applications

Melisha Dsouza
01 Mar 2019
2 min read
Last week, Red Hat launched OperatorHub.io in collaboration with Microsoft, Google Cloud, and Amazon Web Services as a "public registry" for finding services backed by Kubernetes Operators. According to the Red Hat blog, the Operator pattern automates infrastructure and application management tasks using Kubernetes as the automation engine. Developers have shown growing interest in Operators thanks to features such as access to the automation advantages of the public cloud and the portability of services across Kubernetes environments. Red Hat also notes that while the number of available Operators has increased, it is challenging for developers and Kubernetes administrators to find Operators that meet their quality standards. OperatorHub.io was created to solve that challenge.

Features of OperatorHub.io
- OperatorHub.io is a common registry to "publish and find available Operators". It curates Operator-backed services with a base level of documentation, active communities or vendor backing to show maintenance commitments, basic testing, and packaging for optimized lifecycle management on Kubernetes.
- The platform is intended to encourage the creation of new Operators as well as improvements to existing ones.
- It is a centralized repository that helps users and the community organize around Operators.
- Operators can be listed on OperatorHub.io only when they show cluster lifecycle features and packaging that can be maintained through the Operator Framework's Operator Lifecycle Management, along with acceptable documentation for their intended users.

Operators currently listed on OperatorHub.io include the Amazon Web Services Operator, Couchbase Autonomous Operator, CrunchyData's PostgreSQL, the MongoDB Enterprise Operator and many more. (A hedged installation sketch follows this article.)

The Kubernetes community has received the news with much enthusiasm.
https://twitter.com/mariusbogoevici/status/1101185896777281536
https://twitter.com/christopherhein/status/1101184265943834624

This is not the first time Red Hat has tried to build on the momentum behind Kubernetes Operators. According to TheNewStack, the company acquired CoreOS last year and went on to release the Operator Framework, an open source toolkit that "provides an SDK, lifecycle management, metering, and monitoring capabilities to support Operators".

RedHat shares what to expect from next week's first-ever DNSSEC root key rollover
Red Hat announces CodeReady Workspaces, the first Kubernetes-Native IDE for easy collaboration among developers
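Installing an Operator listed on OperatorHub.io generally goes through the Operator Lifecycle Manager (OLM). The sketch below is an assumption about how that looks: the etcd Operator is used purely as an example, and the install URL pattern and the operators namespace reflect the site's usual instructions rather than anything stated in this announcement.

```sh
# Hedged sketch: install an Operator from OperatorHub.io. Assumes OLM
# (Operator Lifecycle Manager) is already installed on the cluster; the
# etcd Operator and the install URL pattern are illustrative only.
kubectl create -f https://operatorhub.io/install/etcd.yaml

# The Operator is delivered as an OLM Subscription; watch its
# ClusterServiceVersion reach the "Succeeded" phase.
kubectl get csv -n operators
```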

Announcing Wireshark 3.0.0

Melisha Dsouza
01 Mar 2019
2 min read
Yesterday, Wireshark released version 3.0.0 with user interface improvements, bug fixes, the new Npcap Windows packet capture driver and more. Wireshark, the open source, cross-platform network protocol analysis software, is used by security analysts, experts and developers for analysis, troubleshooting, development and other security-related tasks, letting them capture and browse packet traffic on computer networks.

Features of Wireshark 3.0.0
- The Windows .exe installers replace WinPcap with Npcap. Npcap supports loopback capture and 802.11 WiFi monitor mode capture (if supported by the NIC driver).
- The "Map" button of the Endpoints dialog, missing since Wireshark 2.6.0, is back in a modernized form.
- The macOS package ships with Qt 5.12.1 and requires macOS 10.12 or later.
- Initial support for using PKCS #11 tokens for RSA decryption in TLS; configure this at Preferences, RSA Keys.
- The new WireGuard dissector has decryption support, which requires Libgcrypt 1.8.
- Coloring rules, IO graphs, filter buttons and protocol preference tables can now be copied from other profiles using a button in the corresponding configuration dialogs.
- Wireshark now supports the Swedish, Ukrainian and Russian languages.
- A new display filter function, string(), converts non-string fields to strings so that string functions can be used on them. (A hedged tshark example follows this article.)
- The legacy (GTK+) user interface and the PortAudio library have been removed and are no longer supported.
- Wireshark requires Qt 5.2 or later and GLib 2.32 or later, with GnuTLS 3.2 or later as an optional dependency. Building Wireshark requires Python 3.4 or newer.
- Data following a TCP ZeroWindowProbe is not passed to subdissectors and is marked as a retransmission.

Head over to Wireshark's official blog for the entire list of upgraded features in this release.

Using statistical tools in Wireshark for packet analysis [Tutorial]
Wireshark for analyzing issues and malicious emails in POP, IMAP, and SMTP [Tutorial]
Analyzing enterprise application behavior with Wireshark 2
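The new string() display-filter function can be tried from the command line with tshark, Wireshark's CLI companion. The capture file name and the exact filter below are placeholders meant to illustrate the feature, not examples from the release notes.

```sh
# Hedged sketch: use the new string() dfilter function via tshark.
# string() converts a non-string field (here, frame.number) to a string
# so string operators such as "matches" can be applied to it.
# capture.pcap is a placeholder for any capture file.
tshark -r capture.pcap -Y 'string(frame.number) matches "[02468]$"'
# Prints only the even-numbered frames from the capture.
```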


Chaos engineering platform Gremlin launches Gremlin Free

Richard Gall
27 Feb 2019
3 min read
Chaos engineering has been a trend to watch for the last 12 months, but it is yet to really capture the imagination of the global software industry. It remains a fairly specialised discipline confined to the most forward-thinking companies that depend on extensive distributed systems. That could all be about to change thanks to Gremlin, which today announced the launch of Gremlin Free.

Gremlin Free is a tool that allows software, infrastructure and DevOps engineers to perform shutdown and CPU attacks on their infrastructure in a safe and controlled way using a neat, easy-to-use UI. In a blog post published on the Gremlin site today, Lorne Kligerman, Director of Product, said "we believe the industry has answered why do chaos engineering, and has begun asking how do I begin practicing Chaos Engineering in order to significantly increase the reliability and resiliency of our systems to provide the best user experience possible."

Read next: How Gremlin is making chaos engineering accessible [Interview]

What is Gremlin Free?
Gremlin Free is based on Netflix's Chaos Monkey tool. Chaos Monkey is the tool that gave rise to chaos engineering back in 2011 when the streaming platform first moved to AWS. It let Netflix engineers "randomly shut down compute instances," which became a useful tactic for stress testing the reliability and resilience of its new microservices architecture.

What can you do with Gremlin Free?
There are two attacks you can run with Gremlin Free: Shutdown and CPU. As the name indicates, Shutdown lets you take down (or reboot) multiple hosts or containers. CPU attacks cause spikes in CPU usage so you can monitor the impact on your infrastructure. (A hedged illustration of the idea follows this article.) Both attacks help teams identify pain points within their infrastructure, and can ultimately form the foundations of an engineering strategy that relies heavily on the principles of chaos engineering.

Why Gremlin Free now?
Gremlin cites data from Gartner that underlines just how expensive downtime can be: according to Gartner, eCommerce companies can lose an average of $5,600 per minute, with that figure climbing far higher for the world's leading eCommerce businesses. However, despite the cost of downtime making a clear argument for chaos engineering's value, its adoption isn't widespread - certainly not as widespread as Gremlin believes it should be. Kligerman said "It's still a new concept to most engineering teams, so we wanted to offer a free version of our software that helps them become more familiar with chaos engineering - from both a tooling and culture perspective."

If you're interested in trying chaos engineering, sign up for Gremlin Free here.
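Gremlin's own attacks run through its agent and web UI, which are not shown here. As a rough, hedged stand-in for what a CPU attack does, the standard Linux stress utility can create a controlled CPU spike on a disposable test host so you can watch how dashboards, alerts and autoscaling respond; this illustrates the principle only and is not Gremlin's tooling.

```sh
# Not Gremlin's CLI -- a generic illustration of the "CPU attack" idea.
# Run on a throwaway test host only: spin up 4 CPU-bound workers for
# 60 seconds, then check how monitoring and autoscaling reacted.
stress --cpu 4 --timeout 60

# The "shutdown attack" idea is equally blunt in principle: reboot one
# non-critical host and confirm the overall service stays healthy.
sudo reboot
```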


Rancher Labs announces ‘K3s’: A lightweight distribution of Kubernetes to manage clusters in edge computing environments

Melisha Dsouza
27 Feb 2019
3 min read
Yesterday, Rancher Labs announced K3s, a lightweight Kubernetes distribution for running Kubernetes in resource-constrained environments. According to the official blog post, the project was launched to "address the increasing demand for small, easy to manage Kubernetes clusters running on x86, ARM64 and ARMv7 processors in edge computing environments". Operating Kubernetes in edge computing environments is a complex task; K3s reduces the memory required to run Kubernetes and gives developers a distribution that needs less than 512 MB of RAM, making it ideally suited to edge use cases.

Features of K3s

#1 Simplicity of installation
K3s was designed to maximize the simplicity of installing and operating Kubernetes clusters at scale. It is a standards-compliant Kubernetes distribution for "mission-critical, production use cases".

#2 Zero host dependencies
There is no need for an external installer: everything necessary to install Kubernetes on any device is included in a single 40 MB binary. A single command provisions or upgrades a single-node k3s cluster, and nodes can be added by running a single command on the new node, pointing it at the original server and passing through a secure token. (A hedged install sketch follows this article.)

#3 Automatic certificate and encryption key generation
All of the certificates needed to establish TLS between the Kubernetes masters and nodes, as well as the encryption keys for service accounts, are created automatically when a cluster is launched.

#4 Reduced memory footprint
K3s reduces the memory required to run Kubernetes by removing old and non-essential code and any alpha functionality that is disabled by default. It also removes deprecated features, non-default admission controllers, in-tree cloud providers, and storage drivers; users can add back any drivers they need.

#5 Conservation of RAM
K3s combines the processes that run on a Kubernetes management server into a single process, and likewise combines the kubelet, kube-proxy and flannel agent processes that run on a worker node into a single process. Both techniques help conserve RAM.

#6 Reduced runtime footprint
Rancher Labs cut down the runtime footprint significantly by using containerd instead of Docker as the container runtime engine, and by removing functionality such as libnetwork, swarm, Docker storage drivers and other plugins.

#7 SQLite as an optional datastore
To provide a lightweight alternative to etcd, Rancher added SQLite as an optional datastore in K3s, since SQLite has "a lower memory footprint, as well as dramatically simplified operations."

Kelsey Hightower, a Staff Developer Advocate at Google Cloud Platform, commended Rancher Labs for removing features, rather than adding more, in order to focus on running clusters in low-resource computing environments.
https://twitter.com/kelseyhightower/status/1100565940939436034

Kubernetes users have also welcomed the news with enthusiasm.
https://twitter.com/toszos/status/1100479805106147330
https://twitter.com/ashim_k_saha/status/1100624734121689089

K3s is released with support for the x86_64, ARM64 and ARMv7 architectures, so it works across a wide range of edge infrastructure. Head over to the K3s page for a quick demo of how to use it.
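The single-command install and node-join flow described under "Zero host dependencies" looks roughly like the sketch below. The get.k3s.io installer script and the K3S_URL/K3S_TOKEN variables follow the k3s documentation rather than the announcement itself; the server name and token are placeholders.

```sh
# Hedged sketch of the k3s install flow; myserver and <token> are
# placeholders.

# Provision (or upgrade) a single-node k3s server:
curl -sfL https://get.k3s.io | sh -

# Join an extra node by pointing it at the server and passing the
# cluster token (found on the server in
# /var/lib/rancher/k3s/server/node-token):
curl -sfL https://get.k3s.io | K3S_URL=https://myserver:6443 K3S_TOKEN=<token> sh -
```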
Red Hat announces CodeReady Workspaces, the first Kubernetes-Native IDE for easy collaboration among developers
Introducing Platform9 Managed Kubernetes Service
CNCF releases 9 security best practices for Kubernetes, to protect a customer's infrastructure


Microsoft @MWC (Mobile World Congress) Day 1: HoloLens 2, Azure-powered Kinect camera and more!

Melisha Dsouza
25 Feb 2019
4 min read
The ongoing Mobile World Congress 2019 in Barcelona has an interesting line-up of announcements, keynote speakers, summits, seminars and more. It is the largest mobile event in the world, bringing together the latest innovations and leading-edge technology from more than two thousand leading companies. The theme of this year's conference is 'Intelligent Connectivity', the combination of flexible, high-speed 5G networks, the Internet of Things (IoT), artificial intelligence (AI) and big data. Microsoft unveiled a host of new products along the same theme on the first day of the conference. Let's have a look at some of them.

#1 Microsoft HoloLens 2 AR announced!
Microsoft unveiled the HoloLens 2 AR device at the Mobile World Congress (MWC). This $3,500 AR device is aimed at businesses, not the average person, yet. It is designed primarily for situations where field workers need to work hands-free, such as manufacturing workers, industrial designers and those in the military. The device is a clear upgrade from Microsoft's very first HoloLens, which recognized basic tap and click gestures: the new headset recognizes 21 points of articulation per hand and allows for improved, more realistic hand motions. The device is less bulky, and its eye tracking can measure eye movement and use it to interact with virtual objects. It is built to be a cloud- and edge-connected device, and the HoloLens 2 field of view more than doubles the area covered by HoloLens 1.

Microsoft said it plans to announce a follow-up to HoloLens 2 in the next year or two. According to Microsoft, that device will be even more comfortable and easier to use, and will do more than the HoloLens 2. HoloLens 2 is available for preorder and will ship later this year. The device has already found itself in the midst of a controversy after the US Army invested $480 million in more than 100,000 headsets, a contract that has stirred dissent among Microsoft workers.

#2 Azure-powered Kinect camera for enterprise
The Azure-powered Kinect camera is an "intelligent edge device that doesn't just see and hear but understands the people, the environment, the objects, and their actions," according to Azure VP Julia White. This AI-powered smart enterprise camera leverages Microsoft's 3D imaging technology and could serve as a companion hardware piece for HoloLens in the enterprise. The system has a 1-megapixel depth camera, a 12-megapixel camera and a seven-microphone array on board to help it work "with a range of compute types, and leverage Microsoft's Azure solutions to collect that data." The system, priced at $399, is available for pre-order.

#3 Azure Spatial Anchors
Azure Spatial Anchors launch as part of the Azure mixed reality services. These services will help developers and businesses build cross-platform, contextual and enterprise-grade mixed reality applications. According to the Azure blog, these mixed reality apps can map, designate and recall precise points of interest which are accessible across HoloLens, iOS, and Android devices. Developers can integrate their solutions with IoT services and artificial intelligence, and protect their sensitive data using security from Azure. Users can easily infuse artificial intelligence (AI) and integrate IoT services to visualize data from IoT sensors as holograms.

Spatial Anchors will allow users to map their space and connect points of interest "to create wayfinding experiences, and place shareable, location-based holograms without any need for environmental setup or QR codes". Users will also be able to manage identity, storage, security, and analytics with pre-built cloud integrations to accelerate their mixed reality projects.

#4 Unreal Engine 4 support for Microsoft HoloLens 2
During the Mobile World Congress (MWC), Epic Games founder and CEO Tim Sweeney announced that support for Microsoft HoloLens 2 will come to Unreal Engine 4 in May 2019. Unreal Engine will fully support HoloLens 2 with streaming and native platform integration. Sweeney said that "AR is the platform of the future for work and entertainment, and Epic will continue to champion all efforts to advance open platforms for the hardware and software that will power our daily lives." Unreal Engine 4 support for Microsoft HoloLens 2 will allow for "photorealistic" 3D in AR apps.

Head over to Microsoft's official blog for an in-depth insight on all the products released.

Unreal Engine 4.22 update: support added for Microsoft's DirectX Raytracing (DXR)
Microsoft acquires Citus Data with plans to create a 'Best Postgres Experience'
Microsoft joins the OpenChain Project to help define standards for open source software compliance

JFrog acquires DevOps startup ‘Shippable’ for an end-to-end DevOps solution

Melisha Dsouza
22 Feb 2019
2 min read
JFrog, a leading DevOps company, has acquired Shippable, a cloud-based startup focused on Kubernetes-ready continuous integration and delivery (CI/CD) that helps developers ship code and deliver application and microservice updates. This strategic acquisition, JFrog's fifth, aims to provide customers with a "complete, integrated DevOps pipeline solution". The collaboration between JFrog and Shippable will allow users to automate their development processes all the way from code commit to production.

Shlomi Ben Haim, co-founder and CEO of JFrog, says in the official press release that "The modern DevOps landscape requires ever-faster delivery with more and more automation. Shippable's outstanding hybrid and cloud native technologies will incorporate yet another best-of-breed solution into the JFrog platform. Coupled with our commitments to universality and freedom of choice, developers can expect a superior out-of-the-box DevOps platform with the greatest flexibility to meet their DevOps needs."

According to an email sent to Packt Hub, JFrog will now allow developers to have a completely integrated DevOps pipeline with JFrog, while still retaining full freedom to choose their own solutions in JFrog's universal DevOps model. The plan is to release the first technology integrations with JFrog Enterprise+ this coming summer, and a full integration by Q3 of this year. According to JFrog, the acquisition will result in a more automated, complete, open and secure DevOps solution on the market.

This is just another win for JFrog, which previously announced a $165 million Series D funding round. Last year, the company also launched JFrog Xray, a binary analysis tool that performs recursive security scans and dependency analyses on all standard software package and container types. Avi Cavale, founder and CEO of Shippable, says that Shippable users and customers will now "have access to leading security, binary management and other high-powered enterprise tools in the end-to-end JFrog Platform", and that the combined forces of JFrog and Shippable can make full DevOps automation from code to production a reality.

Spotify acquires Gimlet and Anchor to expand its podcast services
Microsoft acquires Citus Data with plans to create a 'Best Postgres Experience'
Adobe Acquires Allegorithmic, a popular 3D editing and authoring company


Google to acquire cloud data migration start-up ‘Alooma’

Melisha Dsouza
20 Feb 2019
2 min read
On Tuesday, Google announced its plans to acquire cloud migration company Alooma, which helps other companies move their data from multiple sources into a single data warehouse. Alooma not only provides services to help with migrating to the cloud, but also helps clean up this data and then use it for artificial intelligence and machine learning use cases. Google Cloud's blog states that "The addition of Alooma, subject to closing conditions, is a natural fit that allows us to offer customers a streamlined, automated migration experience to Google Cloud, and give them access to our full range of database services, from managed open source database offerings to solutions like Cloud Spanner and Cloud Bigtable".

The financial details of the deal haven't been released yet. In early 2016, Alooma raised about $15 million, including an $11.2 million Series A round led by Lightspeed Venture Partners and Sequoia Capital. Alooma's blog states that "Joining Google Cloud will bring us one step closer to delivering a full self-service database migration experience bolstered by the power of their cloud technology, including analytics, security, AI, and machine learning".

In a statement to TechCrunch, Google said, "Regarding supporting competitors, yes, the existing Alooma product will continue to support other cloud providers. We will only be accepting new customers that are migrating data to Google Cloud Platform, but existing customers will continue to have access to other cloud providers." This means that, after the deal closes, Alooma will not accept any new customers who want to migrate data to competing platforms such as Amazon Web Services or Microsoft Azure. Those who use Alooma in combination with AWS, Azure and other non-Google services will likely start looking for other solutions.

Microsoft acquires Citus Data with plans to create a 'Best Postgres Experience'
Autodesk acquires PlanGrid for $875 million, to digitize and automate construction workflows
GitHub acquires Spectrum, a community-centric conversational platform


workers.dev will soon allow users to deploy their Cloudflare Workers to a subdomain of their choice

Melisha Dsouza
20 Feb 2019
2 min read
Cloudflare users will very soon be able to deploy Workers without having a Cloudflare domain: they will be able to deploy their Cloudflare Workers to a subdomain of their choice under workers.dev. According to the Cloudflare blog, this is a step towards making it easy to get started with Workers and build a new serverless project from scratch.

Cloudflare Workers' serverless execution environment allows users to create new applications or improve existing ones without configuring or maintaining infrastructure. Cloudflare Workers run on Cloudflare's servers, not in the user's browser, meaning that the code runs in a trusted environment where it cannot be bypassed by malicious clients. (A hedged deployment sketch follows this article.)

The workers.dev domain was obtained through Google's TLD launch program. Customers can head over to workers.dev to claim a subdomain (one per user); the workers.dev site is itself fully served using Cloudflare Workers. Zack Bloom, Director of Product for Product Strategy at Cloudflare, says that workers.dev will be especially useful for serverless apps: without cold starts, users get instant scaling to almost any volume of traffic, making this type of serverless seem faster and cheaper.

Cloudflare Workers have received an enthusiastic response from users across the internet (source: Hacker News), and this news has been received with similar excitement:
https://twitter.com/MrAhmadAwais/status/1097919710249783297

You can head over to the Cloudflare blog for more information on this news.

Cloudflare's 1.1.1.1 DNS service is now available as a mobile app for iOS and Android
Cloudflare's Workers enable containerless cloud computing powered by V8 Isolates and WebAssembly
Cloudflare Workers KV, a distributed native key-value store for Cloudflare Workers
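Worker deployment today happens against an account you own; the sketch below is a heavily hedged illustration of uploading a Worker script through Cloudflare's v4 API. The endpoint path, auth headers, account ID and script name are assumptions based on the API's general conventions rather than details from this announcement, and worker.js is a placeholder file.

```sh
# Heavily hedged sketch -- not from the announcement. Uploads a Worker
# script via Cloudflare's v4 API; ACCOUNT_ID, the credentials and
# worker.js are placeholders, and the endpoint/headers are assumptions.
curl -X PUT \
  "https://api.cloudflare.com/client/v4/accounts/$ACCOUNT_ID/workers/scripts/my-worker" \
  -H "X-Auth-Email: you@example.com" \
  -H "X-Auth-Key: $CF_API_KEY" \
  -H "Content-Type: application/javascript" \
  --data-binary @worker.js
# Once workers.dev subdomains open up, a script like this would be
# reachable at something like https://my-worker.<your-subdomain>.workers.dev/
```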

User discovers bug in Debian stable kernel upgrade; armmp package affected

Melisha Dsouza
18 Feb 2019
3 min read
Last week, Jürgen Löb, a Debian user, discovered a bug in the linux-image-4.9.0-8-armmp-lpae package of the Debian stable kernel; the affected version is 4.9.144-3. The user reported that he updated his Lamobo R1 board with apt update; apt upgrade, but after the update, u-boot was stuck at "Starting kernel" with no further output. He hit the same issue on a Banana Pi 1 board.

He recovered his system with the following steps (a consolidated, commented sketch follows this article):
1. Downgrade to a backup kernel by mounting the boot partition on the SD card.
2. Extract the boot script: dd if=boot.scr of=boot.script bs=72 skip=1
3. In boot.script, replace setenv fk_kvers '4.9.0-8-armmp-lpae' with setenv fk_kvers '4.9.0-7-armmp-lpae' (a backup kernel was available on his boot partition).
4. Rebuild the boot image: mkimage -C none -A arm -T script -d boot.script boot.scr
5. After booting the old kernel, restore the previous package version (4.9.130-2) with: dpkg -i linux-image-4.9.0-8-armmp-lpae_4.9.130-2_armhf.deb

He cross-checked the issue and found that upgrading to 4.9.144-3 again after these steps brings back the unbootable behavior, concluding that the upgrade to 4.9.144-3 is causing the problem.

Timo Sigurdsson, another Debian user, stated: "I recovered both systems by replacing the contents of the directories /boot/ and /lib/modules/ with those of a recent backup (taken 3 days ago). After logging into the systems again, I downgraded the package linux-image-4.9.0-8-armmp-lpae to 4.9.130-2 and rebooted again in order to make sure no other package upgrade caused the issue. Indeed, with all packages up-to-date except linux-image-4.9.0-8-armmp-lpae, the systems work just fine. So, there must be a serious regression in 4.9.144-3 at least on armmp-lpae."

In response to the thread, multiple users reported other broken configurations, such as the plain armmp (non-lpae) kernel failing on Armada385/Caiman boards and under QEMU. Vagrant Cascadian, another user, added that all of the armhf boards running this kernel failed to boot, including:
- imx6: Cubox-i4pro, Cubox-i4x4, Wandboard Quad
- exynos5: Odroid-XU4
- exynos4: Odroid-U3
- rk3328: firefly-rk3288
- sunxi A20: Cubietruck

The Debian team has not yet issued an official response. You can head over to the Debian bugs page for more information on this news.

Google Project Zero discovers a cache invalidation bug in Linux memory management, Ubuntu and Debian remain vulnerable
Remote Code Execution Flaw in APT Linux Package Manager allows man-in-the-middle attack
Debian 9.7 released with fix for RCE flaw
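The reporter's recovery procedure, consolidated below as a commented sketch. The commands come from the bug report itself; the mount point is an assumption about where the SD card's boot partition is mounted.

```sh
# Recovery steps as described in the bug report; /mnt/boot is an assumed
# mount point for the SD card's boot partition.

# 1. Mount the boot partition and extract the u-boot script.
cd /mnt/boot
dd if=boot.scr of=boot.script bs=72 skip=1

# 2. Edit boot.script, replacing
#      setenv fk_kvers '4.9.0-8-armmp-lpae'
#    with the backup kernel still present on the partition:
#      setenv fk_kvers '4.9.0-7-armmp-lpae'

# 3. Rebuild the u-boot image from the edited script.
mkimage -C none -A arm -T script -d boot.script boot.scr

# 4. After booting the old kernel, roll the package back to 4.9.130-2.
dpkg -i linux-image-4.9.0-8-armmp-lpae_4.9.130-2_armhf.deb
```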


Serverless Computing 101

Guest Contributor
09 Feb 2019
5 min read
Serverless applications began gaining popularity when Amazon launched AWS Lambda back in 2014. Since then, serverless computing has grown exponentially in use, and more and more vendors are entering the market with their own solutions. The reason behind the hype is that serverless computing requires no infrastructure management, a modern approach that lets enterprises reduce their operational workload.

What is Serverless Computing?
It is a kind of software architecture in which application logic executes in an environment without visible processes, operating systems, servers, or virtual machines. With serverless computing, the infrastructure is provisioned and managed entirely by the service provider. Serverless describes a cloud service that abstracts the details of the underlying compute from its user; this does not mean servers are no longer needed, but that they are not user-specified or controlled. Serverless architecture covers applications that depend on third-party backend services (BaaS) as well as applications whose custom code runs in managed containers (FaaS).

[Image source: Tatvasoft]

The top serverless computing providers, including Amazon, Microsoft, Google and IBM, provide serverless offerings such as FaaS to companies like Netflix, Coca-Cola, CodePen and many more.

FaaS
Function as a Service is a cloud computing model in which developers write business logic functions or application code that is executed by the cloud provider. Developers can upload discrete pieces of functionality to the cloud, each of which can be executed independently. The cloud service provider manages everything from execution to automatic scaling. (A hedged AWS CLI sketch appears after this article.)

Key components of FaaS:
- Events - something that triggers the execution of the function, for instance uploading a file or publishing a message.
- Functions - independent units of deployment, for instance processing a file or performing a scheduled task.
- Resources - components used by the function, for instance file system services or database services.

BaaS
Backend as a Service allows developers to write and maintain only the frontend of the application, using backend services without building and maintaining them. BaaS providers offer pre-built capabilities such as user authentication, database management, remote updating, cloud storage and much more. Developers do not have to manage servers or virtual machines to keep their applications running, which helps them build and launch applications more quickly.

[Image courtesy: Gallantra]

Use cases of Serverless Computing
- Batch jobs and scheduled tasks: jobs that require intense parallel computation, IO or network access.
- Business logic: orchestration of microservice workloads that execute a series of steps.
- Chatbots: scale automatically at peak demand times.
- Continuous integration pipelines: remove the need for pre-provisioned build hosts.
- Capturing database changes: auditing or ensuring that modifications meet quality standards.
- HTTP REST APIs and web apps: traditional request/response workloads.
- Mobile backends: REST API backends built on top of BaaS APIs.
- Multimedia processing: functions that run a transformation in response to a file upload.
- IoT sensor input messages: receive signals and scale in response.
- Stream processing at scale: process data within a potentially infinite stream of messages.

Should you use Serverless Computing?

Merits
- Fully managed services - you do not have to worry about the execution process.
- Supports an event-triggered approach - priorities are set according to requirements.
- Scalability - load balancing is handled automatically.
- Pay only for execution time - you pay just for what you use.
- Quick development and deployment - run test cases without worrying about other components.
- Shorter time-to-market - you can see a refined product hours after creating it.

Demerits
- Third-party dependency - developers have to depend entirely on cloud service providers.
- Lack of operational tools - you depend on providers for debugging and monitoring tools.
- Higher complexity - managing many functions takes more time and is harder.
- Functions cannot run for long periods - only suitable for applications with shorter processes.
- Limited mapping to database indexes - configuring nodes and indexes is challenging.
- Stateless functions - resources cannot persist within a function after it exits.

Serverless computing can be seen as the future of the next generation of cloud-native development, and it is a new approach to writing and deploying applications that allows developers to focus only on the code. This approach helps reduce time to market along with operational costs and system complexity. Third-party services like AWS Lambda have eliminated the need to set up and configure physical servers or virtual machines. It is always best to take the advice of experts with years of experience in software development with modern technologies.

Author Bio: Working as a manager at the software outsourcing company Tatvasoft.com, Vikash Kumar has a keen interest in blogging and likes to share useful articles on computing. Vikash has also published bylines on major publications like KDnuggets, Entrepreneur, SAP and many more.

Google Cloud Firestore, the serverless, NoSQL document database, is now generally available
Kelsey Hightower on Serverless and Security on Kubernetes at KubeCon + CloudNativeCon
Introducing GitLab Serverless to deploy cloud-agnostic serverless functions and applications
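To make the FaaS workflow described above concrete, here is a hedged sketch using the AWS CLI and Lambda. The function name, IAM role ARN, handler and zip file are placeholders rather than details from the article.

```sh
# Hedged sketch of the FaaS workflow: package a function, create it,
# then trigger it with an event. All names and the role ARN are
# placeholders; the role must already grant basic Lambda execution.
zip function.zip handler.py
aws lambda create-function \
  --function-name hello-serverless \
  --runtime python3.7 \
  --handler handler.lambda_handler \
  --role arn:aws:iam::123456789012:role/lambda-basic-execution \
  --zip-file fileb://function.zip

# Invoke it directly with an event payload; in production the event
# would come from a source such as a file upload, a queue or an HTTP request.
aws lambda invoke --function-name hello-serverless \
  --payload '{"name": "world"}' response.json
```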