
Tech News - Networking

54 Articles

Oath’s distributed network telemetry collector ‘Panoptes’ is now open source!

Melisha Dsouza
04 Oct 2018
3 min read
Yesterday, the Oath network automation team open sourced Panoptes, a distributed system for collecting, enriching, and distributing network telemetry. This pluggable, distributed, and high-performance data collection system supports multiple polling formats, including SNMP and vendor-specific APIs. It also supports emerging streaming telemetry standards, including gNMI. Panoptes is written primarily in Python and leverages multiple open-source technologies to provide the most value for the least development effort.

Panoptes architecture (Source: Yahoo Developers)

The architecture is designed to enable easy data distribution and integration with other systems. A plugin that pushes metrics into InfluxDB allows Panoptes to evolve with industry standards, and the combination of Grafana and the InfluxData ecosystem lets teams quickly set up a fully featured monitoring environment.

Legacy polling systems suffered from several inherent issues: overpolling caused by multiple point solutions for metrics, a lack of data normalization, and inconsistent data enrichment and integration with infrastructure discovery systems. Panoptes aims to overcome all of these.

Check scheduling is handled by Celery, a horizontally scalable, open-source task scheduler that uses a Redis data store. Panoptes ships with a simple, CSV-based discovery system that can be integrated with a CMDB; from there, Panoptes manages the scheduling of polling for the desired devices. Users can also develop custom discovery plugins to integrate with their CMDB and other device inventory data sources. Vendors are moving towards a more streamlined model of telemetry, and Panoptes' flexible architecture should minimize the effort required to adopt these new protocols.

The metric bus at the center of the model is implemented on Kafka, and all data-plane transactions flow across this bus: discovery plugins publish devices to the bus, polling plugins publish metrics to the bus, and numerous clients read the data off the bus for additional processing and forwarding. This architecture enables easy data distribution and integration with other systems.

The team at Oath has deployed Panoptes in a tiered, federated model and has built numerous custom applications on the platform, including a load balancer monitor, a BGP session monitor, and a topology discovery application, all at a reduced cost thanks to Panoptes.

The open-source release is packaged for easy deployment into any Linux-based environment and is available on GitHub. Head over to the Yahoo Developer Network for deeper insights into this news.

Performing Vehicle Telemetry job analysis with Azure Stream Analytics tools
Anaconda 5.3.0 released, takes advantage of Python's speed and feature improvements
Arm releases free Cortex-M processor cores for FPGAs, includes measures to combat FOSSi threat
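To make the polling-and-publish pattern described above more concrete, here is a minimal, hypothetical Python sketch of a poller scheduled by Celery (with a Redis broker) that publishes its results onto a Kafka topic acting as the metric bus. It does not use Panoptes' actual plugin API; the device address, topic name, and the snmp_poll placeholder are invented for illustration.

import json
import time

from celery import Celery
from kafka import KafkaProducer  # kafka-python client

# Celery handles check scheduling, backed by a Redis broker, mirroring the
# Celery-plus-Redis arrangement the article describes.
app = Celery("poller", broker="redis://localhost:6379/0")

# Poll the (hypothetical) device every 60 seconds via Celery beat.
app.conf.beat_schedule = {
    "poll-example-device": {
        "task": "poller.poll_device",
        "schedule": 60.0,
        "args": ("192.0.2.1",),
    }
}

# The "metric bus": a Kafka topic that downstream consumers read from.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def snmp_poll(device_ip):
    # Placeholder for a real SNMP/gNMI poll; returns a dict of counters.
    return {"ifInOctets": 123456, "ifOutOctets": 654321}

@app.task
def poll_device(device_ip):
    # Collect, enrich with basic metadata, and publish to the metric bus.
    metrics = snmp_poll(device_ip)
    payload = {"device": device_ip, "timestamp": time.time(), "metrics": metrics}
    producer.send("metrics", payload)
    producer.flush()

A separate consumer, such as the InfluxDB-forwarding plugin the article mentions, would then read the "metrics" topic and write the data points to its own backend.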

VIAVI releases Observer 17.5, a network performance management and diagnostics tool

Natasha Mathur
04 Oct 2018
2 min read
Viavi Solutions, a San Jose-based network test, measurement, and assurance technology company, released version 17.5 of Observer, a popular NPMD (Network Performance Management and Diagnostics) tool, earlier this week. Observer 17.5 brings features such as End-User Experience Scores, full 100 GB support, an improved user experience, and enhanced analytic processing, among others.

Observer is recognized as a Leader in Gartner's Network Performance Management and Diagnostics (NPMD) Magic Quadrant. It is the network administrator's ultimate toolbox: it lets you discover your network, capture and decode network traffic, and use real-time statistics to solve network problems. Observer 17.5 aims to replace the detailed KPIs provided to network engineers with a single, result-oriented End-User Experience Score, which should reduce the guesswork and dead ends that come with the troubleshooting processes network teams typically use.

Let's look at Observer 17.5's key features.

End-User Experience Scoring and Workflows

Observer 17.5 integrates End-User Experience Scores with out-of-the-box workflows, empowering any engineer to navigate a guided path to resolution. Observer 17.5 is backed by complete wire data, so filtered, relevant insight can be handed to the appropriate IT parties to take corrective action.

Full 100 GB interface support

Observer provides full-fidelity forensics for investigations with interfaces for 10 and 40 GB; with version 17.5, it now also supports 100 GB. This ensures the accuracy and completeness of Observer's performance analytics in high-speed network environments. As network traffic volumes grow, it also makes sure that every metric reported by the IT team is backed by wire data for root-cause analysis and granular reconstruction.

Enhanced User Experience Understanding

Observer Apex implements adaptive machine learning that delivers intelligent user insight. This helps reduce false positives, as it builds a better understanding of normal environment behavior and user experience.

Improved Interfaces and Analytic Processing

User interfaces have been redesigned in Observer 17.5 to allow easier navigation and interaction across the key elements of the Observer platform. Real-time analytical performance has also improved in this version.

For more information, check out the official blog post.

Top 10 IT certifications for cloud and networking professionals in 2018
Top 5 cybersecurity assessment tools for networking professionals
Ansible 2 for automating networking tasks on Google Cloud Platform [Tutorial]
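The announcement above does not say how the End-User Experience Score is calculated, but the idea of collapsing several KPIs into one result-oriented number is easy to illustrate. The Python sketch below is purely hypothetical: the 0-10 scale, the weights, and the latency threshold are invented for the example and are not VIAVI's method.

from dataclasses import dataclass

@dataclass
class Transaction:
    latency_ms: float
    errored: bool

def experience_score(transactions, latency_slo_ms=500.0):
    # Collapse raw per-transaction KPIs into a single 0-10 score.
    if not transactions:
        return 10.0
    error_rate = sum(t.errored for t in transactions) / len(transactions)
    slow_rate = sum(t.latency_ms > latency_slo_ms for t in transactions) / len(transactions)
    # Weight errors more heavily than slowness (an arbitrary choice for illustration).
    penalty = 0.7 * error_rate + 0.3 * slow_rate
    return round(10.0 * (1.0 - penalty), 1)

sample = [Transaction(120, False), Transaction(800, False), Transaction(90, True)]
print(experience_score(sample))  # 6.7

The point of any such score is that an engineer starts from a single number and only drills into the underlying wire data when the score drops.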

Announcing Hyperswarm Preview

Melisha Dsouza
27 Sep 2018
2 min read
Connecting two computers over the Internet is difficult: software needs to negotiate NATs, firewalls, and limited IPv4 addresses. To overcome this, the Beaker browser team is releasing a new Kademlia DHT-based toolset for connecting peers called 'Hyperswarm'. Currently the team uses a tracker to get users connected; to move towards a more decentralized model, however, it has been working on Hyperswarm to improve the reliability of Dat project connections.

What is Hyperswarm?

Hyperswarm is a stack of networking modules that finds peers and creates reliable connections. Users join the swarm for a "topic" and periodically query other peers who are part of that topic. To establish a connection between peers, Hyperswarm creates a socket between them using either UTP or TCP. It uses a Kademlia DHT to track peers and arrange connections. The DHT itself includes mechanisms to establish a direct connection between two peers when one or both are behind firewalls or behind routers that use network address translation (NAT).

A few things you should know about Hyperswarm

Iterating on security: DHTs have a number of denial-of-service vectors. There are known mitigations, but they come with tradeoffs. The team is thinking through these tradeoffs and will iterate on them over time.

Hyperswarm is not anonymous: Hyperswarm does not hide users' IPs. Devices join topics by listing their IP so that other devices can establish connections. The Dat protocol, however, takes steps to hide the topics' contents. When downloading a dat, the protocol hashes the dat's key to create the swarm topic. Only those who know the dat's key can access the dat's data or create new connections to people in the topic, but the membership of a topic is public.

The deployment strategy: The team will be updating the tracker server to make the deployment backward compatible. This will make it possible for old Dat clients to connect using the tracker, while new clients connect using the DHT.

Hyperswarm is MIT-licensed open source and can be found in the following repositories: network, discovery, dht.

To know more about this preview release, head over to pfrazee.hasbase.io.

Linkerd 2.0 is now generally available with a new service sidecar design
Red Hat Enterprise Linux 7.6 Beta released with focus on security, cloud, and automation
Android Studio 3.2 releases with Android App Bundle, Energy Profiler, and more!
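The "hash the dat's key to create the swarm topic" step described above is straightforward to illustrate. The Python sketch below is conceptual only: it is not the exact derivation used by the Dat protocol or the Hyperswarm modules (which are written in JavaScript), and the key here is a random stand-in.

import hashlib
import os

def discovery_topic(dat_public_key: bytes) -> bytes:
    # One-way hash of the key: peers who know the key can compute the same
    # topic, but seeing the topic on the DHT does not reveal the key.
    return hashlib.blake2b(dat_public_key, digest_size=32).digest()

key = os.urandom(32)                 # stand-in for a dat's public key
print(discovery_topic(key).hex())    # the topic peers would announce and query

A peer who holds the dat's key derives the same topic and joins the swarm for it; an observer who only sees the topic learns who is in the swarm, but not what the dat contains.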

Linkerd 2.0 is now generally available with a new service sidecar design

Sugandha Lahoti
20 Sep 2018
2 min read
Linkerd 2.0 is now generally available. Linkerd is a transparent proxy that adds service discovery, routing, failure handling, and visibility to modern software applications. Linkerd 2.0 brings two significant changes: first, it has been completely rewritten to be faster and smaller than Linkerd 1.x; second, it moves beyond the service mesh model to running on a single service. It also comes with a focus on minimal configuration, a modular control plane design, and UNIX-style CLI tools. Let's understand what each of these changes means.

Smaller and faster

Linkerd has undergone a complete rewrite to become faster and smaller than its predecessor. Linkerd 2.0's data plane is comprised of ultralight Rust proxies which consume around 10 MB of RSS and have a p99 latency of under 1 ms. Linkerd's minimalist control plane (written in Go) is similarly designed for speed and a low resource footprint.

Service sidecar design

Linkerd 2.0 also moves from the traditional service mesh model to a modern service sidecar design. The traditional service mesh model has two major problems: it adds a significant layer of complexity to the tech stack, and it is designed to meet the needs of platform owners while leaving service owners underserved. Linkerd 2.0's service sidecar design offers a solution to both. It allows platform owners to build out a service mesh incrementally, one service at a time, while still providing the security and reliability a full service mesh offers. More importantly, Linkerd 2.0 addresses the needs of service owners directly through its service sidecar model and its focus on diagnostics and debugging.

Linkerd 2.0 at its core is a service sidecar, running on a single service without requiring cluster-wide installation. Even without a whole Kubernetes cluster, developers can run Linkerd and get:

Instant Grafana dashboards of a service's success rates, latencies, and throughput
A topology graph of incoming and outgoing dependencies
A live view of requests being made to your service
Improved, latency-aware load balancing

Installation

Installing Linkerd 2.0 on a service requires no configuration or code changes. You can try Linkerd 2.0 on a Kubernetes 1.9+ cluster in 60 seconds by running:

curl https://run.linkerd.io/install | sh

Also check out the full Getting Started Guide. Linkerd 2.0 is also hosted on GitHub.

Google Cloud hands over Kubernetes project operations to CNCF, grants $9M in GCP credits
Kubernetes 1.11 is here!
VMware Kubernetes Engine (VKE) launched to offer Kubernetes-as-a-Service
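The latency-aware load balancing mentioned in the feature list above is worth unpacking. The Python sketch below is a generic illustration of the idea (track a moving average of each endpoint's latency and prefer the faster of two random picks); it is not Linkerd's actual implementation, which lives inside its Rust proxy, and the endpoint addresses are made up.

import random

class Endpoint:
    def __init__(self, address, alpha=0.3):
        self.address = address
        self.alpha = alpha            # EWMA smoothing factor
        self.ewma_latency_ms = 0.0

    def record(self, latency_ms):
        # Blend each new latency observation into the running average.
        self.ewma_latency_ms = (
            self.alpha * latency_ms + (1 - self.alpha) * self.ewma_latency_ms
        )

def pick(endpoints):
    # "Power of two choices": sample two endpoints, route to the faster one.
    a, b = random.sample(endpoints, 2)
    return a if a.ewma_latency_ms <= b.ewma_latency_ms else b

endpoints = [Endpoint("10.0.0.1:8080"), Endpoint("10.0.0.2:8080"), Endpoint("10.0.0.3:8080")]
endpoints[0].record(120)
endpoints[1].record(30)
endpoints[2].record(45)
print(pick(endpoints).address)  # usually one of the lower-latency endpoints

Sampling two candidates rather than scanning every endpoint keeps the choice cheap while still steering traffic away from slow instances.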

Red Hat infrastructure migration solution for proprietary and siloed infrastructure

Savia Lobo
24 Aug 2018
3 min read
Red Hat recently introduced its infrastructure migration solution to help provide an open pathway to digital transformation. The Red Hat infrastructure migration solution provides an enterprise-ready pathway to cloud-native application development via Linux containers, Kubernetes, automation, and other open source technologies. It helps organizations accelerate transformation by more safely migrating and managing workloads to an open source infrastructure platform, thereby reducing cost and speeding up innovation.

Joe Fernandes, Vice President, Cloud Platforms Products at Red Hat, said, "Legacy virtualization infrastructure can serve as a stumbling block too, rather than a catalyst, for IT innovation. From licensing costs to closed vendor ecosystems, these silos can hold organizations back from evolving their operations to better meet customer demand. We're providing a way for enterprises to leapfrog these legacy deployments and move to an open, flexible, enterprise platform, one that is designed for digital transformation and primed for the ecosystem of cloud-native development, Kubernetes, and automation."

The Red Hat program consists of three phases:

Discovery Session: Red Hat Consulting engages with an organization in a complimentary Discovery Session to better understand the scope of the migration and document it effectively.

Pilot Migrations: An open source platform is deployed and operationalized using Red Hat's hybrid cloud infrastructure and management tooling. Pilot migrations are carried out to demonstrate typical approaches, establish initial migration capability, and define the requirements for a larger-scale migration.

Migration at scale: IT teams migrate workloads at scale. Red Hat Consulting also helps streamline operations across the virtualization pool and navigate complex migration cases.

After the Discovery Session, recommendations are provided for a more flexible open source virtualization platform based on Red Hat technologies. These include:

Red Hat Virtualization, which offers an open, software-defined infrastructure and centralized management platform for virtualized Linux and Windows workloads. It is designed to give customers greater efficiency for traditional workloads, while creating a launchpad for cloud-native and container-based application innovation.

Red Hat OpenStack Platform, which is built on the enterprise-grade backbone of Red Hat Enterprise Linux. It helps users build an on-premise cloud architecture that provides resource elasticity, scalability, and increased efficiency.

Red Hat Hyperconverged Infrastructure, a portfolio of solutions that includes Red Hat Hyperconverged Infrastructure for both Virtualization and Cloud. Customers can use it to integrate compute, network, and storage in a form factor designed to provide greater operational and cost efficiency.

Using the new migration capabilities based on Red Hat's management technologies, including Red Hat Ansible Automation, new workloads can be delivered in an automated, self-service fashion. These capabilities also enable IT to more quickly re-create workloads across hybrid and multi-cloud environments.

Read more about the Red Hat infrastructure migration solution on Red Hat's official blog.

Zefflin Systems unveils ServiceNow Plugin for Red Hat Ansible 2.0
Red Hat Enterprise Linux 7.5 (RHEL 7.5) now generally available
Installing Red Hat CloudForms on Red Hat OpenStack

Red Hat Enterprise Linux 7.6 Beta released with focus on security, cloud, and automation

Sugandha Lahoti
24 Aug 2018
2 min read
Red Hat has rolled out the Red Hat Enterprise Linux 7.6 beta as part of its goal of becoming a cloud powerhouse. This release focuses on security and compliance, automation, and cloud deployment features.

Linux security improvements

Security-related improvements include:

GnuTLS library with Hardware Security Module (HSM) support
Strengthened OpenSSL for mainframes
Enhancements to the nftables firewall
Integration of the extended Berkeley Packet Filter (eBPF) to provide a safer mechanism for monitoring Linux kernel activity

Hybrid cloud deployment-related changes

Red Hat Enterprise Linux 7.6 introduces a variety of cloud deployment improvements. Red Hat's Paul Cormier considers the hybrid cloud to be the default technology choice: "Enterprises want the best answers to meet their specific needs, regardless of whether that's through the public cloud or on bare metal in their own datacenter."

For starters, Red Hat Enterprise Linux 7.6 uses Trusted Platform Module (TPM) 2.0 hardware modules to enable Network Bound Disk Encryption (NBDE). This provides two layers of security for hybrid cloud operations: the network-based mechanism works in the cloud, while on-premises TPM helps keep information on disks more secure.

Red Hat has also introduced Podman, part of Red Hat's lightweight container toolkit, which adds enterprise-grade security features to containers. Podman complements Buildah and Skopeo by enabling users to run, build, and share containers using the command-line interface. It can also work with CRI-O, a lightweight Kubernetes container runtime.

Management and automation

The latest beta also adds enhancements to the Red Hat Enterprise Linux web console, including:

Available updates shown on the system summary pages
Automatic configuration of single sign-on for identity management, helping to simplify this task for security administrators
An interface to control firewall services

These are just a select few updates. For more detailed coverage, go through the release notes available on the Red Hat Blog.

Red Hat Enterprise Linux 7.5 (RHEL 7.5) now generally available
What RedHat and others announced at KubeCon + CloudNativeCon 2018
RedHat and others launch Istio 1.0 service mesh for microservices

Amazon may be planning to move from Oracle by 2020

Natasha Mathur
07 Aug 2018
3 min read
Amazon is reportedly working towards shifting its business away from Oracle's database software by 2020, as per a CNBC report last week. According to the report, Amazon has already started to transfer most of its infrastructure internally to Amazon Web Services and will shift entirely by the first quarter of 2020.

Amazon and Oracle have long been fierce competitors, each arguing that its products and services are superior. But Amazon has also been a major Oracle customer: it has leveraged Oracle's database software for many years to power its infrastructure for retail and cloud businesses. Oracle's database has been a market standard since the 1990s and is one of the most important products for many organizations across the globe, providing the databases they run their operations on. Despite having started off its business with Oracle, Amazon launched AWS back in 2006, taking Oracle's SQL-based database head on and stealing away many of Oracle's customers.

This is not the first time news of Amazon's shift away from Oracle has stirred up; Amazon's plans to move away from Oracle technology came to light back in January this year. As per a statement issued to CNBC on August 1, a spokesperson for Oracle mentioned that Amazon had "spent hundreds of millions of dollars on Oracle technology" over the past many years. In fact, Larry Ellison, Oracle's executive chairman and CTO, mentioned during Oracle's second-quarter fiscal 2018 earnings call that "A company you've heard of just gave us another $50 million this quarter to buy Oracle database and other Oracle technology. That company is Amazon."

The recent news of Amazon's migration comes at a time of substantial growth for AWS. AWS saw a 49% growth rate in Q2 2018, while Oracle's business has remained stagnant for four years, putting more pressure on the company. There has also been an increase in Amazon's "backlog revenue" (the total value of the company's future contract obligations), which has reached $16 billion, up from $12.4 billion in May. In addition, AWS has consistently appeared as a "Leader" in Gartner's Magic Quadrant for Cloud Infrastructure as a Service (IaaS) for the past six years.

There have also been regular wars of words between Larry Ellison and Andy Jassy, CEO of AWS, over each other's performance during conference keynotes and analyst calls. Andy Jassy took a shot at Oracle last year during his keynote at AWS's big tech conference: "Oracle overnight doubled the price of its software on AWS. Who does that to their customers? Someone who doesn't care about the customer but views them as a means to their financial ends." Larry Ellison also slammed Amazon during the Oracle OpenWorld conference last year, saying "Oracle's services are just plain better than AWS" and that Amazon is "one of the biggest Oracle users on Planet Earth".

With other cloud services such as AWS, Microsoft, Google, Alibaba, and IBM catching up, Oracle seems to be losing the database race. So, if Amazon does decide to phase out Oracle, Oracle will have to step up its game big time to win back cloud market share.

Oracle makes its Blockchain cloud service generally available
Integrate applications with AWS services: Amazon DynamoDB & Amazon Kinesis [Tutorial]
AWS Elastic Load Balancing: support added for Redirects and Fixed Responses in Application Load Balancer

Google, IBM, RedHat and others launch Istio 1.0 service mesh for microservices

Savia Lobo
01 Aug 2018
3 min read
Istio, an open-source platform that connects, manages, and secures microservices, has announced its version 1.0. Istio provides a service mesh for microservices from Google, IBM, Lyft, Red Hat, and other collaborators from the open-source community.

What's Istio?

Popularly known as a service mesh, Istio collects logs, traces, and telemetry, and then adds security and policy without embedding client libraries. Istio also acts as a platform that provides APIs for integration with systems for logging, telemetry, and policy. It helps measure the actual traffic between services, including requests per second, error rates, and latency, and it generates a dependency graph showing how services affect one another.

Istio offers a helping hand to your DevOps team by providing tools to run distributed apps smoothly. Here's a list of what Istio does for your team:

Performs canary rollouts, allowing the DevOps team to smoke-test any new build and ensure good build performance.
Offers fault injection, retry logic, and circuit breaking so that DevOps teams can do more testing and change network behavior at runtime to keep applications up and running.
Adds security: it can be used to layer mTLS on every call, adding encryption in flight with the ability to authorize every single call on your cluster and mesh.

What's new in Istio 1.0?

Multi-cluster support for Kubernetes: Multiple Kubernetes clusters can now be added to a single mesh, enabling cross-cluster communication and consistent policy enforcement. Multi-cluster support is now in beta.

Networking APIs now in beta: Networking APIs that enable fine-grained control over the flow of traffic through a mesh are now in beta. Explicitly modeling ingress and egress concerns using Gateways allows operators to control the network topology and meet access security requirements at the edge.

Mutual TLS can be rolled out incrementally without updating all clients: Mutual TLS can now be rolled out incrementally without requiring all clients of a service to be updated. This is a critical feature that unblocks in-place adoption by existing production deployments.

Istio's Mixer now supports out-of-process adapters: Mixer now has support for developing out-of-process adapters. This will become the default way to extend Mixer over the coming releases and makes building adapters much simpler.

Updated authorization policies: Authorization policies, which control access to services, are now entirely evaluated locally in Envoy, increasing their performance and reliability.

Recommended install method: Helm chart installation is now the recommended install method, offering rich customization options to adopt Istio on your terms.

Istio 1.0 also includes performance improvements backed by continuous regression testing, large-scale environment simulation, and targeted fixes. Read more about Istio 1.0 in its official release notes.

6 Ways to blow up your Microservices!
How to build Dockers with microservices
How to build and deploy Microservices using Payara Micro

Red Hat Enterprise Linux 7.5 (RHEL 7.5) now generally available

Savia Lobo
11 May 2018
2 min read
Red Hat recently announced that its latest enterprise distribution, Red Hat Enterprise Linux 7.5 (RHEL 7.5), is now generally available. This release aims at simplifying hybrid computing and is packed with features for server administrators and developers.

New features in RHEL 7.5

Support for Network Bound Disk Encryption (NBDE) devices, new Red Hat cluster management capabilities, and compliance management features.

Enhancements to the Cockpit administrator console. Cockpit provides a simplified web interface to help eliminate complexities around Linux system administration, making it easier for new administrators, or administrators moving over from non-Linux systems, to understand the health and status of their operations.

Improved compliance controls and security, enhanced usability, and tools to help cut down storage costs.

Better integration with Microsoft Windows infrastructure, both in Microsoft Azure and on-premise. This includes improved management of and communication with Windows Server, more secure data transfers with Azure, and performance improvements when used within Active Directory architectures. If you want to run both RHEL and Windows on your network, RHEL 7.5 serves this purpose.

Improved software security controls that alleviate risk while also augmenting IT operations. A significant component of these controls is security automation via the integration of OpenSCAP with Red Hat Ansible Automation. This is aimed at facilitating the development of Ansible Playbooks straight from OpenSCAP scans, which can in turn be used to execute remediations more consistently and quickly across a hybrid IT environment.

High-availability support for enterprise applications running on Amazon Web Services or Microsoft Azure, with Pacemaker support in public clouds via the Red Hat High Availability Add-On and Red Hat Enterprise Linux for SAP Solutions.

To know more about this release in detail, read the official Red Hat blog.

Linux Foundation launches the Acumos AI Project to make AI accessible
How to implement In-Memory OLTP on SQL Server in Linux
Kali Linux 2 released