
Tech News - Networking

54 Articles

Amazon launches VPC Traffic Mirroring for capturing and inspecting network traffic

Amrata Joshi
26 Jun 2019
4 min read
Yesterday the team at AWS launched VPC Traffic Mirroring, a new feature that can be used with existing Virtual Private Clouds (VPCs) for capturing and inspecting network traffic at scale.

https://twitter.com/nickpowpow/status/1143550924125868033

Features of VPC Traffic Mirroring

Detecting and responding to network attacks
With VPC Traffic Mirroring, users can detect network and security anomalies, extract traffic of interest from any workload in a VPC, and route it to detection tools. Users can now detect and respond to attacks more quickly than with traditional log-based tools.

Better network visibility
Users get the network visibility and control needed to make better security decisions.

Regulatory and compliance requirements
It is now possible to meet regulatory and compliance requirements that mandate monitoring, logging, and so on.

Troubleshooting
Users can mirror application traffic internally for testing and troubleshooting, and analyze traffic patterns to proactively locate choke points that would hamper application performance. The blog post reads, “You can think of VPC Traffic Mirroring as a “virtual fiber tap” that gives you direct access to the network packets flowing through your VPC.”

Mirror traffic from any EC2 instance
Users can choose to capture all traffic, or use filters to capture only the packets of particular interest and limit the number of bytes captured per packet. VPC Traffic Mirroring can be used in a multi-account AWS environment to capture traffic from VPCs spread across many AWS accounts. Users can mirror traffic from any EC2 instance powered by the AWS Nitro system. It is now possible to replicate the network traffic from an EC2 instance within an Amazon Virtual Private Cloud (Amazon VPC) and forward that traffic to security and monitoring appliances for use cases such as threat monitoring, content inspection, and troubleshooting. These appliances can be deployed on an individual Amazon EC2 instance or a fleet of instances behind a Network Load Balancer (NLB) with a User Datagram Protocol (UDP) listener. Amazon VPC Traffic Mirroring also supports traffic filtering and packet truncation, allowing customers to extract only the traffic they are interested in monitoring.

Improved security
VPC Traffic Mirroring captures packets at the Elastic Network Interface (ENI) level, where they cannot be tampered with, thus strengthening security. Users can choose from the wide range of monitoring solutions that are integrated with Amazon VPC Traffic Mirroring on AWS Marketplace.

Key elements of VPC Traffic Mirroring

Mirror source: An AWS network resource within a particular VPC that serves as the source of traffic. VPC Traffic Mirroring supports Elastic Network Interfaces (ENIs) as mirror sources.

Mirror target: An ENI or Network Load Balancer that serves as the destination for the mirrored traffic. The mirror target can be in the same AWS account as the mirror source, or in a different account to implement the central-VPC model.

Mirror filter: A specification of the inbound or outbound traffic to be captured or skipped. It can specify a protocol, port ranges for the source and destination, and CIDR blocks for the source and destination.

Traffic mirror session: A connection between a mirror source and a target that uses a filter. Sessions are numbered and evaluated in order, and the first match (accept or reject) determines the fate of a packet. A given packet is sent to at most one target.

VPC Traffic Mirroring is now available in all commercial AWS Regions except Asia Pacific (Sydney), China (Beijing), and China (Ningxia); per the official post, support for these regions is still pending and will be added soon.
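The key elements above map fairly directly onto parameters of the EC2 API, e.g. boto3's create_traffic_mirror_filter_rule and create_traffic_mirror_session calls. The following is a minimal sketch of those request parameters; all resource IDs are placeholders and setup of the filter and target themselves is omitted:

```python
# Illustrative request parameters for VPC Traffic Mirroring, shaped like the
# inputs to boto3's EC2 create_traffic_mirror_filter_rule and
# create_traffic_mirror_session. Resource IDs are placeholders.

def mirror_filter_rule(filter_id, cidr, port_from, port_to):
    """An inbound rule accepting TCP traffic from a CIDR block on a port range."""
    return {
        "TrafficMirrorFilterId": filter_id,
        "TrafficDirection": "ingress",
        "RuleNumber": 100,            # rules are evaluated in order
        "RuleAction": "accept",
        "Protocol": 6,                # TCP (IANA protocol number)
        "SourceCidrBlock": cidr,
        "DestinationCidrBlock": "0.0.0.0/0",
        "DestinationPortRange": {"FromPort": port_from, "ToPort": port_to},
    }

def mirror_session(source_eni, target_id, filter_id, session_number=1, snap_len=None):
    """Connects a mirror source (an ENI) to a target (ENI or NLB) via a filter.
    A given packet is sent to at most one target."""
    params = {
        "NetworkInterfaceId": source_eni,
        "TrafficMirrorTargetId": target_id,
        "TrafficMirrorFilterId": filter_id,
        "SessionNumber": session_number,
    }
    if snap_len is not None:
        # Packet truncation: limit the number of bytes captured per packet.
        params["PacketLength"] = snap_len
    return params

rule = mirror_filter_rule("tmf-0123", "10.0.0.0/16", 443, 443)
session = mirror_session("eni-0abc", "tmt-0def", "tmf-0123", snap_len=128)
# With real IDs: boto3.client("ec2").create_traffic_mirror_session(**session)
```

With real resource IDs, each dict would be passed as keyword arguments to the corresponding boto3 call, as the final comment suggests.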
To know more about this news, check out Amazon’s official blog post.

Amazon adds UDP load balancing support for Network Load Balancer
Amazon patents AI-powered drones to provide ‘surveillance as a service’
Amazon is being sued for recording children’s voices through Alexa without consent


Amazon adds UDP load balancing support for Network Load Balancer

Vincy Davis
25 Jun 2019
3 min read
Yesterday, Amazon announced support for load balancing UDP traffic on Network Load Balancers, which enables users to deploy connectionless services for online gaming, IoT, streaming, media transfer, and native UDP applications. This has been a long-requested feature by Amazon customers. The Network Load Balancer is designed to handle tens of millions of requests per second while maintaining high throughput at ultra-low latency, with no effort on the user's part. With UDP load balancing, users no longer need to maintain a fleet of proxy servers to ingest UDP traffic, and can instead use the same load balancer for both TCP and UDP traffic, simplifying the network architecture, reducing cost, and improving scalability.

Supported Targets
UDP on Network Load Balancers is supported for instance target types only. It does not support IP target types or PrivateLink.

Health Checks
Health checks must be done using TCP, HTTP, or HTTPS. Users can check on the health of a service by clicking override and specifying a health check on the selected port. Users can then run a custom implementation of Syslog that stores the log messages centrally and in a highly durable form.

Multiple Protocols
A single Network Load Balancer can handle both TCP and UDP traffic. In situations like DNS, where both TCP and UDP support is needed on the same port, users can set up a multi-protocol target group and a multi-protocol listener.

New CloudWatch Metrics
The existing CloudWatch metrics (ProcessedBytes, ActiveFlowCount, and NewFlowCount) now represent the aggregate traffic processed by the TCP, UDP, and TLS listeners on a given Network Load Balancer.

Users who host DNS, SIP, SNMP, Syslog, RADIUS, and other UDP services in their own data centers can now move those services to AWS. It is also possible to deploy services to handle Authentication, Authorization, and Accounting, often known as AAA.
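As a rough sketch of how this maps onto the Elastic Load Balancing v2 API (boto3's create_target_group and create_listener calls), the following builds the request parameters for a UDP service behind a Network Load Balancer; the names and ARNs are placeholders:

```python
# Illustrative ELBv2 request parameters for a UDP service on an NLB, shaped
# like the inputs to boto3's create_target_group / create_listener.

def udp_target_group(name, vpc_id, port):
    """UDP supports instance targets only, and the health check itself must
    use TCP, HTTP, or HTTPS, since UDP cannot be health checked directly."""
    return {
        "Name": name,
        "Protocol": "UDP",           # "TCP_UDP" for DNS-style same-port services
        "Port": port,
        "VpcId": vpc_id,
        "TargetType": "instance",    # "ip" targets are not supported for UDP
        "HealthCheckProtocol": "TCP",
        "HealthCheckPort": "traffic-port",
    }

def udp_listener(lb_arn, tg_arn, port):
    """A UDP listener that forwards to the target group above."""
    return {
        "LoadBalancerArn": lb_arn,
        "Protocol": "UDP",
        "Port": port,
        "DefaultActions": [{"Type": "forward", "TargetGroupArn": tg_arn}],
    }

# Example: a Syslog service on UDP port 514 (placeholder VPC ID and ARNs).
tg = udp_target_group("syslog-tg", "vpc-0123", 514)
listener = udp_listener("lb-arn-placeholder", "tg-arn-placeholder", 514)
```

For the same-port DNS case the article mentions, both the target group and the listener would use the "TCP_UDP" protocol instead of "UDP".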
Earlier this year, Amazon launched TLS Termination support for Network Load Balancer. It simplifies the process of building secure web applications by allowing users to make use of TLS connections that terminate at a Network Load Balancer. Users are delighted with Amazon’s support for load balancing UDP traffic.

https://twitter.com/cgswong/status/1143312489360183296

A user on Hacker News comments, “This is a Big Deal because it enables support for QUIC, which is now being standardized as HTTP/3. To work around the TCP head of line blocking problem (among others) QUIC uses UDP. QUIC does some incredible patching over legacy decisions in the TCP and IP stack to make things faster, more reliable, especially on mobile networks, and more secure.” Another comment reads, “This is great news, and something I’ve been requesting for years. I manage an IoT backend based on CoAP, which is typically UDP-based. I’ve looked at Nginx support for UDP, but a managed load balancer is much more appealing.” Some users see this as Amazon’s way of preparing ‘HTTP/3 support’ for the future.

https://twitter.com/atechiethought/status/1143240391870832640

Another user on Hacker News wrote, “Nice! I wonder if this is a preparatory step for future quic/http3 support?” For details on how to create a UDP Network Load Balancer, head over to Amazon’s official blog.

Amazon patents AI-powered drones to provide ‘surveillance as a service’
Amazon is being sued for recording children’s voices through Alexa without consent
Amazon announces general availability of Amazon Personalize, an AI-based recommendation service


Lyft announces Envoy Mobile, an iOS and Android client network library for mobile application networking

Sugandha Lahoti
19 Jun 2019
3 min read
Yesterday, Lyft released the initial OSS preview of Envoy Mobile, an iOS and Android client network library that brings Lyft’s Envoy Proxy to mobile platforms.

https://twitter.com/mattklein123/status/1140998175722835974

Envoy Proxy was initially built at Lyft to solve the networking and observability issues inherent in large polyglot server-side microservice architectures. It soon gained large-scale public appreciation and was used by major public cloud providers, end-user companies, and infrastructure startups. Now, Envoy Proxy is brought to iOS and Android platforms, providing an API and abstraction for mobile application networking. Envoy Mobile is currently at a very early stage of development. The initial release brings the following features:

The ability to compile Envoy on both Android and iOS: Envoy Mobile uses intelligent protobuf code generation and an abstract transport to help both iOS and Android provide similar interfaces and ergonomics for consuming APIs.
The ability to run Envoy on a thread within an application, utilizing it effectively as an in-process proxy server.
Swift/Obj-C/Kotlin demo applications that utilize the exposed Swift/Obj-C/Kotlin “raw” APIs to interact with Envoy and make network calls.

Long-term goals
Envoy Mobile initially provides Swift APIs for iOS and Kotlin APIs for Android, but depending on community interest the team will consider adding support for additional languages in the future. In the long term, they also plan to include the gRPC Server Reflection Protocol in a streaming reflection service API. This API will allow both Envoy and Envoy Mobile to fetch generic protobuf definitions from a central IDL service, which can then be used to implement annotation-driven networking via reflection. They also plan to bring xDS configuration to mobile clients, in the form of routing, authentication, failover, load balancing, and other policies driven by a global load balancing system.

Envoy Mobile can also add cross-platform functionality when using strongly typed IDL APIs. Some examples of annotations planned in the roadmap are caching, priority, streaming, marking an API as offline/deferred capable, and more.

Envoy Mobile is getting loads of appreciation from developers, with many happy that Lyft has open sourced its development. A comment on Hacker News reads, “I really like how they're releasing this as effectively the first working proof of concept and committing to developing the rest entirely in the open - it's a great opportunity to see how a project of this scale plays out in real-time on GitHub.”

https://twitter.com/omerlh/status/1141225499139682305
https://twitter.com/dinodaizovi/status/1141157828247347200

Currently the project is in a pre-release stage. Not all features are implemented, and it is not ready for production use. However, you can get started here. Also see the demo release and the roadmap, where the team plans to develop Envoy Mobile entirely in the open.

Related News
Uber and Lyft drivers go on strike a day before Uber IPO roll-out
Lyft introduces Amundsen, a data discovery and metadata engine for its researchers and data scientists
Lyft acquires computer vision startup Blue Vision Labs, in a bid to win the self-driving car race


Untangle releases zSeries appliances and NG Firewall v14.2 for enhanced Network Security Framework

Amrata Joshi
12 Jun 2019
2 min read
Yesterday, Untangle, a company that provides network security for SMBs (Small and Midsize Businesses) and distributed enterprises, announced the release of its zSeries appliances. The zSeries appliances will provide better performance and functionality at a lower price for SMBs as well as distributed enterprises with cloud-managed next-generation firewalls. The zSeries includes five appliances, from small desktop models to 1U rackmount servers, as well as a wireless option. All of these appliances come preloaded with NG Firewall 14.2, Untangle’s network security software product, which makes deployment easy. The zSeries appliances are now available for purchase on the Untangle website.

Heather Paunet, vice president of product management at Untangle, said, “The zSeries offers a simplified lineup to suit customers from branch offices to large campuses. Key upgrades available with the zSeries include faster processors, more RAM, NVMe SSD storage on the z6 and above, and fiber connectivity on the z12 and above.” She further added, “It’s never been easier to deploy cost-effective, cloud-managed network security across dispersed networks while ensuring a consistent security posture for organizations of any size.”

NG Firewall v14.2 packed with enhancements to web security and content filtering
Untangle NG Firewall 14.2 comes with enhancements to web security and content filtering. It also offers the ability to synchronize users with Azure Active Directory and brings enhancements to intrusion detection and prevention. NG Firewall has won the 2019 Security Today Government Security Awards (“The Govies”) for Network Security. NG Firewall v14.2 offers options for flagging, blocking, and alerting based on search terms for YouTube, Yahoo, Google, Bing, and Ask. With this firewall, YouTube searches can now be easily logged, and usage can be locked down to show only content that meets the ‘safe search’ criteria.

Untangle NG Firewall 14.2 is available as a free upgrade for existing customers. Join Untangle for the Community Webinar: zSeries and NG Firewall v14.2 on June 18, 2019 to learn more about the features in 14.2 and the new zSeries appliances. To know more about this news, check out the press release.

Untangle VPN Services
PyPI announces 2FA for securing Python package downloads
All Docker versions are now vulnerable to a symlink race attack


SpaceX shares new information on Starlink after the successful launch of 60 satellites

Sugandha Lahoti
27 May 2019
3 min read
After the successful launch of Elon Musk’s mammoth space mission Starlink last week, the company has unveiled a brand-new website with more details on the Starlink commercial satellite internet service.

Starlink sent 60 communications satellites into orbit, which will eventually be part of a single constellation providing high-speed internet to the globe. SpaceX has plans to deploy nearly 12,000 satellites in three orbital shells by the mid-2020s, initially placing approximately 1,600 in a 550-kilometer (340 mi) altitude shell. The new website gives a few glimpses of what Starlink’s plan looks like, including a CG representation of how the satellites will work. The satellites will move along their orbits simultaneously, providing internet in a given area. SpaceX has also revealed more intricacies about the satellites.

Flat panel antennas
In each satellite, the signal is transmitted and received by four high-throughput phased-array radio antennas. These antennas have a flat panel design and can transmit in multiple directions and frequencies.

Ion propulsion system and solar array
Each satellite carries a krypton ion propulsion system. These systems enable satellites to raise their orbit, maneuver in space, and deorbit. There is also a single solar array, chosen to simplify the system. Ion thrusters provide a more fuel-efficient form of propulsion than conventional liquid propellants. The system uses krypton, which is less expensive than xenon but offers lower thrust efficiency.

Star Tracker and autonomous collision avoidance system
Star Tracker is SpaceX’s built-in navigation sensor system, which tells each satellite its attitude for precise broadband throughput placement and tracking. The collision avoidance system uses inputs from the U.S. Department of Defense debris tracking system, reducing human error with a more reliable approach. Through this data it can perform maneuvers to avoid collisions with space debris and other spacecraft.

Per Techcrunch, who interviewed a SpaceX representative, “the debris tracker hooks into the Air Force’s Combined Space Operations Center, where trajectories of all known space debris are tracked. These trajectories are checked against those of the satellites, and if a possible collision is detected the course changes are made, well ahead of time.” Source: Techcrunch

More information on Starlink (such as the cost of the project, what ground stations look like, etc.) is yet unknown. Until then, keep an eye on Starlink’s website and this space for new updates.

SpaceX delays launch of Starlink, its commercial satellite internet service, for the second time to “update satellite software”
Jeff Bezos unveils space mission: Blue Origin’s Lunar lander to colonize the moon
Elon Musk reveals big plans with Neuralink


SpaceX delays launch of Starlink, its commercial satellite internet service, for the second time to “update satellite software”

Bhagyashree R
17 May 2019
4 min read
Update: On 23rd May, SpaceX successfully launched 60 satellites of the company’s Starlink constellation to orbit after a launch from Cape Canaveral. “This is one of the hardest engineering projects I’ve ever seen done, and it’s been executed really well,” said Elon Musk, SpaceX’s founder and CEO, during a press briefing last week. “There is a lot of new technology here, and it’s possible that some of these satellites may not work, and in fact a small possibility that all the satellites will not work. We don’t want to count anything until it’s hatched, but these are, I think, a great design and we’ve done everything we can to maximize the probability of success,” he said.

On Wednesday night, SpaceX was all set to send a Falcon 9 rocket into space carrying the very first 60 satellites for its new Starlink commercial satellite internet service. And while everyone was eagerly waiting for the launch webcast, heavy winds ruined the show. SpaceX rescheduled the launch for 10:30 pm EDT from Florida's Cape Canaveral Air Force Station, but then canceled it yet again, citing software issues. The launch is now delayed for about a week.

https://twitter.com/SpaceX/status/1129181397262843906

Elon Musk’s plans for SpaceX
This launch of 60 satellites, weighing 227 kg each, is actually the first step toward realizing Elon Musk’s plan for a huge Starlink constellation. He eventually aims to build up a mega-constellation of 12,000 satellites. If everything goes well, Falcon 9 will make a landing on the “Of Course I Still Love You” drone ship in the Atlantic Ocean. One hour and 20 minutes after the launch, the second stage will begin, when the Starlink satellites start self-deploying. On Wednesday, Musk, on a teleconference with reporters, revealed a bunch of details about his satellite internet service. Revealing the release mechanism behind the satellites, he said that the satellites do not each have their own release mechanism. Instead, the Falcon rocket’s upper stage will begin a very slow rotation, and each one will be released in turn with a different amount of rotational inertia. “It will almost seem like spreading a deck of cards on a table,” he adds. Once the deployment happens, the satellites will start powering up their ion drives and open their solar panels. They will then move to an altitude of 550 km under their own power.

This is a new approach to delivering commercial satellite internet. Other satellite internet services, such as Viasat, depend on a few big satellites in geostationary orbit over 22,000 miles (35,000 kilometers) above Earth, as opposed to 550 km in this case. Conventional satellite internet services can suffer from high levels of latency because the signals have to travel a huge distance between the satellite and Earth. Starlink aims to bring the satellites into lower orbit to minimize that distance, resulting in less lag time. The catch is that because these satellites are closer to the Earth, each one covers a smaller surface area, so a much greater number of them is required to cover the whole planet.

Though his plans look promising, Musk does not claim that everything will go well. In the teleconference he said, "This is very hard. There is a lot of new technology, so it's possible that some of these satellites may not work. There's a small possibility that all of these satellites will not work." He further added that six more launches of a similar payload will be required before the service even begins to offer “minor” coverage.

You will be able to see the launch webcast here or on the official website: https://www.youtube.com/watch?time_continue=8&v=rT366GiQkP0

Jeff Bezos unveils space mission: Blue Origin’s Lunar lander to colonize the moon
Elon Musk reveals big plans with Neuralink
DeepMind, Elon Musk, and others pledge not to build lethal AI
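The latency argument above can be checked with a back-of-the-envelope calculation (our own illustration, not from the article), comparing the speed-of-light round trip to a geostationary satellite against one in Starlink's 550 km shell:

```python
# Rough ground -> satellite -> ground trip times, ignoring routing, processing,
# and horizontal ground distance; the point is the ratio between the altitudes.
C = 299_792_458          # speed of light in vacuum, m/s
GEO_ALT_M = 35_786_000   # geostationary orbit altitude, ~22,000 miles
LEO_ALT_M = 550_000      # Starlink's initial 550 km shell

def round_trip_ms(altitude_m):
    """Round trip straight up and back at the speed of light, in milliseconds."""
    return 2 * altitude_m / C * 1000

geo_ms = round_trip_ms(GEO_ALT_M)   # roughly 240 ms before any other overhead
leo_ms = round_trip_ms(LEO_ALT_M)   # under 4 ms
```

Even before any protocol overhead, the geostationary path costs a couple of hundred milliseconds per round trip, which is why lowering the orbit cuts lag so dramatically.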

After RHEL 8 release, users awaiting the release of CentOS 8

Vincy Davis
10 May 2019
2 min read
The release of Red Hat Enterprise Linux 8 (RHEL 8) this week has everyone waiting for the CentOS 8 rebuild. The release of CentOS 8 will require a major overhaul of the installer, packages, packaging, and build systems so that they work with the newer OS. CentOS 7 was released in 2014, days after RHEL 7. So far the CentOS team has its new build system setup ready and is currently working on the artwork. But they still need to work through multiple series of build loops in order to get all of the CentOS 8.0 packages built in a compatible fashion. There will be an installer update followed by one or more release candidates. Only after all of these steps will CentOS 8 finally be available to its users.

The RHEL 8 release has made many users excited for the CentOS 8 build. A user on Reddit commented, “Thank you to the project maintainers; while RedHat does release the source code anyone who’s actually compiled from source knows that it’s never push-button easy.” Another user added, “Thank you Red Hat! You guys are amazing. The entire world has benefited from your work. I've been a happy Fedora user for many years, and I deeply appreciate how you've made my life better. Thank you for building an amazing set of distros, and thank you for pushing forward many of the huge projects that improve our lives such as Gnome and many more. Thank you for your commitment to open source and for living your values. You are heroes to me.”

So far, a release date has not been declared for CentOS 8, but a rough timeline has been shared. To read about the steps needed to make a CentOS rebuild, head over to the CentOS wiki page.

Red Hat releases OpenShift 4 with adaptability, Enterprise Kubernetes and more!
Soon, RHEL (Red Hat Enterprise Linux) won’t support KDE
Red Hat released RHEL 7.6


NeuVector announces new container risk reports for vulnerability exploits, external attacks, and more based on RedHat OpenShift integration

Savia Lobo
09 May 2019
3 min read
NeuVector, a firm that deals with container network security, yesterday announced new capabilities to help container security teams better assess the security posture of their deployed services in production. NeuVector now delivers an intelligent assessment of the risk of east-west attacks, ingress and egress connections, and damaging vulnerability exploits. An overall risk score summarizes all available risk factors and provides advice on how to lower the threat of attack, thus improving the score. The service connection risk score shows how likely it is for attackers to move laterally (east-west) to probe containers that are not segmented by the NeuVector firewall rules. The ingress/egress risk score shows the risk of external attacks or outbound connections commonly used for data stealing or connecting to C&C (command and control) servers.

In an email written to us, Gary Duan, CTO of NeuVector, said, “The NeuVector container security solution spans the entire pipeline – from build to ship to run. Because of this, we are able to present an overall analysis of the risk of attack for containers during run-time. But not only can we help assess and reduce risk, we can actually take automated actions such as blocking network attacks, quarantining suspicious containers, and capturing container and network forensics.”

With the Red Hat OpenShift integration, individual users can review the risk scores and security posture of the containers within their assigned projects. They are able to see the impact of their improvements to security configurations and protections as they lower risk scores and remove potential vulnerabilities.

Read Also: Red Hat Summit 2019 Highlights: Microsoft collaboration, Red Hat Enterprise Linux 8 (RHEL 8), IDC study predicts $10 trillion global revenue by end of 2019, and more!

The one-click RBAC integration requires no additional coding, scripting, or configuration, and adds to other OpenShift integration points for admission control, image streams, OVS networking, and service deployments. Fei Huang, CEO of NeuVector, said, “We are seeing many business-critical container deployments using Red Hat OpenShift. These customers turn to NeuVector to provide complete run-time protection for in-depth defense – with the combination of container process and file system monitoring, as well as the industry’s only true layer-7 container firewall.” To know more about this announcement, visit the official website.

Red Hat releases OpenShift 4 with adaptability, Enterprise Kubernetes and more!
Red Hat rebrands logo after 20 years; drops Shadowman
Red Hat announces CodeReady Workspaces, the first Kubernetes-Native IDE for easy collaboration among developers


Red Hat releases OpenShift 4 with adaptability, Enterprise Kubernetes and more!

Vincy Davis
09 May 2019
3 min read
The three-day Red Hat Summit 2019 has kicked off at the Boston Convention and Exhibition Center, United States. Yesterday, Red Hat announced a major release, Red Hat OpenShift 4, the next generation of its trusted enterprise Kubernetes platform, re-engineered to address the complex realities of container orchestration in production systems. Red Hat OpenShift 4 simplifies hybrid and multicloud deployments to help IT organizations utilize new applications, helping businesses thrive and gain momentum in an ever-increasing set of competitive markets.

Features of Red Hat OpenShift 4

Simplify and automate the cloud everywhere
Red Hat OpenShift 4 will help automate and operationalize the best practices for modern application platforms. It will operate as a unified cloud experience for the hybrid world and enable an automation-first approach, including:

Self-managing platform for hybrid cloud: This provides a cloud-like experience via automatic software updates and lifecycle management across the hybrid cloud, which enables greater security, auditability, repeatability, ease of management, and user experience.
Adaptability and heterogeneous support: It will be available in the coming months across major public cloud vendors including Alibaba, Amazon Web Services (AWS), Google Cloud, IBM Cloud, and Microsoft Azure, private cloud technologies like OpenStack, virtualization platforms, and bare-metal servers.
Streamlined full-stack installation: A streamlined full-stack installation, along with an automated process, makes it easier to get started with enterprise Kubernetes.
Simplified application deployments and lifecycle management: Red Hat brought stateful and complex applications to Kubernetes with Operators, which help in self-operating application maintenance, scaling, and failover.

Trusted enterprise Kubernetes
The Cloud Native Computing Foundation (CNCF) has certified the Red Hat OpenShift Container Platform in accordance with Kubernetes. It is built on the backbone of the world’s leading enterprise Linux platform, backed by the open source expertise, compatible ecosystem, and leadership of Red Hat. It also provides a codebase that helps secure key innovations from upstream communities.

Empowering developers to innovate
OpenShift 4 supports the evolving needs of application development as a consistent platform to optimize developer productivity with:

Self-service, automation, and application services that help developers extend their applications by on-demand provisioning of application services.
Red Hat CodeReady Workspaces, which enables developers to harness the power of containers and Kubernetes while working with the familiar Integrated Development Environment (IDE) tools that they use day to day.
OpenShift Service Mesh, which combines the Istio, Jaeger, and Kiali projects as a single capability that encodes communication logic for microservices-based application architectures.
Knative for building serverless applications, in Developer Preview, which makes Kubernetes an ideal platform for building, deploying, and managing serverless or function-as-a-service (FaaS) workloads.
KEDA (Kubernetes-based Event-Driven Autoscaling), a collaboration between Microsoft and Red Hat that supports deployment of serverless event-driven containers on Kubernetes, enabling Azure Functions in OpenShift, in Developer Preview. This allows accelerated development of event-driven, serverless functions across hybrid cloud and on-premises with Red Hat OpenShift.

Red Hat mentioned that OpenShift 4 will be available in the coming months. To read more details about OpenShift 4, head over to the official press release on Red Hat. To know about the other major announcements at Red Hat Summit 2019, like the Microsoft collaboration and Red Hat Enterprise Linux 8 (RHEL 8), visit our coverage of Red Hat Summit 2019 Highlights.

Red Hat rebrands logo after 20 years; drops Shadowman
Red Hat team announces updates to the Red Hat Certified Engineer (RHCE) program
Red Hat Satellite to drop MongoDB and will support only PostgreSQL backend
Red Hat Summit 2019 Highlights: Microsoft collaboration, Red Hat Enterprise Linux 8 (RHEL 8), IDC study predicts support for software worth $10 trillion

Savia Lobo
09 May 2019
11 min read
The 3-day Red Hat Summit 2019 kicked off yesterday at the Boston Convention and Exhibition Center in the United States. Since then there have been a lot of exciting announcements, including Red Hat’s collaboration with Microsoft, which Satya Nadella (Microsoft’s CEO) came over to announce in person. Red Hat also announced the release of Red Hat Enterprise Linux 8, an IDC study predicting that software running on Red Hat Enterprise Linux will contribute more than $10 trillion to global business revenues in 2019, and much more. Let us have a look at each of these announcements in brief.

Azure Red Hat OpenShift: A Red Hat and Microsoft collaboration

The latest Red Hat and Microsoft collaboration, Azure Red Hat OpenShift, must be important: Microsoft’s CEO himself came across from Seattle to present it. Red Hat had already brought OpenShift to Azure last year; this is the next step in its partnership with Microsoft. Azure Red Hat OpenShift combines Red Hat’s enterprise Kubernetes platform OpenShift (running on Red Hat Enterprise Linux (RHEL)) with Microsoft’s Azure cloud. With Azure Red Hat OpenShift, customers can combine Kubernetes-managed, containerized applications into Azure workflows. In particular, the two companies see this pairing as a road forward for hybrid-cloud computing.

Paul Cormier, President of Products and Technologies at Red Hat, said, “Azure Red Hat OpenShift provides a consistent Kubernetes foundation for enterprises to realize the benefits of this hybrid cloud model. This enables IT leaders to innovate with a platform that offers a common fabric for both app developers and operations.”

Some features of Azure Red Hat OpenShift include:
- Fully managed clusters with master, infrastructure and application nodes managed by Microsoft and Red Hat; plus, no VMs to operate and no patching required.
- Regulatory compliance provided through compliance certifications similar to other Azure services.
- Enhanced flexibility to more freely move applications from on-premises environments to the Azure public cloud via the consistent foundation of OpenShift.
- Greater speed to connect to Azure services from on-premises OpenShift deployments.
- Extended productivity with easier access to Azure public cloud services such as Azure Cosmos DB, Azure Machine Learning and Azure SQL DB for building the next generation of cloud-native enterprise applications.

According to the official press release, “Microsoft and Red Hat are also collaborating to bring customers containerized solutions with Red Hat Enterprise Linux 8 on Azure, Red Hat Ansible Engine 2.8 and Ansible Certified modules. In addition, the two companies are working to deliver SQL Server 2019 with Red Hat Enterprise Linux 8 support and performance enhancements.”

Red Hat Enterprise Linux 8 (RHEL 8) is now generally available

Red Hat Enterprise Linux 8 (RHEL 8) gives a consistent OS across public, private, and hybrid cloud environments. It also provides users with version choice, long life-cycle commitments, a robust ecosystem of certified hardware, software, and cloud partners, and now comes with built-in management and predictive analytics.

Features of RHEL 8

Support for the latest and emerging technologies

RHEL 8 is supported across different architectures and environments so that users have a consistent and stable OS experience. This helps them adapt to emerging tech trends such as machine learning, predictive analytics, Internet of Things (IoT), edge computing, and big data workloads. This is largely due to hardware innovations like GPUs, which can assist machine learning workloads. RHEL 8 is supported for deploying and managing GPU-accelerated NGC containers on Red Hat OpenShift. These AI containers deliver integrated software stacks and drivers to run GPU-optimized machine learning frameworks such as TensorFlow, Caffe2, PyTorch, MXNet, and others.
Also, NVIDIA’s DGX-1 and DGX-2 servers are RHEL certified and are designed to deliver powerful solutions for complex AI challenges.

Introduction of Application Streams

RHEL 8 introduces Application Streams, where fast-moving languages, frameworks and developer tools are updated frequently without impacting the core resources. This melds faster developer innovation with production stability in a single, enterprise-class operating system.

Abstracting complexities in granular sysadmin tasks with the RHEL web console

RHEL 8 abstracts away many of the deep complexities of granular sysadmin tasks behind the Red Hat Enterprise Linux web console. The console provides an intuitive, consistent graphical interface for managing and monitoring the Red Hat Enterprise Linux system, from the health of virtual machines to overall system performance. To further improve ease of use, RHEL supports in-place upgrades, providing a more streamlined, efficient and timely path for users to convert Red Hat Enterprise Linux 7 instances to Red Hat Enterprise Linux 8 systems.

Red Hat Enterprise Linux System Roles for managing and configuring Linux in production

Red Hat Enterprise Linux System Roles automate many of the more complex tasks around managing and configuring Linux in production. Powered by Red Hat Ansible Automation, System Roles are pre-configured Ansible modules that enable ready-made automated workflows for handling common, complex sysadmin tasks. This automation makes it easier for new systems administrators to adopt Linux protocols and helps eliminate human error as the cause of common configuration issues.

Support for the OpenSSL 1.1.1 and TLS 1.3 cryptographic standards

To enhance security, RHEL 8 supports the OpenSSL 1.1.1 and TLS 1.3 cryptographic standards. This provides access to the strongest, latest standards in cryptographic protection, which can be implemented system-wide via a single command, limiting the need for application-specific policies and tuning.
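As an illustrative aside (not from the press release): on any system, Python's standard-library ssl module can report whether the OpenSSL build it is linked against, such as the 1.1.1 series shipped with RHEL 8, exposes TLS 1.3:

```python
import ssl

# Report the OpenSSL version Python is linked against and whether
# it exposes TLS 1.3 (true for OpenSSL 1.1.1 and newer builds).
print(ssl.OPENSSL_VERSION)
print("TLS 1.3 available:", ssl.HAS_TLSv1_3)

# A default client context on such a build negotiates the highest
# TLS version both peers support, up to and including TLS 1.3.
ctx = ssl.create_default_context()
print("Maximum version:", ctx.maximum_version)
```

On a host with OpenSSL 1.1.1 or later this prints `TLS 1.3 available: True`; older builds report `False`, which is one quick way to see whether system-wide TLS 1.3 is even possible.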
Support for the Red Hat container toolkit

With cloud-native applications and services frequently driving digital transformation, RHEL 8 delivers full support for the Red Hat container toolkit. Based on open standards, the toolkit provides technologies for creating, running and sharing containerized applications. It helps streamline container development and eliminates the need for bulky, less secure container daemons.

Other additions in RHEL 8 include:
- Added value for specific hardware configurations and workloads, including the Arm and POWER architectures as well as real-time applications and SAP solutions.
- The foundation for Red Hat’s entire hybrid cloud portfolio, starting with Red Hat OpenShift 4 and the upcoming Red Hat OpenStack Platform 15.
- Red Hat Enterprise Linux CoreOS, a minimal-footprint operating system designed to host Red Hat OpenShift deployments, which is built on RHEL 8 and will be released soon.
- Broad support for Red Hat Enterprise Linux 8 as a guest operating system on Red Hat hybrid cloud infrastructure, including Red Hat OpenShift 4, Red Hat OpenStack Platform 15 and Red Hat Virtualization 4.3.

Red Hat Universal Base Image becomes generally available

Red Hat Universal Base Image, a userspace image derived from Red Hat Enterprise Linux for building Red Hat certified Linux containers, is now generally available. The Red Hat Universal Base Image is available to all developers with or without a Red Hat Enterprise Linux subscription, providing a more secure and reliable foundation for building enterprise-ready containerized applications. Applications built with the Universal Base Image can be run anywhere and will experience the benefits of the Red Hat Enterprise Linux life cycle and support from Red Hat when run on Red Hat Enterprise Linux or Red Hat OpenShift Container Platform.
Red Hat reveals results of a commissioned IDC study

Yesterday, at its summit, Red Hat also announced new research from IDC that examines the contributions of Red Hat Enterprise Linux to the global economy. The press release says: “According to the IDC study, commissioned by Red Hat, software and applications running on Red Hat Enterprise Linux are expected to contribute to more than $10 trillion worth of global business revenues in 2019, powering roughly 5% of the worldwide economy as a cross-industry technology foundation.”

According to IDC’s research, IT organizations using Red Hat Enterprise Linux can expect to save $6.7 billion in 2019 alone. IT executives responding to the global survey point to Red Hat Enterprise Linux:
- Reducing the annual cost of software by 52%
- Reducing the amount of time IT staff spend doing standard IT tasks by 25%
- Reducing costs associated with unplanned downtime by 5%

To know more about this announcement in detail, read the official press release.

Red Hat infrastructure migration solution helped CorpFlex reduce IT infrastructure complexity and costs by 87%

Using Red Hat’s infrastructure migration solution, CorpFlex, a managed service provider in Brazil, successfully reduced the complexity of its IT infrastructure and lowered costs by 87% through its savings on licensing fees. Delivered as a set of integrated technologies and services, including Red Hat Virtualization and Red Hat CloudForms, the Red Hat infrastructure migration solution has provided CorpFlex with the framework for a complete hybrid cloud solution while helping the business improve its bottom line. Red Hat Virtualization is built on Red Hat Enterprise Linux and the Kernel-based Virtual Machine (KVM) project. This gave CorpFlex the ability to virtualize resources, processes, and applications with a stable foundation for a cloud-native and containerized future.
Migrating to Red Hat Virtualization can provide more affordable virtualization, improved performance, enhanced collaboration, extended automation, and more. Diogo Santos, CTO of CorpFlex, said, “With Red Hat Virtualization, we’ve not only seen cost-saving in terms of licensing per virtual machine but we’ve also been able to enhance our own team’s performance through Red Hat’s extensive expertise and training.” To know more about this news in detail, head over to the official press release.

Deutsche Bank increases operational efficiency by using Red Hat’s open hybrid cloud technologies to power its ‘Fabric’ application platform

Fabric is a key component of Deutsche Bank’s digital transformation strategy and serves as an automated, self-service hybrid cloud platform that enables application teams to develop, deliver and scale applications faster and more efficiently. Red Hat Enterprise Linux has served as a core operating platform for the bank for a number of years by supporting a common foundation for workloads both on-premises and in the bank’s public cloud environment. For Fabric, the bank continues using Red Hat’s cloud-native stack, built on the backbone of the world’s leading enterprise Linux platform, with Red Hat OpenShift Container Platform. The bank deployed Red Hat OpenShift Container Platform on Microsoft Azure as part of a security-focused new hybrid cloud platform for building, hosting and managing banking applications. By deploying Red Hat OpenShift Container Platform, the industry’s most comprehensive enterprise Kubernetes platform, on Microsoft Azure, IT teams can take advantage of massive-scale cloud resources with a standard and consistent PaaS-first model.
According to the press release, “The bank has achieved its objective of increasing its operational efficiency, now running over 40% of its workloads on 5% of its total infrastructure, and has reduced the time it takes to move ideas from proof-of-concept to production from months to weeks.” To know more about this news in detail, head over to the official press release.

Lockheed Martin and Red Hat team up to accelerate upgrades to the U.S. Air Force’s F-22 Raptor fighter jets

Red Hat announced that Lockheed Martin is working with it to modernize the application development process used to bring new capabilities to the U.S. Air Force’s fleet of F-22 Raptor fighter jets. Lockheed Martin Aeronautics replaced the waterfall development process previously used for the F-22 Raptor with an agile methodology and DevSecOps practices that are more adaptive to the needs of the U.S. Air Force. Together, Lockheed Martin and Red Hat created an open architecture based on Red Hat OpenShift Container Platform that has enabled the F-22 team to accelerate application development and delivery.

“The Lockheed Martin F-22 Raptor is one of the world’s premier fighter jets, with a combination of stealth, speed, agility, and situational awareness. Lockheed Martin is working with the U.S. Air Force on innovative, agile new ways to deliver the Raptor’s critical capabilities to warfighters faster and more affordably,” the press release mentions.

Lockheed Martin chose Red Hat Open Innovation Labs to lead it through the agile transformation process, help it implement an open source architecture onboard the F-22, and simultaneously disentangle its web of embedded systems to create something more agile and adaptive to the needs of the U.S. Air Force.
Red Hat Open Innovation Labs’ dual-track approach to digital transformation combined enterprise IT infrastructure modernization with hands-on instruction, helping Lockheed’s team adopt agile development methodologies and DevSecOps practices. During the Open Innovation Labs engagement, a cross-functional team of five developers, two operators, and a product owner worked together to develop a new application for the F-22 on OpenShift. After seeing an early impact with the initial project team, within six months Lockheed Martin had scaled its OpenShift deployment and use of agile methodologies and DevSecOps practices to a 100-person F-22 development team. During a recent enablement session, the F-22 Raptor scrum team improved its ability to forecast future sprints by 40%, the press release states. To know more about this news in detail, head over to the official press release on Red Hat.

This story will receive updates until the Summit ends; we will update the post as soon as we get further updates or new announcements. Visit the official Red Hat Summit website to know more about the sessions conducted, the list of keynote speakers, and more.
The major DNS blunder at Microsoft Azure affects Office 365, OneDrive, Microsoft Teams, Xbox Live, and many more services

Amrata Joshi
03 May 2019
3 min read
It seems all is not well at Microsoft after yesterday’s outage, in which Microsoft’s Azure cloud went up and down globally because of a DNS configuration issue. The outage started at 1:20 pm yesterday and lasted for more than an hour, affecting Microsoft’s cloud services, including Office 365, OneDrive, Microsoft Teams, Xbox Live, and many others used by Microsoft’s commercial customers. Due to the networking connectivity errors in Microsoft Azure, even third-party apps and sites running on Microsoft’s cloud were affected. Around 2:30 pm, Microsoft started gradually recovering Azure regions one by one. Microsoft has yet to completely troubleshoot this major issue and has warned that it might take some time to get everyone back up and running.

This isn’t the first time a DNS issue has affected Azure. This year in January, a few customers’ databases went missing, which affected a number of Azure SQL databases that utilize custom Key Vault keys for Transparent Data Encryption (TDE).

https://twitter.com/AzureSupport/status/1124046510411460610

The Azure status page reads, “Customers may experience intermittent connectivity issues with Azure and other Microsoft services (including M365, Dynamics, DevOps, etc).” Microsoft engineers found that an incorrect name server delegation issue affected DNS resolution and network connectivity, which in turn affected compute, storage, app service, AAD, and SQL database resources. Even on the Microsoft 365 status page, Redmond’s techies have blamed an internal DNS configuration error for the downtime. During the migration of the DNS system to Azure DNS, some domains for Microsoft services were incorrectly updated. The good news is that no customer DNS records were impacted during this incident, and the availability of Azure DNS remained at 100% throughout; only records for Microsoft services were affected.
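As an illustration of how a broken delegation like this surfaces to client applications (the hostnames below are hypothetical, not Microsoft's actual records): resolution simply fails with an error, which robust client code treats as transient and retries with backoff:

```python
import socket
import time

def resolve_with_retry(host, attempts=3, delay=0.1):
    """Resolve a hostname, retrying on transient DNS failures."""
    for attempt in range(attempts):
        try:
            infos = socket.getaddrinfo(host, None)
            # Return the distinct addresses the resolver handed back.
            return sorted({info[4][0] for info in infos})
        except socket.gaierror:
            if attempt == attempts - 1:
                raise  # delegation still broken: give up and surface the error
            time.sleep(delay * (2 ** attempt))  # exponential backoff

print(resolve_with_retry("localhost"))  # e.g. ['127.0.0.1', '::1']
```

Retrying only papers over brief blips; during a misconfigured delegation lasting hours, as in this outage, every attempt fails and the error reaches the application, which is why so many dependent services went down together.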
According to Microsoft, the broken systems have been fixed, the three-hour outage has come to an end, and Azure’s network infrastructure will soon be back to normal.

https://twitter.com/MSFT365Status/status/1124063490740826133

Users who reported issues accessing the cloud service are complaining. A user commented on Hacker News, “The sev1 messages in my inbox currently begs to differ. there's no issue maybe with the dns at this very moment but the platform is thoroughly fucked up.” Users are also questioning the reliability of Azure. Another comment reads, “Man... Azure seems to be an order of magnitude worse than AWS and GCP when it comes to reliability.” To know more about the status of the situation, check out Microsoft’s post.

Microsoft brings PostgreSQL extension and SQL Notebooks functionality to Azure Data Studio
Microsoft @MWC (Mobile World Congress) Day 1: HoloLens 2, Azure-powered Kinect camera and more!
Microsoft Cloud services’ DNS outage results in deleting several Microsoft Azure database records
OpenSSH 8.0 released; addresses SCP vulnerability and new SSH additions

Fatema Patrawala
19 Apr 2019
2 min read
Theo de Raadt and the OpenBSD developers who maintain OpenSSH today released OpenSSH 8.0. The release contains an important security fix for a weakness in the scp(1) tool used for copying files to and from remote systems. Until now, when copying files from a remote system to a local directory, scp did not verify the filenames the server sent to the client. This allowed a hostile server to create or clobber unexpected local files with attacker-controlled data, regardless of which file(s) were actually requested from the remote server.

OpenSSH 8.0 adds client-side checking that the filenames sent from the server match the command-line request. Even with this client-side checking added to scp, the OpenSSH developers recommend against using it and suggest sftp, rsync, or other alternatives instead. “The scp protocol is outdated, inflexible and not readily fixed. We recommend the use of more modern protocols like sftp and rsync for file transfer instead,” the OpenSSH developers mention.

Also new in OpenSSH 8.0 are support for ECDSA keys in PKCS#11 tokens and an experimental quantum-computing-resistant key exchange method. The default RSA key size from ssh-keygen has been increased to 3072 bits, and more SSH utilities now support a “-v” flag for greater verbosity. The release also comes with a wide range of fixes throughout, including a number of portability fixes. More details on OpenSSH 8.0 are available on OpenSSH.com.

OpenSSH, now a part of the Windows Server 2019
OpenSSH 7.8 released!
OpenSSH 7.9 released
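The client-side defence described above can be sketched in a few lines. This is an illustrative Python model of the idea, not OpenSSH's actual C implementation: the client remembers the pattern it requested and rejects any server-supplied name that does not match it or that smuggles in path components:

```python
import fnmatch
import os

def is_acceptable(requested_pattern, received_name):
    """Mimic the spirit of scp's new client-side check: the name the
    server sends must be a plain filename (no path components that
    could clobber files elsewhere) and must match what was requested."""
    if os.path.basename(received_name) != received_name:
        return False  # reject names containing '/' such as '../.bashrc'
    return fnmatch.fnmatch(received_name, requested_pattern)

print(is_acceptable("*.txt", "notes.txt"))   # True: a legitimate transfer
print(is_acceptable("*.txt", "../.bashrc"))  # False: hostile path traversal
print(is_acceptable("*.txt", ".bashrc"))     # False: file never requested
```

The real fix in OpenSSH works on the same principle, which is also why the developers still prefer sftp and rsync: those protocols let the client drive the transfer instead of trusting names the server volunteers.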
Linkerd 2.3 introduces Zero-Trust Networking for Kubernetes

Savia Lobo
19 Apr 2019
2 min read
This week, the team at Linkerd announced an updated version of the service mesh, Linkerd 2.3. In this release, mTLS graduates from experimental status to a fully supported feature. Along with several important security primitives, the key change in Linkerd 2.3 is that it turns on authenticated, confidential communication between meshed services by default.

Linkerd, a Cloud Native Computing Foundation (CNCF) project, is a service mesh designed to give platform-wide observability, reliability, and security without requiring configuration or code changes.

The team at Linkerd says, “Securing the communication between Kubernetes services is an important step towards adopting zero-trust networking. In the zero-trust approach, we discard assumptions about a datacenter security perimeter and instead push requirements around authentication, authorization, and confidentiality ‘down’ to individual units. In Kubernetes terms, this means that services running on the cluster validate, authorize, and encrypt their own communication.”

Linkerd 2.3 addresses the challenges of adopting zero-trust networking as follows:
- The control plane ships with a certificate authority (called simply “identity”).
- The data plane proxies receive TLS certificates from this identity service, tied to the Kubernetes Service Account that the proxy belongs to, rotated every 24 hours.
- The data plane proxies automatically upgrade all communication between meshed services to authenticated, encrypted TLS connections using these certificates.
- Since the control plane also runs on the data plane, communication between control plane components is secured in the same way.

All of these changes are enabled by default and require no configuration. “This release represents a major step forward in Linkerd’s security roadmap.
In an upcoming blog post, Linkerd creator Oliver Gould will be detailing the design tradeoffs in this approach, as well as covering Linkerd’s upcoming roadmap around certificate chaining, TLS enforcement, identity beyond service accounts, and authorization,” the Linkerd blog mentions. These topics and all the other new features in 2.3 will be further discussed in the upcoming Linkerd Online Community Meeting on Wednesday, April 24, 2019 at 10am PT. To know more about Linkerd 2.3 in detail, visit its official website.

Pivotal and Heroku team up to create Cloud Native Buildpacks for Kubernetes
Platform9 open sources Klusterkit to simplify the deployment and operations of Kubernetes clusters
Kubernetes 1.14 releases with support for Windows nodes, Kustomize integration, and much more
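The 24-hour certificate rotation described above can be modelled in a few lines. This is a sketch of the scheduling logic only; the name needs_rotation and the structure are mine, not Linkerd's API:

```python
from datetime import datetime, timedelta

# Linkerd's identity service rotates each proxy's certificate daily.
ROTATION_PERIOD = timedelta(hours=24)

def needs_rotation(issued_at: datetime, now: datetime) -> bool:
    """A proxy certificate is due for replacement once its rotation
    period has elapsed since issuance."""
    return now - issued_at >= ROTATION_PERIOD

issued = datetime(2019, 4, 18, 9, 0)
print(needs_rotation(issued, datetime(2019, 4, 18, 21, 0)))  # False: 12h old
print(needs_rotation(issued, datetime(2019, 4, 19, 9, 0)))   # True: 24h old
```

Short-lived, automatically rotated credentials are what make the zero-trust posture practical: a leaked proxy certificate is only useful to an attacker until the next rotation.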
Google Cloud introduces Traffic Director Beta, a networking management tool for service mesh

Amrata Joshi
12 Apr 2019
2 min read
This week, the team at Google Cloud announced the beta version of Traffic Director, a networking management tool for service mesh, at Google Cloud Next. Traffic Director Beta will help network managers understand what’s happening in their service mesh. A service mesh is a network of microservices that make up an application, together with the interactions between them.

Features of Traffic Director Beta

Fully managed with an SLA: Traffic Director’s production-grade features come with a 99.99% SLA. Users don’t have to worry about deploying and managing the control plane.

Traffic management: With the help of Traffic Director, users can easily deploy everything from simple load balancing to advanced features like request routing and percentage-based traffic splitting.

Build resilient services: Users can keep their service up and running by deploying it across multiple regions as VMs or containers. Traffic Director can deliver global load balancing with automatic cross-region overflow and failover, and users can deploy their service instances in multiple regions while requiring only a single service IP.

Scaling: Traffic Director handles growth in deployments and scales for larger services and installations.

Traffic management for open service proxies: The tool provides a GCP (Google Cloud Platform)-managed traffic management control plane for xDSv2-compliant open service proxies like Envoy.

Compatible with VMs and containers: Users can deploy their Traffic Director-managed VM service and container instances with the help of managed instance groups and network endpoint groups.

Support for request routing policies: The tool supports routing features like traffic splitting and enables use cases like canarying, URL rewrites/redirects, fault injection, traffic mirroring, and advanced routing capabilities based on header values such as cookies.

To know more about this news, check out Google Cloud’s official page.
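Percentage-based traffic splitting of the kind listed above can be sketched as hash-based bucketing. This is an illustrative model only; Traffic Director itself pushes such policies to Envoy proxies via the xDS APIs rather than running code like this:

```python
import hashlib

def pick_backend(request_id: str, splits: dict) -> str:
    """Assign a request to a backend according to percentage weights.
    Hashing the request id keeps the choice stable across retries."""
    assert sum(splits.values()) == 100
    bucket = int(hashlib.sha256(request_id.encode()).hexdigest(), 16) % 100
    cumulative = 0
    for backend, percent in splits.items():
        cumulative += percent
        if bucket < cumulative:
            return backend
    return backend  # unreachable when the weights sum to 100

# A 95/5 canary split: about 5% of request ids land on the canary.
splits = {"stable": 95, "canary": 5}
counts = {"stable": 0, "canary": 0}
for i in range(1000):
    counts[pick_backend(f"req-{i}", splits)] += 1
print(counts)  # roughly {'stable': 950, 'canary': 50}
```

Shifting the weights gradually (95/5, then 80/20, then 0/100) is the canarying use case the announcement mentions; fault injection and traffic mirroring are configured the same declarative way in the real control plane.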
Google’s Cloud Healthcare API is now available in beta
Ian Goodfellow quits Google and joins Apple as a director of machine learning
Google Cloud Next’19 day 1: open-source partnerships, hybrid-cloud platform, Cloud Run, and more

Cloudflare adds Warp, a free VPN to 1.1.1.1 DNS app to improve internet performance and security

Natasha Mathur
02 Apr 2019
3 min read
Cloudflare announced yesterday that it is adding Warp, a free VPN, to the 1.1.1.1 DNS resolver app. The Cloudflare team states that it began planning to integrate the 1.1.1.1 app with Warp’s performance and security tech about two years ago. The 1.1.1.1 app was released in November last year for iOS and Android. The mobile app included VPN support that helped move mobile traffic towards the 1.1.1.1 DNS servers, thereby helping improve speeds. Now, with the Warp integration, the 1.1.1.1 app will speed up mobile data by using the Cloudflare network to resolve DNS queries faster.

With Warp, all unencrypted connections are encrypted automatically by default. Warp comes with end-to-end encryption and doesn’t require users to install a root certificate to observe encrypted Internet traffic. For cases when you browse the unencrypted Internet through Warp, Cloudflare’s network can cache and compress content to improve performance and decrease your data usage and mobile carrier bill. “In the 1.1.1.1 App, if users decide to enable Warp, instead of just DNS queries being secured and optimized, all Internet traffic is secured and optimized. In other words, Warp is the VPN for people who don’t know what V.P.N. stands for,” states the Cloudflare team.

Warp also offers strong performance and reliability. It is built around a UDP-based protocol that has been optimized for the mobile Internet, and it makes use of Cloudflare’s massive global network, allowing Warp to connect to servers within milliseconds. Cloudflare’s testing shows that Warp increases Internet performance, and reliability is significantly improved as well. Warp cannot eliminate mobile dead spots, but it is very efficient at recovering from loss. Warp doesn’t increase your battery usage, as it is built around WireGuard, a new and efficient VPN protocol.
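For a sense of what the resolver side of the app deals with on the wire, here is an illustrative construction of the DNS query packet a stub resolver would send to 1.1.1.1. It is built by hand for clarity; real clients use the OS resolver or, as 1.1.1.1 supports, encrypted transports like DNS-over-HTTPS:

```python
import struct

def build_dns_query(hostname: str, txid: int = 0x1234) -> bytes:
    """Build a minimal DNS query for an A record (RFC 1035 wire format)."""
    # Header: id, flags (0x0100 = recursion desired), 1 question, 0 answers,
    # 0 authority records, 0 additional records.
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # Question name: length-prefixed labels, terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode() for label in hostname.split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + question

packet = build_dns_query("example.com")
print(len(packet))  # 12-byte header + 17-byte question = 29 bytes
```

A client would then send this datagram to port 53 of the resolver (for example `sock.sendto(packet, ("1.1.1.1", 53))`); the point of Warp is that with the VPN enabled, this query and everything after it travels encrypted to Cloudflare’s edge instead of in the clear.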
The basic version of Warp has been added to the 1.1.1.1 app for free. However, Cloudflare will charge for Warp+, a premium version of Warp that will be even faster thanks to Argo technology. A low monthly fee will be charged for Warp+, varying by region. The 1.1.1.1 app with Warp will also keep all the privacy protections launched formerly with the 1.1.1.1 app.

The Cloudflare team states that the 1.1.1.1 app with Warp is still in the works, and although sign-ups for Warp aren’t open yet, Cloudflare has started a waiting list where you can “claim your place” by downloading the 1.1.1.1 app or by updating the existing app. Once the service is available, you’ll be notified. “Our whole team is proud that today, for the first time, we’ve extended the scope of that mission meaningfully to the billions of other people who use the Internet every day,” states the Cloudflare team. For more information, check out the official Warp blog post.

Cloudflare takes a step towards transparency by expanding its government warrant canaries
Cloudflare raises $150M with Franklin Templeton leading the latest round of funding
workers.dev will soon allow users to deploy their Cloudflare Workers to a subdomain of their choice