
Tech News - Networking

54 Articles

Microsoft, Adobe, and SAP share new details about the Open Data Initiative

Natasha Mathur | 28 Mar 2019 | 3 min read
Earlier this week at the Adobe Summit, the world's largest conference focused on Customer Experience Management, Microsoft, Adobe, and SAP announced that they're expanding their Open Data Initiative. The CEOs of Microsoft, Adobe, and SAP launched the Open Data Initiative at the Microsoft Ignite conference in 2018. The core idea behind the Open Data Initiative is to make it easier for customers to move data between the companies' services. Now, the three partners are looking to transform customer experiences with the help of real-time insights delivered via the cloud. They have also come out with a common approach and a set of resources to help customers create new connections across previously siloed data.

Read Also: Women win all open board director seats in Open Source Initiative 2019 board elections

"From the beginning, the ODI has been focused on enhancing interoperability between the applications and platforms of the three partners through a common data model with data stored in a customer-chosen data lake", reads the Microsoft announcement. This unified data lake offers customers their choice of development tools and applications to build and deploy services.

The companies have also come out with a new approach for publishing, enriching, and ingesting initial data feeds from Adobe Experience Platform into a customer's data lake. The approach will be activated via Adobe Experience Cloud, Microsoft Dynamics 365, Office 365, and SAP C/4HANA, and will in turn provide a new level of AI enrichment, helping firms serve their customers better.

Moreover, to further advance the development of the initiative, Adobe, Microsoft, and SAP also shared details about their plans to convene a Partner Advisory Council comprising over a dozen firms, including Accenture, Amadeus, Capgemini, Change Healthcare, and Cognizant. Microsoft states that these organizations believe there is a significant opportunity in the ODI to help them offer altogether new value to their customers. "We're excited about the initiative Adobe, Microsoft and SAP have taken in this area, and we see a lot of opportunity to contribute to the development of ODI", states Stephan Pretorius, CTO, WPP.

Microsoft introduces Pyright, a static type checker for the Python language written in TypeScript
Microsoft announces: Microsoft Defender ATP for Mac, a fully automated DNA data storage, and revived office assistant Clippy
Microsoft brings PostgreSQL extension and SQL Notebooks functionality to Azure Data Studio

Shodan Monitor, a new website that monitors the network and tracks what is connected to the internet

Amrata Joshi | 28 Mar 2019 | 2 min read
Just two days ago, the team at Shodan introduced Shodan Monitor, a new website that helps users set up network alerts and keep track of what's connected to the internet.

Features of Shodan Monitor

Networking gets easy with Shodan Monitor: users can explore what they have connected to the internet within their network range, and set up real-time notifications in case something unexpected shows up.

Scaling: the Shodan platform can handle networks of all sizes; even an ISP dealing with millions of customers can rely on it.

Security: Shodan Monitor helps monitor users' known networks and their devices across the internet. It helps in detecting leaks to the cloud and identifying phishing websites and compromised databases.

Shodan navigates users to important information: Shodan Monitor keeps dashboards precise and relevant by providing the most relevant information gathered by its web crawlers. The information shown on users' dashboards is filtered before being displayed.

Component details

API: Shodan Monitor provides a developer-friendly API and command-line interface, which expose all the features of the Shodan Monitor website.

Scanning: Shodan's global infrastructure helps users scan their networks in order to confirm that an issue has been fixed.

Batteries included: a Shodan API plan subscription gives users access to Shodan Monitor, the search engine, the API, and a wide range of websites.

A few users are happy about this news and excited to use it.
https://twitter.com/jcsecprof/status/1110866625253855235
According to a few others, the website still needs some work, as they are facing errors while working with it.
https://twitter.com/MarcelBilal/status/1110796413607313408
To know more about this news, check out Shodan Monitor.

Will putting limits on how much JavaScript is loaded by a website help prevent user resource abuse?
Grunt makes it easy to test and optimize your website. Here's how. [Tutorial]
FBI takes down some 'DDoS for hire' websites just before Christmas
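Coming back to the Shodan Monitor API mentioned above: the Monitor features sit on top of Shodan's developer API. Below is a minimal sketch using the official shodan Python package; the API key, alert name, and network range are placeholder assumptions, and the create_alert/alerts calls are based on the library's documented alert helpers rather than on anything in this article.

```python
import shodan

API_KEY = "YOUR_SHODAN_API_KEY"   # placeholder; requires a paid Shodan API plan
api = shodan.Shodan(API_KEY)

# Register a network range to watch, the API equivalent of adding it
# in the Shodan Monitor web interface.
alert = api.create_alert("home-office", "198.51.100.0/24")
print("created alert:", alert.get("id"))

# List all alerts configured for this account.
for a in api.alerts():
    print(a["name"], a["filters"].get("ip"))
```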

Announcing Wireshark 3.0.0

Melisha Dsouza | 01 Mar 2019 | 2 min read
Yesterday, Wireshark released version 3.0.0 with user interface improvements, bug fixes, the new Npcap Windows packet capturing driver, and more. Wireshark, the open source and cross-platform network protocol analysis software, is used by security analysts, experts, and developers for analysis, troubleshooting, development, and other security-related tasks, to capture and browse packet traffic on computer networks.

Features of Wireshark 3.0.0

The Windows .exe installers replace WinPcap with Npcap. Npcap supports loopback capture and 802.11 WiFi monitor mode capture, if supported by the NIC driver.
The "Map" button of the Endpoints dialog, which had been missing since Wireshark 2.6.0, has been restored in a modernized form.
The macOS package ships with Qt 5.12.1 and requires macOS 10.12 or later.
Initial support has been added for using PKCS #11 tokens for RSA decryption in TLS. Configure this at Preferences, RSA Keys.
The new WireGuard dissector has decryption support, which requires Libgcrypt 1.8.
You can now copy coloring rules, IO graphs, filter buttons, and protocol preference tables from other profiles using a button in the corresponding configuration dialogs.
Wireshark now supports the Swedish, Ukrainian, and Russian languages.
A new dfilter function string() has been added which allows the conversion of non-string fields to strings, so that string functions can be used on them.
The legacy (GTK+) user interface and the PortAudio library have been removed and are no longer supported.
Wireshark now requires Qt 5.2 or later and GLib 2.32 or later, with GnuTLS 3.2 or later as an optional dependency. Building Wireshark requires Python 3.4 or a newer version.
Data following a TCP ZeroWindowProbe is not passed to subdissectors and is marked as a retransmission.

Head over to Wireshark's official blog for the entire list of upgraded features in this release.

Using statistical tools in Wireshark for packet analysis [Tutorial]
Wireshark for analyzing issues and malicious emails in POP, IMAP, and SMTP [Tutorial]
Analyzing enterprise application behavior with Wireshark 2
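As a quick aside on the display filters mentioned above, Wireshark filters can also be driven from a script via the third-party pyshark wrapper around tshark (Wireshark's command-line tool). This is a minimal sketch, assuming pyshark and tshark are installed; the capture file name and the filter are placeholder values, not taken from the announcement, and IPv6-only traffic would expose an ipv6 layer instead of ip.

```python
import pyshark

# Open a saved capture and apply an ordinary Wireshark display filter
# (here: TLS ClientHello packets). "capture.pcapng" is a placeholder name.
cap = pyshark.FileCapture("capture.pcapng", display_filter="tls.handshake.type == 1")

for pkt in cap:
    # Each packet exposes its dissected layers; print a one-line summary.
    print(pkt.sniff_time, pkt.ip.src, "->", pkt.ip.dst, pkt.highest_layer)

cap.close()
```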

Internet Governance Project (IGP) survey on IPv6 adoption: initial reports

Prasad Ramesh | 07 Jan 2019 | 3 min read
The Internet Governance Project (IGP) did some research last year to understand the factors affecting network operators' decisions on IPv6 adoption. The study was done by Georgia Tech's IGP in collaboration with the Internet Corporation for Assigned Names and Numbers (ICANN). It was commissioned because both IGP and ICANN believed that the internet community needs a better understanding of the motives to upgrade from IPv4 to IPv6. The study, titled The Hidden Standards War: Economic Factors Affecting IPv6 Deployment, should be out this month.

IPv6 is a different version of the internet protocol with a larger address space. As IPv4 addresses are limited, at about 4 billion, they may get depleted in the future, so IPv6 adoption will have to happen at some point. IPv6 can hold 2^128 addresses, which is more than enough for the foreseeable future. IPv6 addresses are also longer than IPv4 addresses and are written in hexadecimal form, containing both numbers and letters.

Initial results of the study

The report by IGP is still in the draft stage, but they have shared some initial findings. It was found that IPv6 is not going to be disregarded completely after all, especially in mobile networks where both the hardware and the software support IPv6. Although IPv6 capability is often turned off due to lack of compatibility, it is still present.

The initial findings show that 79% of countries, a total of 169, did not have any noteworthy IPv6 deployment; the deployment percentage remained at or below 5% when the study was conducted last year. 26 countries (12%) showed increasing deployment. 18 countries (8%) showed a plateau, where IPv6 capability growth stopped between 8% and 59%.

Why the slow adoption?

They say it all comes down to the costs and benefits associated with upgrading. When the economic incentives were investigated, it was found that there is no real need for operators to actually upgrade their hardware. No one uses IPv6 exclusively, as all public and almost all private network service providers have to offer full compatibility. With this condition in place, operators have only three choices:

Stick to IPv4
Implement dual stack and provide both
Run IPv6 where compatible and run some form of tunneling for IPv4 compatibility

To move towards IPv6, dual stack is not economical, so the third option seems to be the only viable one. There are no benefits for operators in shifting to IPv6; even if one operator migrates, it puts no pressure on the others to shift. The network operators exclusively bear the maintenance costs, so a wealthier country can deploy more IPv6 networks.

Even though IPv6 work began in 1994, a big problem for adoption going forward is that IPv6 is incompatible with IPv4. IPv6 adoption can make sense if a network needs to grow, but most networks don't need to grow. Hence, instead of buying new hardware and software to run IPv6, operators would rather just buy new IPv4 addresses, as they are cheaper. The bottom line is that there is no considerable incentive to change protocols until the remaining IPv4 pool is near depletion.

IPv6 support to be automatically rolled out for most Netlify Application Delivery Network users
Oath's distributed network telemetry collector- 'Panoptes' is now Open source!
5G – Trick or Treat?
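The address-space comparison in the piece above can be made concrete with Python's standard ipaddress module; the addresses below are documentation prefixes chosen purely for illustration.

```python
import ipaddress

# IPv4 uses 32-bit addresses, IPv6 uses 128-bit addresses.
print(f"IPv4 address space: 2**32  = {2**32:,}")
print(f"IPv6 address space: 2**128 = {2**128:,}")

v4 = ipaddress.ip_address("192.0.2.42")                    # example IPv4 address
v6 = ipaddress.ip_address("2001:db8:85a3::8a2e:370:7334")  # example IPv6 address
print(v4.version, int(v4))        # 4, and the address as a 32-bit integer
print(v6.version, v6.exploded)    # 6, and the fully written-out hexadecimal form

# A single IPv6 /64 subnet already holds 2**64 addresses,
# i.e. the whole IPv4 address space squared.
net = ipaddress.ip_network("2001:db8::/64")
print(f"{net.num_addresses:,} addresses in one /64")
```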

Confluent, an Apache Kafka service provider, adopts a new license to fight against cloud service providers

Natasha Mathur | 26 Dec 2018 | 4 min read
A common trend these days is software firms changing their software licenses to prevent cloud service providers from exploiting their open source code. One such firm to have joined this move is Confluent, an Apache Kafka service provider, which announced its new Confluent Community License two weeks ago. The new license allows users to download, modify, and redistribute the code, but does not let them offer the software as a service (SaaS).

"What this means is that, for example, you can use KSQL however you see fit as an ingredient in your own products or services, whether those products are delivered as software or as SaaS, but you cannot create a KSQL-as-a-service offering. We'll still be doing all development out in the open and accepting pull requests and feature suggestions", says Jay Kreps, CEO, Confluent.

The new license, however, has no effect on Apache Kafka, which remains under the Apache 2.0 license, and Confluent will continue to contribute to it.

Kreps pointed out that leading cloud providers such as Amazon, Microsoft, Alibaba, and Google all differ in how they approach open source today. Some of these major cloud providers partner with open source companies, offering hosted versions of their SaaS. Others take the open source code, implement it into their cloud offering, and then push all of their investment into differentiated proprietary offerings. For instance, Michael Howard, CEO of MariaDB Corp., called Amazon's tactics "the worst behavior" he has seen in the software industry, enabled by a loophole in its licensing. Howard also said that the cloud giant is "strip mining by exploiting the work of a community of developers who work for free", as first reported by Silicon Angle.

One response Kreps mentions would be for open source software firms to focus on building more proprietary software and to "pull back" from their open source investments. "But we think the right way to build fundamental infrastructure layers is with open code. As workloads move to the cloud we need a mechanism for preserving that freedom while also enabling a cycle of investment, and this is our motivation for the licensing change", says Kreps.

Confluent's license change follows MongoDB, which switched to the Server Side Public License (SSPL) this October in order to prevent major cloud providers from misusing its open source code. MongoDB's decision was sparked by the fact that cloud vendors which are not responsible for the development of a piece of software "capture all the value" from it without contributing much back to the community. Another reason was that many cloud providers had started to take MongoDB's open source code in order to offer a hosted commercial version of its database without following the open source rules. The license change helps create "an incredible opportunity to foster a new wave of great open source server-side software", said Eliot Horowitz, CTO and co-founder, MongoDB. Horowitz also said that he hopes the change will "protect open source innovation".

MongoDB followed the path of the "Commons Clause" license that was first adopted by Redis Labs. Commons Clause started out as an initiative by a group of top software firms to protect their rights; it is added to existing open source software licenses to create a new, combined license that puts a limit on the commercial sale of the software.

All of these efforts are aimed at making sure that open source communities do not get taken advantage of by the leading cloud providers. As Kreps points out, "We think this is a positive change and one that can help ensure small open source communities aren't acting as free and unsustainable R&D (Research & development) for tech giants that put sustaining resources only into their own differentiated proprietary offerings".

Neo4j Enterprise Edition is now available under a commercial license
GitHub now supports the GNU General Public License (GPL) Cooperation Commitment as a way of promoting effective software regulation
Free Software Foundation updates their licensing materials, adds Commons Clause and Fraunhofer FDK AAC license

An update on Bcachefs, the "next generation Linux filesystem"

Melisha Dsouza | 03 Dec 2018 | 3 min read
Kent Overstreet announced Bcachefs as "the COW filesystem for Linux that won't eat your data" in 2015. Since then the filesystem has undergone numerous updates and patches to get to where it is today. On the 1st of December, Overstreet published an update on the problems and improvements that are currently being worked on in Bcachefs.

Status update on Bcachefs

Since the last update, Overstreet has focused on two major areas of improvement: atomicity of filesystem operations and persistence of allocation information (per-bucket sector counts). The filesystem operations that had anything to do with i_nlink were not atomic, so on startup the system would have to scan and recalculate i_nlink and also delete inodes that were no longer referenced. Also, because allocation information was not persisted, on startup the system would have to recalculate all of the disk space accounting.

The team has now been able to make everything fully atomic except for fallocate/fcollapse/etc. After an unclean shutdown, the only thing left to do is scan the inodes btree for inodes that have been deleted.

Erasure coding is about 80% done now in Bcachefs. Overstreet is now focused on persistent allocation information, which will then allow him to work on reflink, which in turn will be useful to the company that is funding Bcachefs development. This is because the reflinked extent refcounts will be much too big to keep in memory and will therefore have to be kept in a btree and updated whenever doing extent updates; the infrastructure needed to make that happen also depends on making disk space accounting persistent.

After all of these updates, he claims Bcachefs will have fast mounts (including after an unclean shutdown). He is also working on improvements to disk space accounting for multi-device filesystems, which will lead up to fast mounts after clean shutdowns. To know whether they can safely mount in degraded mode, users will have to store a list of all the combinations of disks that have data replicated across them (or are in an erasure coded stripe), without any kind of fixed layout like regular RAID has.

Why should you choose Bcachefs?

Overstreet states that Bcachefs is stable, fast, and has a small and clean code base, along with the necessary features to be a modern Linux filesystem. It has a long list of features, completed or in progress:

Copy on write (COW), like zfs or btrfs
Full data and metadata checksumming
Caching
Compression
Encryption
Snapshots
Scalability

Bcachefs prioritizes robustness and reliability

According to Kent, Bcachefs ensures that users won't lose their data. Bcachefs is an extension of bcache, which was designed as a caching layer to improve block I/O performance. It uses a solid-state drive as a cache for a (slower, larger) underlying storage device. Mainline bcache is not a typical filesystem but looks like a special kind of block device. It handles the movement of blocks of data between fast and slow storage, ensuring that the most frequently used data is kept on the faster device. bcache manages data in a way that yields high performance while ensuring that no data is ever lost, even when an unclean shutdown takes place.

You can head over to LKML.org for more information on this announcement.

Google Project Zero discovers a cache invalidation bug in Linux memory management, Ubuntu and Debian remain vulnerable
Linux 4.20 kernel slower than its previous stable releases, Spectre flaw to be blamed, according to Phoronix
The Ceph Foundation has been launched by the Linux Foundation to support the open source storage project

IPv6 support to be automatically rolled out for most Netlify Application Delivery Network users

Melisha Dsouza | 29 Nov 2018 | 3 min read
Earlier this week, Netlify announced in a blog post that the company has begun the rollout of IPv6 support on the Netlify Application Delivery Network. Netlify has adopted IPv6 support as a solution to the IPv4 address capacity problem. This news comes right after the announcement that Netlify raised $30 million for a new 'Application Delivery Network', aiming to replace servers and infrastructure management.

Netlify provides developers with an all-in-one workflow to build, deploy, and manage modern web projects. Their 'Application Delivery Network' is a new platform for the web that will assist web developers in building newer web-based applications. There is no need for developers to set up or manage servers, as all content and applications are created directly on a global network. It removes the dependency on origin infrastructure, allowing companies to host entire applications globally using APIs and microservices.

IP addresses are assigned to every server connected to the internet. Netlify explains that the traditionally used IPv4 address pool is getting smaller with the continuous expansion of the internet. This is where IPv6 steps in. IPv6 defines an IP address as a 128-bit entity instead of the integer-based IPv4 addresses. For example, IPv4 defines an address as 167.99.129.42, whereas an IPv6 address would instead look like 2001:0db8:85a3:0000:0000:8a2e:0370:7334. Even though the IPv6 format is harder to remember, it creates vastly more possible addresses to help support the rapid growth of the internet. In addition to more efficient routing and packet processing, IPv6 also accounts for better security compared to IPv4, because IPSec, which provides confidentiality, authentication, and data integrity, is baked into IPv6.

According to the blog post, users that are serving their sites on a subdomain of netlify.com or using custom domains registered with an external domain registrar will automatically begin using IPv6 on their ADN. Customers using Netlify for DNS management can go to the Domains section of the dashboard and enable IPv6 for each of their domains. Customers with a complex or bespoke DNS configuration, or enterprise customers using Netlify's Enterprise ADN infrastructure, are advised to contact Netlify's support team or their account manager to ensure that their specific configuration is migrated to IPv6 appropriately.

Netlify's users have received this news well:
https://twitter.com/sethvargo/status/1067152518638116864
Hacker News is also flooded with positive comments for Netlify. Netlify has started off on the right foot; it will be interesting to see what customers think after implementing IPv6 for their Netlify ADN.

Head over to Netlify's blog for more insights on this news.

Cloudflare's 1.1.1.1 DNS service is now available as a mobile app for iOS and Android
NGINX Hybrid Application Delivery Controller Platform improves API management, manages microservices and much more!
libp2p: the modular P2P network stack by IPFS for better decentralized computing
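As a practical follow-up to the rollout described above, a quick way to check whether a given site already resolves over IPv6 is to ask for AAAA records with Python's standard socket module; the hostname below is a placeholder, not one taken from the article.

```python
import socket

host = "example.netlify.com"   # placeholder hostname for illustration

try:
    # Ask the resolver for IPv6 (AAAA) results only.
    infos = socket.getaddrinfo(host, 443, family=socket.AF_INET6,
                               proto=socket.IPPROTO_TCP)
except socket.gaierror:
    print(f"{host}: no IPv6 (AAAA) records found")
else:
    for _family, _type, _proto, _canon, sockaddr in infos:
        print(f"{host} is reachable over IPv6 at {sockaddr[0]}")
```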

Amazon consumer business migrated to Redshift with plans to move 88% of its Oracle DBs to Aurora and DynamoDB by year end

Natasha Mathur | 12 Nov 2018 | 3 min read
Amazon is getting quite close to moving away from Oracle. Andy Jassy, CEO of Amazon Web Services, tweeted last week about turning off the Oracle data warehouse and moving to Redshift. Jassy's tweet seems to be a response to the constant taunts and punchlines from Oracle's CTO, Larry Ellison.

https://twitter.com/ajassy/status/1060979175098437632

The news about Amazon moving away from Oracle first surfaced in January this year. It was followed by a CNBC report this August about Amazon's plans to move off Oracle by 2020. As per the report, Amazon had already started to migrate most of its infrastructure internally to Amazon Web Services. The process of moving off Oracle, however, has been harder than expected. Amazon faced an outage in one of its biggest warehouses on Prime Day (one of Amazon's biggest sales days of the year) last month, as reported by CNBC. The major cause of the outage was Amazon's migration from Oracle's database to its own technology, Aurora PostgreSQL.

Moreover, Amazon and Oracle have had regular word battles in recent years over the performance of their database software and cloud tools. For instance, Larry Ellison, CTO, Oracle, slammed Amazon, saying, "Let me tell you an interesting fact: Amazon does not use [Amazon web services] to run their business. Amazon runs their entire business on top of Oracle, on top of the Oracle database. They have been unable to migrate to AWS because it's not good enough." Ellison also slammed Amazon during the Oracle OpenWorld conference last year, saying that "Oracle's services are just plain better than AWS" and that Amazon is "one of the biggest Oracle users on Planet Earth".

"Amazon's Oracle data warehouse was one of the largest (if not THE largest) in the world. RIP. We have moved on to newer, faster, more reliable, more agile, more versatile technology at more lower cost and higher scale. #AWS Redshift FTW." tweeted Werner Vogels, CTO, Amazon.

Public reaction to this decision has been largely positive, with people supporting Amazon's move away from Oracle:

https://twitter.com/eonnen/status/1061082419057442816
https://twitter.com/adamuaa/status/1061094314909057024
https://twitter.com/nayar_amit/status/1061154161125773312

Oracle makes its Blockchain cloud service generally available
Integrate applications with AWS services: Amazon DynamoDB & Amazon Kinesis [Tutorial]
AWS Elastic Load Balancing: support added for Redirects and Fixed Responses in Application Load Balancer

Soon, RHEL (Red Hat Enterprise Linux) won’t support KDE

Amrata Joshi | 05 Nov 2018 | 2 min read
Late last week, Red Hat announced that RHEL has deprecated KDE (K Desktop Environment) support. KDE Plasma Workspaces (KDE) is an alternative to the default GNOME desktop environment for RHEL. A future major release of Red Hat Enterprise Linux will no longer support using KDE instead of the default GNOME desktop environment.

In the '90s, the Red Hat team was firmly against KDE and put a great deal of effort into GNOME, since Qt was under a not-quite-free license at the time.

Steve Almy, principal product manager of Red Hat Enterprise Linux, told the Register, "Based on trends in the Red Hat Enterprise Linux customer base, there is overwhelming interest in desktop technologies such as Gnome and Wayland, while interest in KDE has been waning in our installed base."

Red Hat heavily backs the Linux desktop environment GNOME, which is developed as an independent open-source project and is used by a number of other distros. Although Red Hat is signaling the end of KDE support in RHEL, KDE is very much its own independent project that will continue on its own, with or without support from future RHEL editions.

Almy said, "While Red Hat made the deprecation note in the RHEL 7.6 notes, KDE has quite a few years to go in RHEL's roadmap." The deprecation is simply a warning that certain functionality may be removed or replaced in a future RHEL release with functionality that is similar or more advanced. KDE, as well as everything listed in Chapter 51 of the Red Hat Enterprise Linux 7.6 release notes, will continue to be supported for the life of Red Hat Enterprise Linux 7.

Read more about this news on the official website of Red Hat.

Red Hat released RHEL 7.6
Red Hat Enterprise Linux 7.5 (RHEL 7.5) now generally available
Red Hat Enterprise Linux 7.6 Beta released with focus on security, cloud, and automation

An early access to Sailfish 3 is here!

Savia Lobo | 02 Nov 2018 | 3 min read
This week, Sailfish OS announced the early release of its third-generation software, Sailfish 3, and has made it available to all Sailfish users who had opted in for early access updates. Sami Pienimäki, CEO & Co-founder of Jolla Ltd, said in his release post, "we are expanding the Sailfish community program, "Sailfish X", with a few of key additions next week: on November 8 we release the software for various Sony Xperia XA2 models."

Why the name?

Sailfish 3.0.0 is named after the legendary National Park Lemmenjoki in Northern Lapland. "We've always aimed at respecting our Finnish roots in naming our software versions: previously we've covered lakes and rivers, and now we're set to explore our beautiful national parks."

Sailfish 3 will be rolled out in phases, so many features will be deployed across several software releases. The first phase, Sailfish 3.0.0, has been available as an early access version since October 31st. The customer release is expected to roll out in the coming weeks, and the next release, 3.0.1, is expected in early December.

Security and corporate features of Sailfish 3

Sailfish 3 has a deeper level of security, which makes it a go-to option for various corporate and organizational solutions and other use cases. Some of the new and enhanced features in Sailfish 3 include Mobile Device Management (MDM), fully integrated VPN solutions, enterprise WiFi, data encryption, and better and faster performance. It also offers full support for regional infrastructures, including steady releases & OS upgrades, local hosting, training, and a flexible feature set to support specific customer needs.

User experience highlights of Sailfish 3.0.0

New Top Menu: quick settings and shortcuts can now be accessed anywhere
Light ambiances: a new fresh look for Sailfish OS
Data encryption: memory card encryption is now available; device file system encryption is coming in later releases
New keyboard gestures: quickly change keyboard layouts with one swipe
USB On-The-Go storage: connect to different kinds of external storage devices
Camera improvements: the new lock screen camera roll allows you to review the photos you just took without unlocking the device

Further, thanks to a rewritten way of launching apps and loading views, UI performance is much better in Sailfish 3. Sami mentions, "You can start to enjoy the faster Sailfish already now with the 3.0.0 release and the upcoming major Qt upgrade will further improve the responsiveness & performance resulting to 50% better overall performance."

To know more about Sailfish 3 in detail, visit its official website.

GitHub now allows issue transfer between repositories; a public beta version
Introducing Howler.js, a Javascript audio library with full cross-browser support
BabyAI: A research platform for grounded language learning with human in the loop, by Yoshua Bengio et al

Red Hat released RHEL 7.6

Amrata Joshi | 01 Nov 2018 | 4 min read
On Tuesday, Red Hat announced the general availability of RHEL (Red Hat Enterprise Linux) 7.6. RHEL 7.6 is a consistent hybrid cloud foundation for enterprise IT, built on open source innovation and designed to enable organizations to keep pace with emerging cloud-native technologies. It also supports IT operations across enterprise IT's four footprints. The beta version of RHEL 7.6 was released just three months ago.

Red Hat Enterprise Linux 7.6 addresses a range of IT challenges, emphasizing security and compliance, management and automation, and Linux container innovations.

Features in RHEL 7.6

RHEL 7.6 addresses security concerns

IT security has always been a key challenge for many IT departments, and it does not get easier in complex hybrid and multi-cloud environments. Red Hat Enterprise Linux 7.6 responds by introducing support for Trusted Platform Module (TPM) 2.0 hardware modules as part of Network Bound Disk Encryption (NBDE). NBDE provides security across networked environments, whereas TPM works on-premise to add an additional layer of security, tying disks to specific physical systems. Together, these two layers of security for hybrid cloud operations help keep information on disks physically more secure.

RHEL 7.6 also makes it easier to manage firewalls, with improvements to nftables, a packet filtering framework, and it simplifies the configuration of counter-intrusion measures. Updated cryptographic algorithms for RSA and elliptic-curve cryptography (ECC) are enabled by default, helping organizations that handle sensitive information keep pace with Federal Information Processing Standards (FIPS) compliance and standards bodies like the National Institute of Standards and Technology (NIST).

Management and automation get better

Red Hat Enterprise Linux 7.6 makes Linux adoption easier by bringing enhancements to the Red Hat Enterprise Linux Web Console, which provides a graphical overview of Red Hat system health and status. RHEL 7.6 makes it easier to find updates on the system summary page, and it provides automated configuration of single sign-on for identity management as well as a firewall control interface, making life easier for security administrators.

RHEL 7.6 also ships with the extended Berkeley Packet Filter (eBPF), which provides a safer and more efficient mechanism for monitoring activity within the kernel. It will soon help enable additional performance monitoring and network tracing tools.

Red Hat Enterprise Linux 7.6 additionally provides support for Red Hat Enterprise Linux System Roles, a collection of Ansible modules designed to provide a consistent way to automate and remotely manage Red Hat Enterprise Linux deployments. Each module provides a ready-made automated workflow for handling common and complex tasks involved in Linux environments. This automation helps remove the possibility of human error from these tasks, which in turn frees up IT teams to focus more on adding business value.

Red Hat's lightweight container toolkit

Red Hat Enterprise Linux 7.6 supports the rise of cloud-native technologies by introducing Red Hat's lightweight container toolkit, which comprises CRI-O, Buildah, Skopeo, and now Podman. Each of these tools is built on fully open source and community-backed technologies, based on open standards like the Open Container Initiative (OCI) format.

Podman complements Buildah and Skopeo while sharing the same foundations as CRI-O. It enables users to run containers and groups of containers (pods) from a familiar command-line interface, eliminating the need for a daemon. This, in turn, helps reduce the complexity of container creation while making it easier for developers to build containers on workstations, in continuous integration/continuous development (CI/CD) systems, and within high-performance computing (HPC) or big data scheduling systems.

For more information on this release, check out Red Hat's official website.

Red Hat Enterprise Linux 7.6 Beta released with focus on security, cloud, and automation
Red Hat Enterprise Linux 7.5 (RHEL 7.5) now generally available
4 reasons IBM bought Red Hat for $34 billion
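To give a feel for the eBPF-based kernel monitoring mentioned above, here is a minimal sketch using the BCC toolkit's Python bindings (the bcc package, installed separately and run as root); the probe attached here is a generic illustration and is not taken from the RHEL release notes, and kernel symbol names can vary between kernel versions.

```python
from bcc import BPF

# A tiny eBPF program, compiled and loaded into the kernel by BCC.
# BCC auto-attaches functions named kprobe__<syscall> as kprobes, so this
# logs a line every time a process is created via the clone syscall.
program = r"""
int kprobe__sys_clone(void *ctx) {
    bpf_trace_printk("sys_clone called\n");
    return 0;
}
"""

b = BPF(text=program)
print("Tracing sys_clone... Ctrl-C to stop")
b.trace_print()   # stream formatted lines from the kernel trace pipe
```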

Center for Democracy and Technology formulates ‘Signals of Trustworthy VPNs’ to improve transparency among VPN services

Bhagyashree R | 22 Oct 2018 | 3 min read
Earlier this year in May, the Center for Democracy and Technology (CDT) held a discussion at RightsCon in Toronto with popular VPN service providers: IVPN, Mullvad, TunnelBear, VyprVPN, and ExpressVPN. Together they formulated a list of eight questions, called the Signals of Trustworthy VPNs, that describe the basic commitments VPNs can make to signal their trustworthiness and positive reputation. CDT is a Washington, D.C.-based non-profit organization which aims to strengthen individual rights and freedom by defining, promoting, and influencing technology policy and the architecture of the internet.

What was the goal behind the discussion between CDT and VPN providers?

The goal of these questions is to improve transparency among VPN services and to help resources like That One Privacy Site and privacytools.io provide better comparisons between different services. Additionally, it gives users a way to easily compare the privacy, security, and data use practices of VPNs. The initiative will also encourage VPNs to deploy measures that meaningfully improve the privacy and security of individuals using their services.

The questions try to provide users clarity in three areas:

Corporate accountability and business models
Privacy practices
Data security protocols and protections

You can find the entire list of questions on CDT's official website.

What are the key recommendations by CDT for VPN providers?

The following are a few of the best practices for VPN providers to build trust with their users:

VPN providers should share information about the company's leadership team, which helps users learn more about the reputation of who they are trusting with their online activities.
Any VPN provider should be able to share its place of legal incorporation and the laws it operates under.
Providers should give detailed information about their business model, specifically whether subscriptions are the sole source of a service's revenue.
They should clearly define what exactly they mean by "logging". This should cover both connection and activity logging practices, as well as whether the VPN provider aggregates this information. Users should be aware of the approximate retention periods for any log data.
VPN providers should put in place procedures for automatically deleting any retained information after an appropriate period of time. This period should be disclosed and its length justified.
VPN providers can also implement bug bounty programs, encouraging third parties to identify and report vulnerabilities they might come across when using the VPN service.
Independent security audits should be conducted to identify technical vulnerabilities.

To know more about CDT's recommendations and the eight questions, check out their official website.

Apple bans Facebook's VPN app from the App Store for violating its data collection rules
What you need to know about VPNFilter Malware Attack
IBM launches Industry's first 'Cybersecurity Operations Center on Wheels' for on-demand cybersecurity support

Sway 1.0 beta.1 released with the addition of third-party panels, auto-locking, and more

Savia Lobo | 22 Oct 2018 | 4 min read
Last week, Sway, the i3-compatible Wayland compositor, released version 1.0-beta.1. The community says that Sway 1.0-beta.1 is 100% compatible with the i3 X11 window manager: it works with existing i3 configurations and supports most of i3's features. The community also maintains the wlroots project to provide a modular basis for Sway and other Wayland compositors to build upon, and has published standards for interoperable Wayland desktops. This version includes many input and output features, along with auto-locking, idle management, and more.

New features in Sway 1.0-beta.1

Output features

The 1.0-beta.1 release includes new output features. Users can get the names of their outputs for use in the config file with swaymsg -t get_outputs. Some examples of how outputs can be configured:

To rotate a display 90 degrees: output DP-1 transform 90
To enable Sway's improved HiDPI support: output DP-1 scale 2
To enable fractional scaling: output DP-1 scale 1.5

Users can now run Sway on multiple GPUs. Sway picks a primary GPU automatically, but users can override this by specifying a list of card names at startup with WLR_DRM_DEVICES=card0:card1:... Other features include support for daisy-chained DisplayPort configurations and improved Redshift support. Users can now also drag windows between outputs with the mouse.

Input features

Users can get a list of their input identifiers with swaymsg -t get_inputs. Users can now have multiple mice with multiple cursors, and can link keyboards, mice, drawing tablets, and touchscreens to each other arbitrarily. For example, a user can have a Dvorak keyboard for normal use and a second QWERTY keyboard for a pair-programming session, and the coworker can focus and type into a separate window from the one the user is working in.

Addition of third-party panels, lockscreens, and more

This version includes the new layer-shell protocol, which enables more third-party software to run on Sway. One of the main goals of Sway 1.0 and wlroots is to break down the boundaries between Wayland compositors and encourage standard, interoperable protocols. The community has also added two new protocols for capturing the screen: screencopy and dmabuf-export. These are useful for screenshots and real-time screen capture, for example to live stream on Twitch.

DPMS, auto-locking, and idle management

The new swayidle tool adds support for DPMS, auto-locking, and idle management, and it even works on other Wayland compositors. To configure it, start the daemon in the sway config file:

exec swayidle \
    timeout 300 'swaylock -c 000000' \
    timeout 600 'swaymsg "output * dpms off"' \
        resume 'swaymsg "output * dpms on"' \
    before-sleep 'swaylock -c 000000'

This configuration locks the screen after 300 seconds of inactivity. After 600 seconds, it turns off all outputs (and turns them back on when the user wiggles the mouse). It also locks the screen before the system goes to sleep. None of this happens, however, while a video is playing in a supported media player (mpv, for example).

Other features of Sway 1.0-beta.1

The additional features in the Sway 1.0 beta include:

swaylock has a config file
Drag and drop is supported
Rich content (like images) is synced between the Wayland and X11 clipboards
The layout is updated atomically, meaning that a user will never see an in-progress frame when resizing windows
Primary selection is implemented and synced with X11

To know more about Sway 1.0-beta.1 in detail, see the release notes.

Chrome 70 releases with support for Desktop Progressive Web Apps on Windows and Linux
Announcing the early release of Travis CI on Windows
Windows 10 IoT Core: What you need to know

Opus 1.3, a popular FOSS audio codec with machine learning and VR support, is now generally available

Amrata Joshi | 22 Oct 2018 | 3 min read
Last week, the team at Opus announced the general availability of Opus Audio Codec version 1.3. Opus 1.3 comes with a new set of features, including a recurrent-neural-network-based speech/music detector, ambisonics support, lower memory use, and compatibility with RFC 6716. Opus is an open and royalty-free audio codec which is highly useful for all audio applications, from music streaming and storage to high-quality video conferencing and VoIP. Six years after its standardization by the IETF, Opus is included in all major browsers and mobile operating systems, is used for a wide range of applications, and is the default WebRTC codec.

New features in Opus Audio Codec 1.3

Reliable speech/music detector powered by machine learning

Opus 1.3 ships a new speech/music detector. Because it is based on a recurrent neural network, it is both simpler and more reliable than the detector used in version 1.1. The speech/music detector in earlier versions was based on a simple (non-recurrent) neural network, followed by an HMM-based layer to combine the neural network results over time. Opus 1.3 instead uses a recurrent unit, the Gated Recurrent Unit (GRU). The GRU does not just learn how to use its input and memory at each step; it also learns how and when to update its memory, which helps it remember information for a longer period of time.

Mixed content encoding gets better

Mixed content encoding, especially at bit rates below 48 kb/s, improves as the new detector boosts Opus's performance. Speech encoding at lower bit rates also improves noticeably, both for mono and stereo.

Encode 3D audio soundtracks for VR easily

This release comes with ambisonics support. Ambisonics can be used to encode 3D audio soundtracks for VR and 360 videos.

The Opus detector won't take much of your space

Rather than thousands of neurons and millions of weights running on a GPU, the Opus detector has just 4986 weights (which fit in less than 5 KB) and takes about 0.02% of a CPU to run in real time.

Additional updates

Improvements in security/hardening, the Voice Activity Detector (VAD), and speech/music classification using an RNN round out the release. The major bug fixes are CELT PLC and bandwidth detection fixes.

Read more about the release on Mozilla's official website. Also, check out a demo for more details.

YouTube starts testing AV1 video codec format, launches AV1 Beta Playlist
Google releases Oboe, a C++ library to build high-performance Android audio apps
How to perform Audio-Video-Image Scraping with Python
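To make the GRU description above concrete, here is a small NumPy sketch of a single GRU step; the weight names and sizes are generic illustrations and have nothing to do with Opus's actual detector weights.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h_prev, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU update (biases omitted for brevity).

    z decides how much of the memory to overwrite;
    r decides how much of the old memory to consult
    when proposing the new candidate state.
    """
    z = sigmoid(Wz @ x + Uz @ h_prev)                 # update gate
    r = sigmoid(Wr @ x + Ur @ h_prev)                 # reset gate
    h_cand = np.tanh(Wh @ x + Uh @ (r * h_prev))      # candidate state
    return (1.0 - z) * h_prev + z * h_cand            # blended new state

# Tiny example: 3 input features, hidden state of size 4, random weights.
rng = np.random.default_rng(0)
x, h = rng.standard_normal(3), np.zeros(4)
weights = [rng.standard_normal((4, 3)) if i % 2 == 0 else rng.standard_normal((4, 4))
           for i in range(6)]
h = gru_step(x, h, *weights)
print(h)
```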

libp2p: the modular P2P network stack by IPFS for better decentralized computing

Melisha Dsouza | 09 Oct 2018 | 4 min read
libp2p is a P2P network stack introduced by the IPFS community. libp2p is capable of discovering other peers and networks without resorting to centralized registries, which enables apps to work offline.

In July 2018, David Dias explained that the design of a 'location-addressed web' is the reason for its fragility: small errors in its backbone can shut down all running applications, and firewalls, routing issues, roaming issues, and network reliability interfere with users having a smooth experience on the web. Thus came the need to re-imagine the network stack.

To solve these problems, the InterPlanetary File System (IPFS) came into being. It is a decentralized web protocol based on content addressing, digital signatures, and peer-to-peer distribution. Today, IPFS is used to build completely distributed (and offline-capable) web apps. IPFS saves and distributes valuable datasets and moves billions of files. IPFS spawned several other projects, and libp2p is one of them. It enables users to run network applications free from runtime and address services while being independent of their location.

libp2p solves the complexity of dealing with numerous protocols in a decentralized environment. It effectively helps users connect with multiple peers using only a single protocol, thus paving the way for the next generation of decentralized systems.

libp2p features

#1 Transport module

libp2p enables application developers to pick the modules needed to run their application. These modules vary depending on the runtime they are executing in. A libp2p node uses one or more Transports to dial and listen for connections. These transport modules offer a clean interface for dialing and listening, defined by the interface-transport specification.

#2 No prior assigning of ports

Before libp2p came into existence, users would assign a listener to a port and then assign ports to special protocols, so that other hosts would know in advance which port to dial. With libp2p, users do not have to assign ports beforehand.

#3 Encrypted communication

To ensure an encrypted connection, libp2p also supports a set of modules that encrypt every communication established.

#4 Peer discovery and routing

A peer discovery module helps libp2p find peers to connect to. Peer routing finds other peers in the network by intentionally issuing queries, which can be iterative or recursive, until a peer is found. A content routing mechanism is used to find where content lives in the network.

Using libp2p in IPFS

libp2p has now been refactored into its own project so that other users can take advantage of it and be part of its ecosystem as well. It is what provides IPFS and other projects with P2P connectivity, support for multiple platforms and browsers, and many other advantages.

Users can use the libp2p module to create their own libp2p bundle, customizing it with features and default setup to match their needs. For example, the team has built a browser-ready version of libp2p that acts as the network layer of IPFS and leverages browser transports. You can head over to GitHub to check this example.

Keep Networks has also demonstrated the use of libp2p. Since participants need to know how to connect to each other, the team has come up with a simple example of peer-to-peer discovery, using a few pieces of the libp2p JS library to create nodes that discover and communicate with each other. You can head over to their blog to check out how the example works.

Another emerging use for libp2p is in blockchain applications. IPFS is used by blockchains and blockchain applications, and its subprotocols (libp2p, multihash, IPLD) can be extremely useful for blockchain standardization. A good example of this would be getting the Ethereum blockchain in the browser or in a Node.js process using libp2p and running it through ethereum-vm. That said, there are multiple challenges that developers will encounter while using libp2p for blockchain projects. Chris Pacia, the backend developer for OB1, explains how developers can face these challenges in his talk at QCon.

With all the buzz around blockchains and decentralized computing these days, libp2p is making its rounds on the internet. For more insights on libp2p, you can visit their official site.

Cloudflare's decentralized vision of the web: InterPlanetary File System (IPFS) Gateway to create distributed websites
Tim Berners-Lee plans to decentralize the web with 'Solid', an open-source project for "personal empowerment through data"
Introducing TLS 1.3, the first major overhaul of the TLS protocol with improved security and speed