Tech News

StackRox Kubernetes Security Platform 3.0 releases with advanced configuration and vulnerability management capabilities

Bhagyashree R
13 Nov 2019
3 min read
Today, StackRox, a Kubernetes-native container security platform provider, announced StackRox Kubernetes Security Platform 3.0. This release includes industry-first features for configuration and vulnerability management that enable businesses to achieve stronger protection of cloud-native, containerized applications.

In a press release, Wei Lien Dang, StackRox's vice president of product and co-founder, said, "When it comes to Kubernetes security, new challenges related to vulnerabilities and misconfigurations continue to emerge." He added, "DevOps and Security teams need solutions that quickly and easily solve these issues. StackRox 3.0 is the first container security platform with the capabilities orgs need to effectively deal with Kubernetes configurations and vulnerabilities, so they can reduce risk to what matters most – their applications and their customer's data."

What's new in StackRox Kubernetes Security Platform 3.0

Features for configuration management

- Interactive dashboards: Enable users to view risk-prioritized misconfigurations, easily drill down to critical information about a misconfiguration, and determine the context required for effective remediation.
- Kubernetes role-based access control (RBAC) assessment: StackRox continuously monitors permissions for users and service accounts to help mitigate against excessive privileges being granted.
- Kubernetes secrets access monitoring: The platform discovers secrets in Kubernetes and monitors which deployments can use them, to limit unnecessary access.
- Kubernetes-specific policy enforcement: StackRox identifies configurations in Kubernetes related to network exposure, privileged containers, root processes, and other factors to determine policy violations.

Advanced vulnerability management capabilities

- Interactive dashboards: StackRox Kubernetes Security Platform 3.0 has interactive views that provide risk-prioritized snapshots across your environment, highlighting vulnerabilities in both images and Kubernetes.
- Discovery of Kubernetes vulnerabilities: The platform gives you visibility into critical vulnerabilities in the Kubernetes platform itself, including those related to the Kubernetes API server disclosed by the Kubernetes product security team.
- Language-specific vulnerabilities: StackRox scans container images for additional, language-dependent vulnerabilities, providing greater coverage across containerized applications.

Along with the features above, StackRox Kubernetes Security Platform 3.0 adds support for various ecosystem platforms. These include CRI-O, the Open Container Initiative (OCI)-compliant implementation of the Kubernetes Container Runtime Interface (CRI), as well as Google Anthos, Microsoft Teams integration, and more.

These are a few of the latest capabilities shipped in StackRox Kubernetes Security Platform 3.0. To know more, you can check out live demos and Q&A by the StackRox team at KubeCon 2019, happening November 18-21 in San Diego, California, which brings together adopters and technologists from leading open source and cloud-native communities.

Kubernetes 1.16 releases with Endpoint Slices, general availability of Custom Resources, and other enhancements
StackRox App integrates into the Sumo Logic Dashboard for improved Kubernetes security
Microsoft launches Open Application Model (OAM) and Dapr to ease developments in Kubernetes and microservices
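To make the RBAC assessment described above concrete, here is a minimal sketch (plain Python, not StackRox's actual API) of the kind of check such a platform automates: flagging role rules that grant wildcard verbs or resources. The function and field names mirror Kubernetes ClusterRole manifests, but the helper itself is hypothetical.

```python
# Hypothetical sketch of an RBAC excessive-privilege check, the kind of
# analysis StackRox automates. Rules granting "*" verbs or resources are
# flagged as risky.

def find_risky_rules(cluster_role):
    """Return the rules in a ClusterRole-like dict that grant wildcard access."""
    risky = []
    for rule in cluster_role.get("rules", []):
        if "*" in rule.get("verbs", []) or "*" in rule.get("resources", []):
            risky.append(rule)
    return risky

admin_role = {
    "metadata": {"name": "overly-permissive"},
    "rules": [
        {"apiGroups": [""], "resources": ["pods"], "verbs": ["get", "list"]},
        {"apiGroups": ["*"], "resources": ["*"], "verbs": ["*"]},  # flagged
    ],
}

risky = find_risky_rules(admin_role)
print(len(risky))  # 1
```

A real deployment would feed this from the cluster API and track findings over time; the point is that the data is declarative and can be audited continuously.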

Google starts experimenting with Manifest V3 extension in Chrome 80 Canary build

Sugandha Lahoti
12 Nov 2019
3 min read
In spite of the overwhelmingly negative feedback on the Manifest V3 extension system, Google is standing firm on Chrome's ad-blocking changes. Last month, the company announced that it had begun testing its upcoming extension manifest V3 in the latest Chrome Canary build. As of October 31st, the Manifest V3 developer preview has been made available in the Chrome 80 Canary build.

Manifest V3 and why it can end multiple ad blockers

Manifest V3 has become a bone of contention for many ad-block companies. This is because Google developers have introduced an alternative to the webRequest API (previously used for ad blocking) named the declarativeNetRequest API, which limits the blocking version of the webRequest API. Chrome developers listed two reasons behind this update: one was performance (although that claim was nullified in a study by WhoTracks.me) and the other was a better privacy guarantee to users.

The declarativeNetRequest API currently imposes a limit of 30,000 rules, while most popular ad-blocking rule lists use almost 75,000 rules. Although Google claimed to be looking to increase this number, it did not commit to it. Many ad blocker maintainers and developers felt that the introduction of the declarativeNetRequest API could cripple many existing ad blockers. The lead developer of the popular ad blocker uBlock Origin, which relies on the original functionality of the webRequest API, explained in an email to The Register, "This breaks uBlock Origin and uMatrix, [which] are incompatible with the basic matching algorithm picked, ostensibly designed to enforce EasyList-like filter lists," adding, "A blocking webRequest API allows open-ended content blocker designs, not restricted to a specific design and limits dictated by the same company which states that content blockers are a threat to its business."

Many users also mentioned that Chrome is using its dominance in the browser market to dictate what type of extensions are developed and used. A user commented, "As Chrome is a dominant platform, our work is prevented from reaching users if it does not align with the business goals of Google, and extensions that users want on their devices are effectively censored out of existence." Others expressed that it is better to avoid all the drama by simply switching to another browser, mainly Firefox. "Or you could cease contributing to the Blink monopoly on the web and join us of Firefox. Microsoft is no longer challenging Google in this space," a user added.

Manifest V3 proposed changes

As part of the Chrome 80 Canary build, the Chrome team is continuing to iterate on the declarativeNetRequest API and its capabilities. With this release, background service workers (replacing background pages and scripts) are now available for testing in manifest version 2 and 3 extensions in Canary. Remotely-hosted code restrictions and host permissions changes are currently a work in progress, and the team is also working on combining the page_action and browser_action APIs into a single action API.

The Manifest V3 proposed changes are not finalized yet, and several features are still works in progress. The MV3 stable release is expected in 2020. As part of this launch, Google has created a Migrating to Manifest V3 guide that developers can use to migrate their existing extensions, along with a guide specifically for migrating from background pages to service workers.

Is it time to ditch Chrome? Ad blocking extensions will now only be for enterprise users
Chromium developers propose an alternative to webRequest API that could result in existing ad blockers' end
Google Chrome developers "clarify" the speculations around Manifest V3 after a study nullifies their performance hit argument
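The core of the dispute is the difference between running extension code on every request (webRequest) and handing the browser a static rule list it matches itself (declarativeNetRequest). A toy model in Python, purely for illustration and greatly simplified (real declarativeNetRequest rules have priorities, resource types, and regex filters), shows why a rule list is both cheaper and less expressive:

```python
# Toy model of declarative request blocking: rules are static data the
# browser matches against request URLs, instead of arbitrary extension
# code inspecting each request. Simplification: an "allow" match wins
# immediately; real declarativeNetRequest resolves conflicts by priority.

RULES = [
    {"id": 1, "action": "block", "urlFilter": "ads.example.com"},
    {"id": 2, "action": "allow", "urlFilter": "example.com/allowed"},
]

def evaluate(url, rules):
    """Return 'block' or 'allow' for a URL against a declarative rule list."""
    decision = "allow"
    for rule in rules:
        if rule["urlFilter"] in url:
            if rule["action"] == "allow":
                return "allow"
            decision = "block"
    return decision

print(evaluate("https://ads.example.com/banner.js", RULES))  # block
```

Because the rules are plain data, the browser can cap their number (the 30,000-rule limit above) and match them efficiently, but extensions lose the open-ended logic a blocking webRequest handler allowed.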

LG introduces Auptimizer, an open-source ML model optimization tool for efficient hyperparameter tuning at scale

Bhagyashree R
12 Nov 2019
4 min read
Last week, researchers from LG's Advanced AI team open-sourced Auptimizer, a general hyperparameter optimization (HPO) framework to help data scientists speed up machine learning model tuning.

What challenges Auptimizer aims to address

Hyperparameters are adjustable parameters that govern the training process of a machine learning model. They represent important properties of a model, for instance, the penalty in a logistic regression classifier or the learning rate for training a neural network. Tuning hyperparameters can be a very tedious task, especially when model training is computationally intensive.

There are currently both open-source and commercial automated HPO solutions, such as Google AutoML, Amazon SageMaker, and Optunity. However, using them at scale still poses challenges. In a paper explaining the motivation and system design behind Auptimizer, the team wrote, "But, in all cases, adopting new algorithms or accommodating new computing resources is still challenging." To address these challenges and more, the team has come up with Auptimizer, with which they aim to automate the tedious tasks involved in building a machine learning model. The initial open-sourced version provides the following advantages.

Easily switch among different HPO algorithms without rewriting the training script

Getting started with Auptimizer only requires adding a few lines of code, after which it guides you through setting up all other experiment-related configurations. This enables users to switch among different HPO algorithms and computing resources without rewriting their training script, which is one of the key hurdles in HPO adoption. Once set up, it runs and records sophisticated hyperparameter optimization experiments for you.
Orchestrating compute resources for faster hyperparameter tuning

Users can specify the resources to be used in experiment configurations, including processors, graphics chips, nodes, and public cloud instances such as Amazon Web Services EC2. Auptimizer keeps track of the resources in a persistent database and queries it to check whether the resources specified by the user are available. If a resource is available, Auptimizer takes it for job execution; if not, the system waits until it is free. Auptimizer is also compatible with existing resource management tools such as Boto 3.

A single interface to various sophisticated HPO algorithms

The current Auptimizer implementation provides a "single seamless access point to top-notch HPO algorithms" such as Spearmint, Hyperopt, Hyperband, and BOHB, and also supports simple random search and grid search. Users can integrate their own proprietary solutions and switch between different HPO algorithms with minimal changes to their existing code. [Table: HPO techniques currently supported by Auptimizer. Source: LG]

How Auptimizer works

[Figure: Auptimizer system design. Source: LG]

The key components of Auptimizer are the Proposer and the Resource Manager. The Proposer interface defines two functions: get_param(), which returns new hyperparameter values, and update(), which updates the history. The Resource Manager is responsible for automatically connecting compute resources to model training when they become available. Its get_available() function acts as the interface between Auptimizer and typical resource management and job scheduling tools, while its run() function, as the name suggests, executes the provided code. To enable reproducibility, Auptimizer can track all experiment history in a user-specified database.
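The Proposer interface above, with its get_param()/update() pair, can be sketched in a few lines of Python. This is an illustrative random-search implementation, not Auptimizer's actual class hierarchy; the class name, history field, and best() helper are inventions for the example.

```python
import random

# Illustrative sketch of a Proposer-style interface: get_param() proposes
# a new hyperparameter assignment, update() records the observed result.
# Not Auptimizer's actual API.

class RandomProposer:
    def __init__(self, space, seed=0):
        self.space = space            # {name: (low, high)}
        self.history = []             # [(params, score)]
        self.rng = random.Random(seed)

    def get_param(self):
        """Sample one value per hyperparameter from its range."""
        return {k: self.rng.uniform(lo, hi) for k, (lo, hi) in self.space.items()}

    def update(self, params, score):
        """Record the result of evaluating one configuration."""
        self.history.append((params, score))

    def best(self):
        return max(self.history, key=lambda h: h[1])

proposer = RandomProposer({"lr": (1e-4, 1e-1)})
for _ in range(20):
    p = proposer.get_param()
    score = 1.0 - abs(p["lr"] - 0.01)   # stand-in for a real training run
    proposer.update(p, score)
```

Swapping random search for Hyperband or Bayesian optimization only changes how get_param() picks the next point, which is exactly why a common interface lets users switch algorithms without touching their training script.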
Users can also visualize results from the history with a basic visualization tool that comes integrated with Auptimizer; for further analysis, they can directly access the results stored in the database. Sharing the future vision for Auptimizer, the team wrote, "As development progress, Auptimizer will support the end-to-end development cycle for building models for edge devices including robust support for model compression and neural architecture search."

This article gave you a basic introduction to Auptimizer. Check out the paper, "Auptimizer - an Extensible, Open-Source Framework for Hyperparameter Tuning," and the GitHub repository to know more.

Facebook open-sources Hyperparameter autotuning for fastText to automatically find best hyperparameters for your dataset
PyTorch announces the availability of PyTorch Hub for improving machine learning research reproducibility
Training Deep Convolutional GANs to generate Anime Characters [Tutorial]

Red Hat’s Quarkus announces plans for Quarkus 1.0, releases its rc1 

Vincy Davis
11 Nov 2019
3 min read
Update: On 25th November, the Quarkus team announced the release of the Quarkus 1.0.0.Final bits. Head over to the Quarkus blog for more details on the official announcement.

Last week, Red Hat's Quarkus, the Kubernetes-native Java framework for GraalVM and OpenJDK HotSpot, announced the availability of its first release candidate and notified users that the first stable version will be released by the end of this month.

Launched in March this year, the Quarkus framework uses Java libraries and standards to provide an effective solution for running Java in new deployment environments like serverless, microservices, containers, Kubernetes, and more. Java developers can employ this framework to build apps with faster startup times and less memory use than traditional Java-based microservices frameworks. It also provides flexible, easy-to-use APIs that help developers build cloud-native apps with best-of-breed frameworks.

"The community has worked really hard to up the quality of Quarkus in the last few weeks: bug fixes, documentation improvements, new extensions and above all upping the standards for developer experience," states the Quarkus team.

Latest updates added in Quarkus 1.0

- A new reactive core based on Vert.x with support for both reactive and imperative programming models, aiming to make reactive programming a first-class feature of Quarkus.
- A new non-blocking security layer that allows reactive authentication and authorization, and enables reactive security operations to integrate with Vert.x.
- Improved Spring API compatibility, including Spring Web and Spring Data JPA, as well as Spring DI.
- The Quarkus ecosystem, also called the "universe": a set of extensions that fully support native compilation via GraalVM native image.
- Support for Java 8, 11, and 13 when using Quarkus on the JVM, with Java 11 native compilation to follow in the near future.
Red Hat says, "Looking ahead, the community is focused on adding additional extensions like enhanced Spring API compatibility, improved observability, and support for long-running transactions." Many users are excited about Quarkus and are looking forward to trying the stable version.

https://twitter.com/zemiak/status/1192125163472637952
https://twitter.com/loicrouchon/status/1192206531045085186
https://twitter.com/lasombra_br/status/1192114234349563905

How Quarkus brings Java into the modern world of enterprise tech
Apple shares tentative goals for WebKit 2020
Apple introduces Swift Numerics to support numerical computing in Swift
Rust 1.39 releases with stable version of async-await syntax, better ergonomics for match guards, attributes on function parameters, and more
Fastly announces the next-gen edge computing services available in private beta

The Union Types 2.0 proposal gets a go-ahead for PHP 8.0

Bhagyashree R
11 Nov 2019
3 min read
Last week, the Union Types 2.0 RFC by Nikita Popov, a software developer at JetBrains, was accepted for PHP 8.0 with 61 votes in favor and 5 against. Popov submitted this RFC as a GitHub pull request to check whether it would be a better medium for RFC proposals in the future, and it got a positive response from many PHP developers.

https://twitter.com/enunomaduro/status/1169179343580516352

What the Union Types 2.0 RFC proposes

PHP type declarations allow you to specify the types of parameters and return values acceptable to a function. Though for most functions the acceptable parameters and possible return values are of only one type, there are cases when they can be of multiple types. Currently, PHP supports two special union types. One is the nullable type, specified using the ?Type syntax, which marks a parameter or return value as nullable: in addition to the specified type, NULL can also be passed as an argument or return value. The other is 'array' or 'Traversable', specified using the special iterable type.

The Union Types 2.0 RFC proposes adding support for arbitrary union types, specified using the syntax T1|T2|... . Support for union types will enable developers to move more type information from phpdoc into function signatures. Other advantages of arbitrary union types include earlier detection of mistakes and less boilerplate-y code compared to phpdoc. This will also ensure that types are checked during inheritance and are available through Reflection. The RFC does not contain any backward-incompatible changes; however, existing ReflectionType-based code will have to be adjusted to support processing code that uses union types. The RFC for union types was first proposed 4 years ago by PHP open source contributors Levi Morrison and Bob Weinand.
This new proposal has a few updates compared to the previous one, which Popov shared on the PHP mailing list thread:

- Updated to specify interaction with new language features, like full variance and property types.
- Updated for the use of the ?Type syntax rather than the Type|null syntax.
- It only supports "false" as a pseudo-type, not "true".
- Slightly simplified semantics for the coercive typing mode.

In a Reddit discussion, many developers welcomed the decision. A user commented, "PHP 8 will be blazing. I can't wait for it." Others felt that this is a step backward: "Feels like a step backward. IMHO, a better solution would have been to add function overloading to the language, i.e. give the ability to add many methods with the same name, but different argument types," a user expressed.

You can read the Union Types 2.0 RFC to know more in detail, and follow the discussion about this RFC on GitHub.

Symfony leaves PHP-FIG, the framework interoperability group
Oracle releases GraphPipe: An open-source tool that standardizes machine learning model deployment
Connecting your data to MongoDB using PyMongo and PHP
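As a cross-language illustration of what moving union types from phpdoc into signatures buys, PHP's proposed T1|T2 syntax closely parallels Python's typing unions. The sketch below is Python, not PHP; the function is hypothetical and only demonstrates declaring "accepts any of these types" in the signature instead of a comment:

```python
# Cross-language illustration of a union-typed signature: the accepted
# types live in the declaration itself rather than in a docstring/phpdoc
# comment, so tools can check them. Python analogue of PHP's int|float|string.

from typing import Union

def to_number(value: Union[int, float, str]) -> float:
    """Accept an int, a float, or a numeric string and return a float."""
    if isinstance(value, str):
        return float(value)
    return float(value)

print(to_number("3.14"))  # 3.14
print(to_number(2))       # 2.0
```

In PHP 8.0 the equivalent declaration would read `function toNumber(int|float|string $value): float`, with the engine enforcing the types at call time rather than leaving them as documentation.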

Apple shares tentative goals for WebKit 2020

Sugandha Lahoti
11 Nov 2019
3 min read
Apple has released a list of tentative goals for WebKit in 2020, catering to WebKit users as well as Web, native, and WebKit developers. These features are tentative, and there is no guarantee that they will ship at all. Before committing to new features, Apple systematically weighs a number of factors: it looks at developer interest and at harmful aspects associated with a feature, and sometimes takes feedback and suggestions from high-value websites.

WebKit 2020 enhancements for WebKit users

Primarily, WebKit is focused on improving performance as well as privacy and security. Performance ideas suggested include media query change handling, no sync IPC for cookies, fast for-of iteration, Turbo DFG, async gestures, fast scrolling on macOS, global GC, and Service Worker declarative routing. For privacy, Apple is focusing on addressing ITP bypasses, a logged-in API, in-app browser privacy, and PCM with fraud prevention. It is also working on improving authentication, network security, JavaScript hardening, WebCore hardening, and sandbox hardening.

Improvements in WebKit 2020 for Web developers

For the web platform, the focus is on three qualities: catch-up, innovation, and quality. Apple is planning improvements in graphics and animations (CSS overscroll-behavior, WebGL 2, Web Animations), media (the Media Session standard, MediaStream Recording, the Picture-in-Picture API), and DOM, JavaScript, and text. It is also looking to improve CSS Shadow Parts, stylable pieces, JS built-in modules, and the Undo Web API, and to work on WPT (Web Platform Tests).

Changes suggested for native developers

For native developers on the obsolete legacy WebKit, the following changes are suggested:

- WKWebView API needed for migration
- Fix cookie flakiness due to multiple process pools
- WKWebView APIs for media

Enhancements for WebKit developers

The focus is on improving architecture health and services & tools.
Changes suggested are:

- Define an "intent to implement" style process
- Faster builds (finish unified builds)
- Next-gen layout for line layout
- Regression test debt repayment
- IOSurface in Simulator
- EWS (Early Warning System) improvements
- Buildbot 2.0
- WebKit on GitHub as a project (year 1 of a multi-year project)

On Hacker News, this topic was widely discussed, with people pointing out what they want to see in WebKit. One commenter wrote, "Two WebKit goals I'd like to see for 2020: (1) Allow non-WebKit browsers on iOS (start outperforming your competition instead of merely banning your competition), and (2) Make iOS the best platform for powerful web apps instead of the worst, the leader instead of the spoiler." Another pointed out, "It would be great if SVG rendering, used for diagrams, was of equal quality to Firefox." One said, "WebKit and the Safari browsers by extension should have full and proper support for Service Workers and PWAs on par with other browsers."

For a full list of updates, please see the WebKit Wiki page.

Apple introduces Swift Numerics to support numerical computing in Swift
Apple announces 'WebKit Tracking Prevention Policy' that considers web tracking as a security vulnerability
Apple's macOS Catalina in major turmoil as it kills iTunes and drops support for 32-bit applications

Apple introduces Swift Numerics to support numerical computing in Swift

Bhagyashree R
08 Nov 2019
2 min read
Yesterday, Steve Canon, a member of Apple's Swift Standard Library team, announced a new open-source project called Swift Numerics. The goal behind this project is to enable the use of the Swift language in new domains of programming.

What is Swift Numerics

Swift Numerics is a Swift package containing a set of fine-grained modules. These modules fall broadly into two categories: modules that are too specialized to be included in the standard library but general enough to live in a single common package, and modules that are "under active development toward possible future inclusion in the standard library."

Currently, Swift Numerics ships its two most-requested modules: Real and Complex. The Real module provides the basic math functions proposed in SE-0246. That proposal was accepted, but due to some limitations in the compiler it is not yet possible to add the new functions directly to the standard library; Real provides them in a separate module so that developers can start using them right away in their projects.

The Complex module introduces a Complex number type over an underlying Real type. It includes the usual arithmetic operators for complex numbers and conforms to the usual protocols such as Equatable, Hashable, Codable, and Numeric. Support for complex numbers can be especially useful when working with Fourier transforms and signal processing algorithms.

The modules included in Swift Numerics have minimal dependencies; the current modules only require the availability of the Swift and C standard libraries and the runtime support provided by compiler-rt. The Swift Numerics package is open-sourced under the same license and contribution guidelines as the Swift project (Apache License 2.0).

In a discussion on Hacker News, many developers shared their views on Swift Numerics. A user commented, "Really looking forward to ShapedArray. Eventually, a lot of what one might do with Python may be available in Swift."

Read the official announcement by Apple to know more about Swift Numerics, and check out its GitHub repository.

Swift shares diagnostic architecture improvements that will be part of the Swift 5.2 release
Developers from the Swift for TensorFlow project propose adding first-class differentiable programming to Swift
Declarative UI programming faceoff: Apple's SwiftUI vs Google's Flutter
Introducing SwiftWasm, a tool for compiling Swift to WebAssembly
Swift is improving the UI of its generics model with the "reverse generics" system
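For readers unfamiliar with complex arithmetic, Python's built-in complex type and cmath module offer roughly what the Complex module brings to Swift: arithmetic operators plus elementary functions over complex values. This is a Python analogy, not Swift Numerics code:

```python
# Python analogy for what Swift Numerics' Complex module provides:
# a complex number type with arithmetic operators and math functions,
# the building blocks of Fourier transforms and signal processing.

import cmath

z = complex(3, 4)
w = complex(1, -2)

print(z + w)    # (4+2j)
print(z * w)    # (11-2j)
print(abs(z))   # 5.0  (magnitude)
print(cmath.exp(1j * cmath.pi).real)  # approximately -1.0 (Euler's identity)
```

In Swift the equivalent would be a `Complex<Double>` value with the same operators, generic over any type conforming to the Real protocol.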

Fastly announces the next-gen edge computing services available in private beta

Fatema Patrawala
08 Nov 2019
4 min read
Fastly, a San Francisco-based startup providing an edge cloud platform, yesterday announced the private beta launch of Compute@Edge, its new edge computing service. Compute@Edge is a powerful language-agnostic compute environment, and this milestone marks an evolution of Fastly's edge computing capabilities and the company's innovation in the serverless space.

https://twitter.com/fastly/status/1192080450069643264

Fastly's Compute@Edge is designed to empower developers to build far more advanced edge applications with greater security, more robust logic, and new levels of performance. Developers can also create new and improved digital experiences with their own technology choices around the cloud platforms, services, and programming languages they need. Rather than have them spend time on operational overhead, the company's goal is to continue reinventing the way end users live, work, and play on the web; Compute@Edge gives developers the freedom to push complex logic closer to end users.

"When we started Fastly, we sought to build a platform with the power to realize the future of edge computing — from our software-defined modern network to our point of presence design, everything has led us to this point," explained Tyler McMullen, CTO of Fastly. "With this launch, we're excited to double down on that vision and work with enterprises to help them build truly complete applications in an environment that offers new levels of stability, security, and global scale."

We had the opportunity to interview Fastly's CTO Tyler McMullen a few months back, discussing Fastly's Lucet and the future of WebAssembly and Rust, among other things. You can read the full interview here.

Fastly Compute@Edge leverages speed for global scale and security

Fastly's Compute@Edge environment promises a startup time of 35.4 microseconds, which the company says is 100x faster than any other solution in the market. Additionally, Compute@Edge is powered by Fastly's open-source WebAssembly compiler and runtime, Lucet, and supports Rust as a second language in addition to the Varnish Configuration Language (VCL).

Other benefits of Compute@Edge include:

- Code runs around the world instead of in a single region, allowing developers to reduce code execution latency and further optimize performance without worrying about managing the underlying infrastructure
- The unmatched speed at which the environment operates, combined with Fastly's isolated sandboxing technology, reduces the risk of accidental data leakage; with a "burn-after-reading" approach to request memory, entire classes of vulnerabilities are eliminated
- Developers can serve GraphQL from the network edge and deliver more personalized experiences
- Developers can build their own customized API protection logic
- With manifest manipulation, developers can deliver content with a "best-performance-wins" approach, like multi-CDN live streams that run smoothly for users around the world

Fastly has operated in the serverless market since its founding in 2011 through its Edge Cloud Platform, including products like Full Site Delivery, Load Balancer, DDoS, and Web Application Firewall (WAF). To date, Fastly's serverless computing offering has focused on delivery-centric use cases via its VCL-powered programmable edge; with the introduction of Compute@Edge, Fastly unlocks even more powerful and widely applicable computing capabilities.

To learn more about Fastly's edge computing and cloud services, you can visit its official blog. Developers who are interested in the private beta can sign up on this page.
Fastly SVP Adam Denenberg on Fastly's new edge resources, edge computing, fog computing, and more
Fastly, edge cloud platform, files for IPO
Fastly open sources Lucet, a native WebAssembly compiler and runtime
"Rust is the future of systems programming, C is the new Assembly": Intel principal engineer, Josh Triplett
Wasmer introduces WebAssembly Interfaces for validating the imports and exports of a Wasm module

Neo4j introduces Aura, a new cloud service to supply a flexible, reliable and developer-friendly graph database

Vincy Davis
07 Nov 2019
2 min read
Last year, Neo4j announced the availability of its Enterprise Edition under a commercial license aimed at larger companies. Yesterday, the graph database management firm introduced a new managed cloud service called Aura, directed at smaller companies. The new service is developed for the market audience between the larger companies and Neo4j's open source product.

https://twitter.com/kfreytag/status/1192076546070253568

Aura aims to supply a flexible, reliable, and developer-friendly graph database. In an interview with TechCrunch, Emil Eifrem, CEO and co-founder of Neo4j, says, "To get started with, an enterprise project can run hundreds of thousands of dollars per year. Whereas with Aura, you can get started for about 50 bucks a month, and that means that it opens it up to new segments of the market."

Aura offers a definite value proposition, a flexible pricing model, and other management and security updates, and it scales with a company's growing data requirements. In simple words, Aura seeks to simplify developers' work by letting them focus on building applications while Neo4j takes care of the database.

Many developers are excited to try out Aura.

https://twitter.com/eszterbsz/status/1192359850375884805
https://twitter.com/IntriguingNW/status/1192352241853849600
https://twitter.com/sixwing/status/1192090394244333569

Neo4j rewarded with $80M Series E, plans to expand company
Neo4j 3.4 aims to make connected data even more accessible
Introducing PostgREST, a REST API for any PostgreSQL database written in Haskell
Linux Foundation introduces strict telemetry data collection and usage policy for all its projects
MongoDB is partnering with Alibaba
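For readers new to graph databases like the one Aura hosts, the underlying model is a property graph: nodes with properties connected by typed relationships, queried by traversal. The sketch below is plain Python, not Neo4j's API or the Cypher query language; all names are illustrative.

```python
# Toy illustration (plain Python, not Neo4j/Cypher) of the property-graph
# model: nodes carry properties, edges carry a relationship type, and
# queries traverse relationships rather than joining tables.

nodes = {
    1: {"label": "Person", "name": "Alice"},
    2: {"label": "Person", "name": "Bob"},
    3: {"label": "Company", "name": "Acme"},
}
edges = [
    (1, "WORKS_AT", 3),
    (2, "WORKS_AT", 3),
    (1, "KNOWS", 2),
]

def neighbors(node_id, rel_type):
    """Follow outgoing edges of a given relationship type."""
    return [dst for src, rel, dst in edges if src == node_id and rel == rel_type]

# Who does Alice work with? Follow WORKS_AT to her company, then back.
company = neighbors(1, "WORKS_AT")[0]
coworkers = [n for n, rel, c in edges if rel == "WORKS_AT" and c == company and n != 1]
print([nodes[n]["name"] for n in coworkers])  # ['Bob']
```

In Cypher the same question would be a single pattern match (roughly `MATCH (a:Person {name:'Alice'})-[:WORKS_AT]->(c)<-[:WORKS_AT]-(co) RETURN co`); a managed service like Aura runs that engine so developers only write the queries.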


Yubico reveals Biometric YubiKey at Microsoft Ignite

Fatema Patrawala
07 Nov 2019
4 min read
On Tuesday, at the ongoing Microsoft Ignite, Yubico, a leading provider of authentication and encryption hardware, announced the long-awaited YubiKey Bio, the first YubiKey to support fingerprint recognition for secure and seamless passwordless logins. The team says this has been one of the most requested features from YubiKey users.

Key features in YubiKey Bio

The YubiKey Bio delivers the convenience of biometric login with the added benefits of Yubico's hallmark security, reliability, and durability. Biometric fingerprint credentials are stored in the secure element, which helps protect them against physical attacks. As a result, a single, trusted hardware-backed root of trust delivers a seamless login experience across different devices, operating systems, and applications.

With support for both biometric- and PIN-based login, the YubiKey Bio leverages the full range of multi-factor authentication (MFA) capabilities outlined in the FIDO2 and WebAuthn specifications. In keeping with Yubico's design philosophy, the YubiKey Bio requires no batteries, drivers, or associated software. The key integrates with the native biometric enrollment and management features in the latest versions of Windows 10 and Azure Active Directory, making it quick and convenient for users to adopt a phishing-resistant passwordless login flow.

“As a result of close collaboration between our engineering teams, Yubico is bringing strong hardware-backed biometric authentication to market to provide a seamless experience for our customers,” said Joy Chik, Corporate VP of Identity, Microsoft. “This new innovation will help drive adoption of safer passwordless sign-in so everyone can be more secure and productive.”

Over the past few years, Yubico has worked with Microsoft to drive the future of passwordless authentication through the creation of the FIDO2 and WebAuthn open authentication standards.
The company has also built YubiKey integrations with the full suite of Microsoft products, including Windows 10 with Azure Active Directory and Microsoft Edge with Microsoft Accounts. Microsoft Ignite attendees saw a live demo of passwordless sign-in to Azure Active Directory accounts using the YubiKey Bio. The team also promises that by early next year, enterprise users will be able to authenticate to on-premises Active Directory integrated applications and resources, and to get seamless single sign-on (SSO) to cloud- and SAML-based applications. To take advantage of strong YubiKey authentication in Azure Active Directory environments, users can refer to this page for more information.

On Hacker News, the announcement received mixed reactions: some favour biometric authentication, while others believe stronger passwords are still the better choice. One commenter wrote:

“1) This is an upgrade to the touch sensitive button that's on all YubiKeys today. The reason you have to touch the key is so that if an attacker gains access to your computer with an attached Yubikey, they will not be able to use it (it requires physical presence). Now that touch sensitive button becomes a fingerprint reader, so it can't be activated by just anyone. 2) The computer/OS doesn't have to support anything for this added feature.”

Another user responded:

“A fingerprint is only going to stop a very opportunistic attacker. Someone who already has your desktop and app password and physical access to your desktop can probably get a fingerprint off a glass, cup or something else. I don't think this product is as useful as it seems at first glance. Using stronger passwords is probably just as safe.”

Google updates biometric authentication for Android P, introduces BiometricPrompt API
GitHub now supports two-factor authentication with security keys using the WebAuthn API
You can now use fingerprint or screen lock instead of passwords when visiting certain Google services thanks to FIDO2 based authentication
Microsoft and Cisco propose ideas for a Biometric privacy law after the state of Illinois passed one
SafeMessage: An AI-based biometric authentication solution for messaging platforms
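The phishing resistance of FIDO2/WebAuthn comes from the authenticator signing a server-issued challenge together with the requesting origin, so a response produced for one site cannot be replayed against another. The sketch below illustrates only that idea: real WebAuthn uses asymmetric signatures (e.g. ES256) held in the key's secure element, and the HMAC shared key, function names, and origins here are simplified stand-ins.

```python
import hashlib
import hmac
import os

def authenticator_sign(device_key: bytes, challenge: bytes, origin: str) -> bytes:
    # The signature covers both the server challenge AND the origin.
    # Binding the origin is what makes the flow phishing-resistant: a
    # response produced for evil.example is useless against example.com.
    return hmac.new(device_key, challenge + origin.encode(), hashlib.sha256).digest()

def server_verify(device_key: bytes, challenge: bytes,
                  expected_origin: str, origin: str, signature: bytes) -> bool:
    if origin != expected_origin:
        return False
    expected = hmac.new(device_key, challenge + origin.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

key = os.urandom(32)        # stand-in for the credential in the secure element
challenge = os.urandom(16)  # fresh per-login server challenge

good = authenticator_sign(key, challenge, "https://example.com")
print(server_verify(key, challenge, "https://example.com", "https://example.com", good))    # True

phished = authenticator_sign(key, challenge, "https://evil.example")
print(server_verify(key, challenge, "https://example.com", "https://evil.example", phished))  # False
```

Note the design point: the server never learns a reusable secret such as a password; it only checks a one-time, origin-bound response.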

Microsoft’s Visual Studio IntelliCode gets improved features: Whole-line code completions, AI-assisted refactoring, and more!

Savia Lobo
06 Nov 2019
3 min read
At Ignite 2019, Microsoft shared several improvements to Visual Studio IntelliCode, its tool for AI-assisted coding that offers intelligent suggestions to improve code quality and productivity.

Amanda Silver, a director of Microsoft's developer division, writes in her official blog post, “At Microsoft Ignite, we showed a vision of how AI can be applied to developer tools. After talking with thousands of developers over the last couple years, we found that the most highly effective assistance can only come from one source: the collective knowledge of the open source, GitHub community.”

Latest improvements in Microsoft's IntelliCode

Whole-line code completions and AI-assisted suggestions

IntelliCode now provides whole-line code completion suggestions. To do so, it extends the GPT-2 transformer language model to learn about programming languages and coding patterns. The GPT model architecture, developed by OpenAI, can generate conditional synthetic text examples without needing domain-specific training datasets.

For the initial language-specific base models, the team adopted an unsupervised learning approach that learns from over 3,000 top GitHub repositories. The base model extracts statistical coding patterns and learns the intricacies of programming languages from these repositories to assist developers with their coding. As the user types, IntelliCode uses semantic information and sourced patterns from the code context to predict the most likely completion in line with the user's code. IntelliCode has also extended its machine-learning training capabilities beyond the initial base model, enabling teams to train their own team completions.

AI-assisted refactoring detection

IntelliCode suggests code changes in the IDE and locally synthesizes, on demand, edit scripts from any set of repetitive pattern changes. This saves developers considerable time thanks to a technology called program synthesis, or programming-by-examples (PBE).
PBE has been developed at Microsoft by the PROSE team and has been applied to various products, including Flash Fill in Excel and webpage table extraction in Power BI. “IntelliCode advances the state-of-the-art in PBE by allowing patterns to be learned from noisy traces as opposed to explicitly provided examples, without any additional steps on your part,” Silver writes. On security, she notes, “our PROSE-based models work entirely locally, so your code never leaves your machine.”

She also said that over the past few months, the team has used unsupervised machine learning techniques to create a model that is predictive for Python. Silver told VentureBeat, “So the result is that as you're coding Python, it actually feels more like the editing experience that you might get from a statically typed programming language — without actually having to make Python statically typed. And so as you type, you get statement completion for APIs and you can get argument completion that's based on the context of the code that you've written thus far.”

Many users are impressed with the improvements to IntelliCode. One tweeted, “Training ML against repos is super clever.”

https://twitter.com/nathaniel_avery/status/1191760019479519232
https://twitter.com/raschneiderman/status/1191704366035734530

To learn more about the IntelliCode improvements in detail, read Microsoft's official blog post.

Microsoft releases TypeScript 3.7 with much-awaited features like Optional Chaining, Assertion functions and more
Mapbox introduces MARTINI, a client-side terrain mesh generation code
DeepMind AI's AlphaStar achieves Grandmaster level in StarCraft II with 99.8% efficiency
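To make the programming-by-examples idea concrete, here is a deliberately tiny sketch: it learns a constant prefix/suffix edit from a couple of input-output examples and applies it to a new input. The real PROSE engine synthesizes far richer string-transformation programs, including from noisy traces; the function names and examples below are invented for illustration.

```python
# Toy PBE: synthesize a "prefix + input + suffix" edit script from examples.

def synthesize(examples):
    """Return (prefix, suffix) consistent with every (inp, out) pair, or None."""
    inp0, out0 = examples[0]
    idx = out0.find(inp0)
    if idx < 0:
        return None
    prefix, suffix = out0[:idx], out0[idx + len(inp0):]
    # A candidate learned from one example must explain all the others.
    for inp, out in examples:
        if prefix + inp + suffix != out:
            return None
    return prefix, suffix

def apply_edit(edit, inp):
    prefix, suffix = edit
    return prefix + inp + suffix

# Two demonstrations of a repetitive edit...
examples = [("foo", "log_foo.txt"), ("bar", "log_bar.txt")]
edit = synthesize(examples)   # learns ("log_", ".txt")
# ...are enough to apply the same edit to unseen input.
print(apply_edit(edit, "baz"))  # log_baz.txt
```

The appeal of this approach is that the "program" is recovered from observed behaviour rather than written by hand, which is what lets an IDE generalize a repeated refactoring after seeing it a few times.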


Researchers reveal Light Commands: laser-based audio injection attacks on voice-control devices like Alexa, Siri and Google Assistant

Fatema Patrawala
06 Nov 2019
5 min read
Researchers from the University of Electro-Communications in Tokyo and the University of Michigan released a paper on Monday that raises alarming questions about the security of voice-control devices. In the paper, the researchers present ways in which they were able to manipulate Siri, Alexa, and other devices using “Light Commands”, a vulnerability in MEMS (micro-electro-mechanical systems) microphones.

Light Commands, discovered in May this year, allows attackers to remotely inject inaudible and invisible commands into voice assistants such as Google Assistant, Amazon Alexa, Facebook Portal, and Apple Siri using light. The vulnerability can become more dangerous as voice-control devices gain popularity.

How Light Commands work

Consumers use voice-control devices for many applications, for example to unlock doors or make online purchases with simple voice commands. The research team tested a handful of such devices and found that Light Commands can work on any smart speaker or phone that uses MEMS microphones. These systems contain tiny components that convert audio signals into electrical signals. By shining a laser through a window at the microphones inside smart speakers, tablets, or phones, a far-away attacker can remotely send inaudible and potentially invisible commands that are then acted upon by Alexa, Portal, Google Assistant, or Siri.

Many users do not enable voice authentication or passwords to protect their devices from unauthorized use. Hence, an attacker can use light-injected voice commands to unlock a victim's smart-lock-protected home door, or even locate, unlock, and start various vehicles.

The researchers also note that Light Commands can be executed at long range. To prove this, they demonstrated the attack in a 110-meter hallway, the longest available to them during the research.
Below is a reference image in which the team demonstrates the attack; they have also captured several videos of the demonstration.

Source: Light Commands research paper. Experimental setup for exploring attack range at the 110 m long corridor.

The Light Commands attack can be executed using a simple laser pointer, a laser driver, and a sound amplifier. A telephoto lens can be used to focus the laser for long-range attacks.

Detecting Light Commands attacks

The researchers also describe how to detect a Light Commands attack. Although command injection via light makes no sound, an attentive user can notice the attacker's light beam reflected on the target device. Alternatively, one can monitor the device's verbal response and light-pattern changes, both of which serve as command confirmation. The researchers add that, so far, they have seen no cases of the attack being maliciously exploited.

Limitations in executing the attack

Light Commands has some practical limitations:

Lasers must point directly at a specific component within the microphone to transmit audio information.
Attackers need a direct line of sight and a clear path for the laser to travel.
Most light signals are visible to the naked eye and would expose the attacker; voice-control devices also respond out loud when activated, which could alert nearby people of foul play.
Controlling advanced lasers with precision requires a certain degree of experience and equipment, so there is a high barrier to entry for long-range attacks.

How to mitigate such attacks

In the paper, the researchers suggest adding an additional layer of authentication to voice assistants to mitigate the attack. They also suggest that manufacturers use sensor fusion techniques, such as acquiring audio from multiple microphones. When the attacker uses a single laser, only one microphone receives a signal while the others receive nothing, so manufacturers can detect such anomalies and ignore the injected commands.

Another proposed approach is reducing the amount of light reaching the microphone's diaphragm, either with a barrier that physically blocks straight light beams to eliminate the line of sight to the diaphragm, or with a non-transparent cover on top of the microphone hole. However, the researchers concede that such physical barriers are only effective up to a point, as an attacker can always increase the laser power in an attempt to pass through them or create a new light path.

Users discuss the photoacoustic effect at play

On Hacker News, the research has drawn much attention, with users applauding the demonstration. Some discuss the price and capabilities of the laser pointers and drivers needed to attack voice assistants. Others discuss the physics involved; one commenter wrote:

“I think the photoacoustic effect is at play here. Discovered by Alexander Graham Bell has a variety of applications. It can be used to detect trace gases in gas mixtures at the parts-per-trillion level among other things. An optical beam chopped at an audio frequency goes through a gas cell. If it is absorbed, there's a pressure wave at the chopping frequency proportional to the absorption. If not, there isn't. Synchronous detection (e.g. lock in amplifiers) knock out any signal not at the chopping frequency. You can see even tiny signals when there is no background. Hearing aid microphones make excellent and inexpensive detectors so I think that the mics in modern phones would be comparable. Contrast this with standard methods where one passes a light beam through a cell into a detector, looking for a small change in a large signal. https://chem.libretexts.org/Bookshelves/Physical_and_Theoret... Hats off to the Michigan team for this very clever (and unnerving) demonstration.”

Smart Spies attack: Alexa and Google Assistant can eavesdrop or vish (voice phish) unsuspecting users, disclose researchers from SRLabs
How Chaos Engineering can help predict and prevent cyber-attacks preemptively
An unpatched security issue in the Kubernetes API is vulnerable to a “billion laughs” attack
Intel's DDIO and RDMA enabled microprocessors vulnerable to new NetCAT attack
Wikipedia hit by massive DDoS (Distributed Denial of Service) attack; goes offline in many countries
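The multi-microphone sensor-fusion defence the researchers propose can be sketched in a few lines: genuine speech reaches every microphone in an array with comparable energy, while a focused laser excites only the one diaphragm it hits. The sample readings and the threshold below are invented purely for illustration.

```python
# Toy anomaly detector for laser-injected commands on a microphone array.

def rms(samples):
    """Root-mean-square energy of one channel's samples."""
    return (sum(s * s for s in samples) / len(samples)) ** 0.5

def looks_like_laser_injection(mic_channels, ratio_threshold=10.0):
    """Flag a command when one channel carries signal the others barely see."""
    energies = sorted(rms(ch) for ch in mic_channels)
    quietest, loudest = energies[0], energies[-1]
    if quietest == 0:
        return loudest > 0  # strong signal on one mic, silence on another
    return loudest / quietest > ratio_threshold

# A real voice excites all three mics with similar energy...
voice = [[0.50, -0.40, 0.45], [0.48, -0.41, 0.44], [0.47, -0.39, 0.46]]
# ...while a laser focused on one diaphragm leaves the others nearly silent.
laser = [[0.50, -0.40, 0.45], [0.0, 0.0, 0.0], [0.001, -0.001, 0.0]]

print(looks_like_laser_injection(voice))  # False
print(looks_like_laser_injection(laser))  # True
```

A production implementation would compare frequency content and timing across channels rather than raw energy, but the principle the paper describes is the same: a single-point light source cannot mimic an acoustic wavefront arriving at several sensors.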


Microsoft announces Azure Quantum, an open cloud ecosystem to learn and build scalable quantum solutions

Savia Lobo
05 Nov 2019
3 min read
Yesterday, at Microsoft Ignite 2019 in Orlando, the company released a preview of Azure Quantum, its first full-stack, scalable, general open cloud ecosystem for quantum computing. For developers, Microsoft has created the open-source Quantum Development Kit (QDK), which includes the tools and resources needed to start learning and building quantum solutions.

Azure Quantum is a set of quantum services, ranging from pre-built solutions and software to quantum hardware, giving developers and customers access to some of the most competitive quantum offerings on the market. For this offering, Microsoft has partnered with 1QBit, Honeywell, IonQ, and QCI.

With the Azure Quantum service, anyone can gain deeper insight into quantum computing through a series of tools and learning tutorials, such as the quantum katas. It also allows developers to write programs with Q# and the QDK and to experiment with running the code against simulators and a variety of quantum hardware. Customers can also solve complex business challenges with pre-built solutions and algorithms running in Azure.

According to Wired, “Azure Quantum has similarities to a service from IBM, which has offered free and paid access to prototype quantum computers since 2016. Google, which said last week that one of its quantum processors had achieved a milestone known as ‘quantum supremacy’ by outperforming a top supercomputer, has said it will soon offer remote access to quantum hardware to select companies.”

Microsoft's Azure Quantum model is closer to the existing computing industry, where cloud providers let customers choose processors from companies such as Intel and AMD, says William Hurley, CEO of the startup Strangeworks, which offers services for programmers to build and collaborate with quantum computing tools from IBM, Google, and others.
With just a single program, users will be able to target a variety of hardware through Azure Quantum: Azure classical computing, quantum simulators and resource estimators, and quantum hardware from Microsoft's partners, as well as its future quantum system being built on a revolutionary topological qubit.

Microsoft announced on its official website that Azure Quantum will launch in private preview in the coming months. Many users are excited to try the service.

https://twitter.com/Daniel_Rubino/status/1191364279339036673

To learn more about Azure Quantum in detail, visit Microsoft's official page.

Are we entering the quantum computing era? Google's Sycamore achieves ‘quantum supremacy’ while IBM refutes the claim
Using Qiskit with IBM QX to generate quantum circuits [Tutorial]
How to translate OpenQASM programs in IBM QX into quantum scores [Tutorial]
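What a "quantum simulator" backend actually computes can be illustrated with a minimal state-vector simulation of a single qubit: apply a Hadamard gate to |0⟩ and the measurement probabilities become 50/50. This is not Azure Quantum's API or Q#, just a dependency-free sketch of the underlying linear algebra that simulator services perform at much larger scale.

```python
import math

def apply_hadamard(state):
    """Apply the Hadamard gate to a single-qubit state (a, b) = amplitudes of |0>, |1>."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

state = (1.0, 0.0)               # the qubit starts in |0>
state = apply_hadamard(state)    # put it into an equal superposition

# Measurement probabilities are the squared magnitudes of the amplitudes.
probs = tuple(abs(amp) ** 2 for amp in state)
print(probs)  # both entries are 0.5 (up to floating point): equal chance of 0 or 1
```

Real simulators track 2^n complex amplitudes for n qubits, which is exactly why hardware access to machines from partners like IonQ and Honeywell matters once circuits grow beyond what classical memory can hold.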

Introducing Spleeter, a Tensorflow based python library that extracts voice and sound from any music track

Sugandha Lahoti
05 Nov 2019
2 min read
On Monday, Deezer, the French online music streaming service, released Spleeter, a music source separation engine that comes as a Python library based on TensorFlow. Explaining the motivation behind Spleeter, the researchers state, “We release Spleeter to help the Music Information Retrieval (MIR) community leverage the power of source separation in various MIR tasks, such as vocal lyrics analysis from audio, music transcription, any type of multilabel classification or vocal melody extraction.”

Spleeter comes with pre-trained models for 2-, 4-, and 5-stem separation:

Vocals (singing voice) / accompaniment separation (2 stems)
Vocals / drums / bass / other separation (4 stems)
Vocals / drums / bass / piano / other separation (5 stems)

It can also train source separation models, or fine-tune pre-trained ones with TensorFlow, if you have a dataset of isolated sources. Deezer benchmarked Spleeter against Open-Unmix, another recently released open-source model, and reported slightly better performance at higher speed: Spleeter can separate audio files into 4 stems 100x faster than real time when running on a GPU.

You can use Spleeter straight from the command line as well as directly in your own development pipeline as a Python library. It can be installed with Conda or pip, or used with Docker.

Spleeter's creators mention a number of potential applications for a source separation engine, including remixes, upmixing, active listening, educational purposes, and pre-processing for other tasks such as transcription.

Spleeter received mostly positive feedback on Twitter as people experimented with separating vocals from music.

https://twitter.com/lokijota/status/1191580903518228480
https://twitter.com/bertboerland/status/1191110395370586113
https://twitter.com/CholericCleric/status/1190822694469734401

Wavy.org also ran several songs through the two-stem filter and evaluated them in a blog post, trying a variety of soundtracks across multiple genres.
The audio quality was much better than expected, although vocals sometimes sounded robotically autotuned. The amount of bleed was shockingly low relative to other solutions, surpassing any available free tool as well as rival commercial plugins and services.

https://twitter.com/waxpancake/status/1191435104788238336

Spleeter will be presented and demoed live at the 2019 ISMIR conference in Delft. For more details, refer to the official announcement.

DeepMind AI's AlphaStar achieves Grandmaster level in StarCraft II with 99.8% efficiency
Google AI introduces Snap, a microkernel approach to ‘Host Networking’
Firefox 70 released with better security, CSS, and JavaScript improvements
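As a sketch of the command-line usage, the release-time documentation shows an invocation of the form `spleeter separate -i <input> -p spleeter:2stems -o <outdir>`. The helper below only builds that argument vector, so it runs without Spleeter installed; the flags reflect the announcement-era docs and may differ in later releases, and the file names are placeholders. Passing the list to `subprocess.run` would perform the actual separation.

```python
import shlex
import subprocess  # only needed if you actually run the command

def spleeter_cmd(input_path, model="spleeter:2stems", outdir="output"):
    """Build the Spleeter CLI invocation for the given pre-trained model."""
    return ["spleeter", "separate", "-i", input_path, "-p", model, "-o", outdir]

cmd = spleeter_cmd("song.mp3")
print(shlex.join(cmd))  # spleeter separate -i song.mp3 -p spleeter:2stems -o output

# With Spleeter installed (pip install spleeter), uncomment to separate
# song.mp3 into output/song/vocals.wav and output/song/accompaniment.wav:
# subprocess.run(cmd, check=True)
```

Swapping the `-p` argument for `spleeter:4stems` or `spleeter:5stems` selects the other pre-trained models listed above.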


Google releases patches for two high-level security vulnerabilities in Chrome, one of which is still being exploited in the wild

Vincy Davis
04 Nov 2019
3 min read
Last week, Google notified users that the stable-channel desktop Chrome browser is being updated to version 78.0.3904.87 for Windows, Mac, and Linux, rolling out over the coming weeks. The update comes after external researchers found two high-severity vulnerabilities in the Chrome web browser.

The first, a zero-day vulnerability assigned CVE-2019-13720, was found by malware researchers Anton Ivanov and Alexey Kulaev of Kaspersky, a private internet security solutions company. It affects Chrome's audio component, and Google has confirmed that an exploit for it “exists in the wild.” The other vulnerability, CVE-2019-13721, was found by banananapenguin and is present in Chrome's PDFium library. No exploitation of this vulnerability has been reported so far.

Google has not revealed the technical details of either vulnerability: “Access to bug details and links may be kept restricted until a majority of users are updated with a fix. We will also retain restrictions if the bug exists in a third party library that other projects similarly depend on, but haven't yet fixed.”

Both are use-after-free vulnerabilities, a type of memory flaw that hackers can leverage to execute arbitrary code.

The Kaspersky researchers have named the campaign exploiting CVE-2019-13720 Operation WizardOpium, as they have not been able to establish a definitive link with any known threat actors. According to Kaspersky, the attack uses a watering-hole-style injection on a Korean-language news portal: malicious JavaScript code inserted on the main page loads a profiling script from a remote site. The main index page hosts a small JavaScript tag that loads the remote script, which checks whether the victim's system can be infected by comparing against the browser's user agent.
The Kaspersky researchers say, “The exploit used a race condition bug between two threads due to missing proper synchronization between them. It gives an attacker a Use-After-Free (UaF) condition that is very dangerous because it can lead to code execution scenarios, which is exactly what happens in our case.” An attacker can use this vulnerability to perform numerous memory allocation and free operations that, combined with other techniques, eventually yield an arbitrary read/write primitive. The attackers use this to create a “special object that can be used with WebAssembly and FileReader together to perform code execution for the embedded shellcode payload.”

You can read Kaspersky's detailed report for more information on the zero-day vulnerability.

Adobe confirms security vulnerability in one of their Elasticsearch servers that exposed 7.5 million Creative Cloud accounts
Mobile-aware phishing campaign targets UNICEF, the UN, and many other humanitarian organizations
NordVPN reveals it was affected by a data breach in 2018
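Why a use-after-free is so dangerous is easiest to see with a toy allocator: free a slot but keep the old handle, and the handle silently aliases whatever the allocator places in that slot next. This is a conceptual Python model with invented names, not the Chrome exploit itself; the real attack involves raw pointers and careful heap grooming, as in the Kaspersky write-up above.

```python
class ToyHeap:
    """A minimal allocator that recycles freed slots, like a real heap."""

    def __init__(self):
        self.slots = []
        self.free_list = []

    def alloc(self, value):
        if self.free_list:                  # reuse a freed slot first,
            idx = self.free_list.pop()      # just as real allocators do
            self.slots[idx] = value
            return idx
        self.slots.append(value)
        return len(self.slots) - 1

    def free(self, idx):
        self.free_list.append(idx)          # slot is now up for reuse

    def read(self, idx):
        return self.slots[idx]

heap = ToyHeap()
handle = heap.alloc("victim object")
heap.free(handle)                           # object freed, but the stale
                                            # handle is kept: the UaF bug
heap.alloc("attacker-controlled data")      # new allocation reuses the slot
print(heap.read(handle))  # attacker-controlled data
```

The stale read through `handle` now returns attacker-chosen content. In a browser, the equivalent dangling pointer lets an attacker place a crafted object where a freed one lived, which is the stepping stone to the read/write primitive described above.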