
How-To Tutorials - News


Amazon re:Invent 2019 Day One: AWS launches Braket, its new quantum service and releases SageMaker Operators for Kubernetes

Sugandha Lahoti
03 Dec 2019
6 min read
At day one of the ongoing Amazon re:Invent 2019, there was a flurry of announcements for AWS. Most importantly, AWS announced the preview launch of Braket, its own quantum computing service, following the likes of IBM, Microsoft, and Google. Amazon also released Amazon SageMaker Operators for Kubernetes to help data scientists who use Kubernetes to train, tune, and deploy machine learning models in Amazon SageMaker.

re:Invent is Amazon's flagship conference, hosted by Amazon Web Services for the global cloud computing community. This year re:Invent is taking place in Las Vegas, December 2-6, 2019.

re:Invent 2019 Day One announcements

Braket: AWS' new quantum service, in preview now

Amazon Braket (named after the common notation for quantum states) is a fully managed service that helps you get started with quantum computing. Braket provides a full development environment that helps data scientists design quantum algorithms from scratch or choose from a set of pre-built algorithms, test these algorithms on simulated quantum computers, and run them on their choice of quantum hardware technologies, including gate-based and quantum annealing superconductors and ion trap hardware (from providers such as D-Wave, IonQ, and Rigetti).

Once your tests are complete, you will be automatically notified and your results will be stored in Amazon S3. Amazon Braket publishes event logs and performance metrics such as completion status and execution time to Amazon CloudWatch. To make it easier to develop hybrid algorithms that combine classical and quantum tasks, Amazon Braket helps manage classical compute resources and establish low-latency connections to the quantum hardware.

At re:Invent 2019, AWS also launched the Amazon Quantum Solutions Lab, a collaborative research program that connects you with quantum computing experts from Amazon and its technology and consulting partners. They can help you identify potential uses of quantum computing, build internal expertise, and collaborate on programs to design and test quantum algorithms. Braket is available in preview now.

Amazon SageMaker Operators for Kubernetes

Developers and data scientists can now use Kubernetes to train, tune, and deploy machine learning models in Amazon SageMaker, with the new Amazon SageMaker Operators for Kubernetes. Customers can install these operators on their Kubernetes cluster to create Amazon SageMaker jobs natively using the Kubernetes API and command-line Kubernetes tools such as 'kubectl'. The operators can be used to train machine learning models, optimize hyperparameters for a given model, run batch transform jobs over existing models, and set up inference endpoints. With these operators, users can manage their jobs in Amazon SageMaker from their Kubernetes cluster in Amazon Elastic Kubernetes Service (EKS). Amazon SageMaker Operators for Kubernetes are available in select AWS regions.

AWS DeepComposer, a creative way to learn machine learning

Amazon launched AWS DeepComposer, the world's first machine learning-enabled musical keyboard, at re:Invent 2019. AWS DeepComposer is an educational tool that gives developers of all skill levels a creative way to experience machine learning: music.

https://youtu.be/XH2EbK9dQlg

You can input a melody by connecting the AWS DeepComposer keyboard to your computer, or play the virtual keyboard in the AWS DeepComposer console. You can generate an original music composition using the pre-trained genre models in the console.
You can then publish your tracks to SoundCloud. DeepComposer is designed specifically to educate developers by means of tutorials, sample code, and training data, which can be used to get started with building generative AI models without having to write a single line of code. With AWS DeepComposer, you can train and optimize GAN models to create original music. GAN models pit two different neural networks against each other to produce new and original digital works based on sample inputs. AWS DeepComposer is available in preview now.

Amazon Transcribe now extended to healthcare

Amazon's automatic speech recognition service, Amazon Transcribe, is now available for medical speech, as announced at re:Invent 2019. Amazon Transcribe Medical allows physicians to easily and quickly dictate their clinical notes and see their speech converted to accurate text in real time, without any human intervention. Clinicians can use natural speech and do not have to explicitly call out punctuation like "comma" or "full stop". This text can then be automatically fed to downstream applications such as EHR systems, or to AWS language services such as Amazon Comprehend Medical for entity extraction.

To make it work, you capture audio using your device's microphone and send PCM (pulse-code modulation) audio to a streaming API based on the WebSocket protocol. The API responds with a series of JSON blobs containing the transcribed text, as well as word-level timestamps, punctuation, and so on. Optionally, you can save this data to an Amazon Simple Storage Service (S3) bucket. Amazon Transcribe Medical is available in the US East (N. Virginia) and US West (Oregon) regions.

Updates to Microsoft Windows Server

AWS has released a bring-your-own-license (BYOL) experience as an easier way for customers to bring, and manage, their existing licenses for Microsoft Windows Server and SQL Server on AWS. The new BYOL experience enables customers who want to use their existing Windows Server or SQL Server licenses to seamlessly create virtual machines in EC2, while AWS takes care of managing their licenses to help ensure compliance with the licensing rules specified by the customer.

Amazon is also providing the End-of-Support Migration Program (EMP) for Windows Server. On January 14, 2020, support for Windows Server 2008 and 2008 R2 will end. Having an application that can run only on an unsupported version of Windows Server is problematic, as you will no longer get free security patch updates, leaving you vulnerable to security and compliance risks. This new program combines technology with expert guidance to migrate legacy applications running on outdated versions of Windows Server to newer, supported versions on AWS.

Other updates announced at Amazon re:Invent 2019

Amazon EventBridge Schema Registry is now in preview. The schema registry stores the structure (schema) of Amazon EventBridge events and maps them to Java, Python, and TypeScript bindings so that you can use the events as typed objects.

The existing AWS IoT SiteWise preview adds new features such as creating a virtual representation of your facility, monitoring production performance metrics, and using AWS IoT SiteWise Monitor to visualize the data in real time. AWS IoT SiteWise Monitor is a new SaaS application that lets you monitor and interact with the data collected and organized by AWS IoT SiteWise.

The upcoming AWS DeepRacer Evo car will include a stereo camera and a Light Detection and Ranging (LIDAR) sensor. The DeepRacer League in 2020 will have 8 additional races in 5 countries.

EC2 Image Builder, a service that makes it easier and faster to build and maintain secure OS images for Windows Server and Amazon Linux 2 using automated build pipelines, is now in preview.

Amazon re:Invent will continue throughout this week (the last day is December 6). You can access the livestream here. Keep checking this space for news on other updates and launches.

Amazon EKS Windows Container Support is now generally available
Amazon's hardware event 2019 highlights: a high-end Echo Studio, the new Echo Show 8, and more
10 key announcements from Microsoft Ignite 2019 you should know about

Julia Computing research team runs machine learning model on encrypted data without decrypting it

Fatema Patrawala
28 Nov 2019
5 min read
Last week, the team at Julia Computing published research based on cutting-edge cryptographic techniques for practically performing computation on data without ever decrypting it. For example, a user would send encrypted data (e.g. images) to a cloud API, which would run the machine learning model and then return the encrypted answer. At no point is the user data decrypted; in particular, the cloud provider has no access to the original image, nor can it decrypt the prediction it computed. The team demonstrated this by building a machine learning service for handwriting recognition of encrypted images (from the MNIST dataset).

The ability to compute on encrypted data is generally referred to as "secure computation" and is a fairly large area of research, with many different cryptographic approaches and techniques for a plethora of application scenarios. For their research, the Julia team focused on a technique known as "homomorphic encryption".

What is homomorphic encryption

Homomorphic encryption is a form of encryption that allows computation on ciphertexts, generating an encrypted result which, when decrypted, matches the result of the operations as if they had been performed on the plaintext. This technique can be used for privacy-preserving outsourced storage and computation. It allows data to be encrypted and outsourced to commercial cloud environments for processing, all while encrypted. In highly regulated industries, such as health care, homomorphic encryption can be used to enable new services by removing privacy barriers inhibiting data sharing.

In this research, the Julia Computing team used a homomorphic encryption system that involves the following operations (a toy sketch of this interface appears at the end of this article):

pub_key, eval_key, priv_key = keygen()
encrypted = encrypt(pub_key, plaintext)
decrypted = decrypt(priv_key, encrypted)
encrypted′ = eval(eval_key, f, encrypted)

The first three are fairly straightforward and familiar to anyone who has used asymmetric cryptography before. The last one is the important one: it evaluates some function f on the encrypted value and returns another encrypted value corresponding to the result of evaluating f on the plaintext. It is this property that gives homomorphic computation its name.

The Julia Computing team then describes CKKS (Cheon-Kim-Kim-Song), a homomorphic encryption scheme that allows homomorphic evaluation of the following primitive operations:

Element-wise addition of length n vectors of complex numbers
Element-wise multiplication of length n complex vectors
Rotation (in the circshift sense) of elements in the vector
Complex conjugation of vector elements

They also note that computations using CKKS are noisy, so they tested performing these operations in Julia.

Which convolutional neural network did the Julia Computing team use

As a starting point, the Julia Computing team used the convolutional neural network example given in the Flux model zoo. They kept the training loop, prepared the data, and tweaked the ML model slightly. It is essentially the same model as the one used in the paper "Secure Outsourced Matrix Computation and Application to Neural Networks", which uses the same (CKKS) cryptographic scheme. That paper also encrypts the model, which the Julia team omitted for simplicity, and they included bias vectors after every layer (which Flux does by default). This resulted in a slightly higher test set accuracy for the Julia team's model (98.6% vs 98.1%).

An unusual feature of this model is the x.^2 activation function. More common choices would have been tanh, relu, or something more advanced. While those functions (relu in particular) are cheap to evaluate on plaintext values, they would be quite expensive to evaluate on encrypted values; had the team adopted those common choices, they would have ended up evaluating a polynomial approximation instead. Fortunately, x.^2 worked fine for their purpose.

How was the homomorphic operation carried out

The team performed homomorphic operations on convolutions and matrix multiplication, assuming a batch size of 64. They precomputed each 7x7 convolution window extracted from the original images, which gave them 64 7x7 matrices per input image. Then they collected the same position in each window into one vector, getting a 64-element vector for each image (i.e. a total of 49 64x64 matrices), and encrypted these matrices. In this way the convolution becomes a scalar multiplication of the whole matrix with the appropriate mask element, and by summing all 49 elements later, the team got the result of the convolution.

For matrix multiplication, the team rotated elements in the vector to effect a re-ordering of the multiplication indices. With a row-major ordering of matrix elements in the vector, shifting the vector by a multiple of the row size rotates the columns, which is a sufficient primitive for implementing matrix multiplication. The team was able to put everything together and it worked. You can take a look at the official blog post for the step-by-step implementation with code. They also implemented the whole encryption process in Julia, since its powerful abstractions let them encapsulate the convolution extraction process as a custom array type.

The Julia Computing team states, "Achieving the dream of automatically executing arbitrary computations securely is a tall order for any system, but Julia's metaprogramming capabilities and friendly syntax make it well suited as a development platform."

Julia co-creator, Jeff Bezanson, on what's wrong with Julialang and how to tackle issues like modularity and extension
Julia v1.3 released with new multithreading features, and much more!
The Julia team shares its finalized release process with the community
Julia announces the preview of multi-threaded task parallelism in alpha release v1.3.0
How to make machine learning based recommendations using Julia [Tutorial]
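The keygen/encrypt/decrypt/eval interface described earlier in this article is scheme-agnostic. As a minimal, language-agnostic illustration of the homomorphic property (and nothing more), the sketch below uses textbook, unpadded RSA, which happens to be multiplicatively homomorphic. It is not the CKKS scheme Julia Computing used, the key sizes are toy-sized, and the function names are purely illustrative; it is insecure by design and only demonstrates the shape of the interface.

```python
# Toy illustration of the homomorphic property: evaluate a function on
# ciphertexts so that decrypting the result gives f applied to the plaintexts.
# Textbook (unpadded) RSA is multiplicatively homomorphic; this is NOT CKKS
# and NOT secure -- it only mirrors the keygen/encrypt/eval/decrypt interface.

def keygen():
    # Tiny fixed primes purely for illustration; real schemes use large random keys.
    p, q = 61, 53
    n = p * q                      # public modulus
    phi = (p - 1) * (q - 1)
    e = 17                         # public exponent
    d = pow(e, -1, phi)            # private exponent (requires Python 3.8+)
    pub_key, priv_key = (n, e), (n, d)
    eval_key = n                   # evaluation here only needs the modulus
    return pub_key, eval_key, priv_key

def encrypt(pub_key, m):
    n, e = pub_key
    return pow(m, e, n)

def decrypt(priv_key, c):
    n, d = priv_key
    return pow(c, d, n)

def eval_mul(eval_key, c1, c2):
    # Multiplying ciphertexts multiplies the underlying plaintexts (mod n).
    return (c1 * c2) % eval_key

pub, ev, priv = keygen()
c1, c2 = encrypt(pub, 7), encrypt(pub, 6)
product_ct = eval_mul(ev, c1, c2)
assert decrypt(priv, product_ct) == 7 * 6   # 42, computed without decrypting the inputs
```

CKKS supports richer primitives (vector addition, multiplication, rotation, conjugation) and introduces noise, which is why the Julia team had to validate precision; the toy above simply shows why "eval on ciphertexts" is possible at all.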

10 key announcements from Microsoft Ignite 2019 you should know about

Sugandha Lahoti
26 Nov 2019
7 min read
This year's Microsoft Ignite was jam-packed with new releases and upgrades across Microsoft's line of products and services. The company elaborated on its growing focus on addressing customers' needs to help them do business in smarter, more productive, and more efficient ways. Most of the products were AI-based, and Microsoft reiterated its commitment to security and privacy. Microsoft Ignite 2019 took place on November 4-8, 2019 in Orlando, Florida and was attended by 26,000 IT implementers and decision-makers, developers, data professionals, and people from various industries. There were a total of 175 separate announcements made! We have tried to cover the top 10 here.

Microsoft's Visual Studio IDE is now available on the web

The web-based version of Microsoft's Visual Studio IDE is now available to all developers. Called Visual Studio Online, this IDE allows developers to spin up a fully configured development environment for their repositories and use the web-based editor to work on their code. Visual Studio Online is deeply integrated with GitHub (also owned by Microsoft), although developers can also attach their own physical and virtual machines to their Visual Studio-based environments. Visual Studio Online's cloud-hosted environments, as well as extended support for Visual Studio Code and the web UI, are now available in preview. Support for Visual Studio 2019 is in private preview, which you can sign up for through the Visual Studio Online web portal.

Project Cortex will classify all content in a single network

Project Cortex is a new service in Microsoft 365 meant to maintain the everyday flow of work in enterprises. Project Cortex collates enterprise-generated documents and data, which are often spread across numerous repositories. It uses AI and machine learning to automatically classify all your content into topics to form a knowledge network. Cortex improves individual productivity and organizational intelligence and can be used across Microsoft 365, such as in the Office apps, Outlook, and Microsoft Teams. Project Cortex is now in private preview and will be generally available in the first half of 2020.

Single-view device management with 'Microsoft Endpoint Manager'

Microsoft has combined its Configuration Manager with Intune, its cloud-based endpoint management system, to form what it calls Microsoft Endpoint Manager. ConfigMgr allows enterprises to manage the PCs, laptops, phones, and tablets they issue to their employees, while Intune is used for cloud-based management of phones. Endpoint Manager will provide unique co-management options for organizations to provision, deploy, manage, and secure endpoints and applications across their organization. Touted by Satya Nadella as the most important release of the event, this solution will give enterprises a single view of their deployments. ConfigMgr users will now also get a license to Intune to allow them to move to cloud-based management.

No-code bot builder 'Microsoft Power Virtual Agents' is available in public preview

Built on the Azure Bot Framework, Microsoft Power Virtual Agents is a low-code and no-code bot-building solution now available in public preview. Power Virtual Agents enables people with little to no developer experience to create and deploy intelligent virtual agents. The solution also includes Azure Machine Learning to help users create and improve conversational agents for personalized customer service. Power Virtual Agents will be generally available December 1.

Microsoft's Chromium-based version of Edge is now more privacy-focused

At Microsoft Ignite, the company announced the release candidate of its Chromium-based Edge browser, with general availability set for January 15. InPrivate search will be available for Microsoft Edge and Microsoft Bing to keep online searches and identities private, giving users more control over their data. When searching InPrivate, search history and personally identifiable data will not be saved nor associated back to you; users' identities and search histories are completely private. There will also be a new security baseline for the all-new Microsoft Edge. Security baselines are pre-configured groups of security settings and default values that are recommended by the relevant security teams. The next version of Microsoft Edge will feature a new icon symbolizing the major changes in Microsoft Edge, built on the Chromium open source project. It will appear in an Easter egg hunt designed to reward the Insider community.

ML.NET 1.4 is now generally available

ML.NET 1.4, Microsoft's open-source machine learning framework, is now generally available. The latest release adds image classification training with the ML.NET API, as well as a relational database loader API for reading data used for training models with ML.NET. ML.NET also includes Model Builder (an easy-to-use UI tool in Visual Studio) and a command-line interface to make it easy to build custom machine learning models using AutoML. This release also adds a new preview of the Visual Studio Model Builder extension that supports image classification training from a graphical user interface. A preview of Jupyter support for writing C# and F# code for ML.NET scenarios is also available.

Azure Arc extends Azure services across multiple infrastructures

One of the most important announcements of Microsoft Ignite 2019 was Azure Arc. This new service enables Azure services anywhere and extends Azure management to any infrastructure, including those of competitors like AWS and Google Cloud. With Azure Arc, customers can use Azure's cloud management experience for their own servers (Linux and Windows Server) and Kubernetes clusters by extending Azure management across environments. Enterprises can also manage and govern resources at scale with powerful scripting, tools, the Azure Portal and API, and Azure Lighthouse.

Announcing Azure Synapse Analytics

Azure Synapse Analytics builds upon Microsoft's previous offering, Azure SQL Data Warehouse. This analytics service combines traditional data warehousing with big data analytics, bringing serverless on-demand or provisioned resources at scale. Using Azure Synapse Analytics, customers can ingest, prepare, manage, and serve data for immediate BI and machine learning applications within the same service.

Safely share your big data with Azure Data Share, now generally available

As the name suggests, Azure Data Share allows you to safely share your big data with other organizations. Organizations can share data stored in their data lakes with third-party organizations outside their Azure tenancy. Data providers wanting to share data with their customers or partners can easily create a new share, populate it with data residing in a variety of stores, and add recipients. It employs Azure security measures such as access controls, authentication, and encryption to protect your data. Azure Data Share supports sharing from SQL Data Warehouse and SQL DB, in addition to Blob and ADLS (for snapshot-based sharing). It also supports in-place sharing for Azure Data Explorer (in preview).

Azure Quantum to be made available in private preview

Microsoft has been working on quantum computing for some time now. At Ignite, Microsoft announced that it will be launching Azure Quantum in private preview in the coming months. Azure Quantum is a full-stack, open cloud ecosystem that will bring quantum computing to developers and organizations. Azure Quantum will assemble quantum solutions, software, and hardware across the industry in a single, familiar experience in Azure. Through Azure Quantum, you can learn quantum computing through a series of tools and learning tutorials, like the quantum katas. Developers can also write programs with Q# and the QDK.

Microsoft Ignite 2019 organizers have released an 88-page document detailing all 175 announcements, which you can access here. You can also view the conference keynote delivered by Satya Nadella on YouTube, as well as Microsoft Ignite's official blog.

Facebook mandates Visual Studio Code as default development environment and partners with Microsoft for remote development extensions
Exploring .Net Core 3.0 components with Mark J. Price, a Microsoft specialist
Yubico reveals Biometric YubiKey at Microsoft Ignite
Microsoft announces .NET Jupyter Notebooks

Facebook releases PyTorch 1.3 with named tensors, PyTorch Mobile, 8-bit model quantization, and more

Bhagyashree R
11 Oct 2019
5 min read
Yesterday, at the PyTorch Developer Conference, Facebook announced the release of PyTorch 1.3. This release comes with three experimental features: named tensors, 8-bit model quantization, and PyTorch Mobile. Along with these, Facebook also announced the general availability of Google Cloud TPU support and a newly launched integration with Alibaba Cloud.

Key updates in PyTorch 1.3

Named tensors for more readable and maintainable code

Though tensors are the building blocks of modern machine learning, researchers have argued that they are "broken." Tensors have their share of shortcomings: they expose private dimensions, broadcast based on absolute position, and keep type information in the documentation. PyTorch 1.3 tries to solve this problem by introducing experimental support for named tensors, which were proposed by Sasha Rush, an Associate Professor at Cornell Tech. He has built a library called NamedTensor, which serves as a "thin wrapper" on the Torch tensor. This update introduces a few changes to the API: dimension access and reduction now use a 'dim' argument instead of an index, constructing and adding dimensions requires a "name" argument, and functions now broadcast based on set operations, not through heuristic ordering rules.

8-bit model quantization for mobile-optimized AI

Quantization in deep learning is the method of approximating a neural network that uses 32-bit floating-point numbers with one that uses a lower-precision numerical format. It is used to reduce the bandwidth and compute requirements of deep learning models, which is essential for on-device applications with limited memory and compute. PyTorch 1.3 brings experimental support for 8-bit model quantization with the eager mode Python API for efficient deployment on servers and edge devices. This feature includes techniques like post-training quantization, dynamic quantization, and quantization-aware training. Moving from 32 bits to 8 bits can result in two to four times faster computation with one-quarter the memory usage. (A short sketch of the named tensor and dynamic quantization APIs appears at the end of this article.)

PyTorch Mobile for more efficient on-device machine learning

Running machine learning models directly on edge devices is of great importance as it reduces latency. This is why PyTorch 1.3 introduces PyTorch Mobile, which enables "an end-to-end workflow from Python to deployment on iOS and Android." The current release is experimental. In future releases, we can expect PyTorch Mobile to come with build-level optimization, selective compilation, support for QNNPACK quantized kernel libraries and ARM CPUs, further performance improvements, and more.

Model interpretability and privacy tools in PyTorch 1.3

Captum and Captum Insights

Captum is an easy-to-use model interpretability library for PyTorch. It is backed by state-of-the-art interpretability algorithms such as Integrated Gradients, DeepLIFT, and Conductance to help developers improve and troubleshoot their models. Developers can identify the features that contribute to a model's output and improve its design. Facebook has also released an early version of Captum Insights, an interpretability visualization widget built on top of Captum. It works across images, text, and other features to help users understand feature attribution. Check out Facebook's announcement to know more about Captum.

CrypTen

Machine learning via cloud-based platforms poses various security and privacy challenges. Facebook writes, "In particular, users of these platforms may not want or be able to share unencrypted data, which prevents them from taking full advantage of ML tools." PyTorch 1.3 comes with CrypTen, a framework for privacy-preserving machine learning. It aims to make secure computing techniques accessible to machine learning practitioners. You can find more about CrypTen on GitHub.

Libraries for multimodal AI systems

Detectron2: an object detection library implemented in PyTorch. It features support for the latest models and tasks and increased flexibility to aid computer vision research, along with improvements in maintainability and scalability to support production use cases.

Fairseq gets speech extensions: with this release, Fairseq, a framework for sequence-to-sequence applications such as language translation, includes support for end-to-end learning for speech and audio recognition tasks.

The release of PyTorch 1.3 started a discussion on Hacker News, and naturally many developers compared it with TensorFlow 2.0. Here's what one user commented: "This is a common trend for being second in the market when we see Pytorch and TensorFlow 2.0, TF 2.0 was created to compete directly with Pytorch pythonic implementation (Keras based, Eager execution)." They further added, "Facebook at least on PyTorch has been delivering a quality product. Although for us running production pipelines TF is still ahead in many areas (GPU, TPU implementation, TensorRT, TFX and other pipeline tools) I can see Pytorch catching up on the next couple of years which by my prediction many companies will be running serious and advanced workflows and we may be able to see a winner there."

The named tensors implementation is being well received by the PyTorch community:

https://twitter.com/leopd/status/1182342855886376965
https://twitter.com/rasbt/status/1182647527906140161

These were some of the updates in PyTorch 1.3. Check out the official announcement by Facebook to know more.

PyTorch 1.2 is here with a new TorchScript API, expanded ONNX export, and more
PyTorch announces the availability of PyTorch Hub for improving machine learning research reproducibility
Sherin Thomas explains how to build a pipeline in PyTorch for deep learning workflows
Facebook AI open-sources PyTorch-BigGraph for faster embeddings in large graphs
Facebook open-sources PyText, a PyTorch based NLP modeling framework
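As promised above, here is a short sketch of how the two experimental features, named tensors and dynamic 8-bit quantization, are exercised in PyTorch 1.3. Treat the exact calls as illustrative rather than authoritative; the APIs were experimental at the time and the official release notes remain the reference.

```python
# Hedged sketch of two experimental PyTorch 1.3 features described above.
import torch
import torch.nn as nn

# --- Named tensors: dimensions carry names instead of bare positions ---
imgs = torch.randn(4, 3, 32, 32, names=('N', 'C', 'H', 'W'))
per_pixel = imgs.sum('C')          # reduce over the channel dimension by name
print(per_pixel.names)             # ('N', 'H', 'W')

# --- Dynamic (post-training) 8-bit quantization of Linear layers ---
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)
print(quantized)                    # Linear layers replaced by quantized versions
```

Named-dimension reductions remove the guesswork of positional indexing, while dynamic quantization swaps weights to int8 at load time, which is where the claimed two-to-four-times speedup and memory savings come from.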

TensorFlow 2.0 released with tighter Keras integration, eager execution enabled by default, and more!

Bhagyashree R
03 Oct 2019
5 min read
After releasing the beta version of TensorFlow 2.0 in June, Google announced its final release on Monday. This release comes with tighter integration with Keras, eager execution enabled by default, promises of three times faster training performance, a cleaned-up API, and more.

Key updates in TensorFlow 2.0

Tighter Keras integration for better developer productivity

One of the important updates in TensorFlow 2.0 is its tighter integration with Keras, a popular high-level API used for easy and fast prototyping, building, and training of deep learning models. This will enable developers to easily leverage its various model-building APIs, including Sequential, Functional, and Subclassing. Explaining the motivation behind this change, the TensorFlow team wrote, "By establishing Keras as the high-level API for TensorFlow, we are making it easier for developers new to machine learning to get started with TensorFlow. A single high-level API reduces confusion and enables us to focus on providing advanced capabilities for researchers."

Eager execution enabled by default

In TensorFlow 1.x, developers were required to define an abstract data structure named Graph, and to run this graph they needed an encapsulation called Session. TensorFlow 2.0 has eager execution enabled by default to "eagerly" run code, similar to normal Python code. Eager execution enables fast iteration and intuitive debugging without building a graph. It also makes creating and experimenting with models using TensorFlow much easier, and it can be especially useful when using the tf.keras model subclassing API.

Also Read: Keras 2.3.0, the first release of multi-backend Keras with TensorFlow 2.0 support is now out

Distribution Strategy API

The Distribution Strategy API in TensorFlow 2.0 allows machine learning researchers to distribute training across a wide variety of compute configurations. This will allow them to "attain great out-of-the-box performance" with minimal code changes. This release also allows distributed training with Keras' model.fit and custom training loops.

Performance improvements on GPUs

TensorFlow 2.0 includes multi-GPU support and experimental support for multi-worker and Cloud TPU configurations. This release also brings a number of performance improvements on GPUs: it promises three times faster training performance when using mixed precision on NVIDIA's Volta and Turing GPUs, and it includes tight integration with NVIDIA TensorRT, a platform for high-performance deep learning inference.

The standardized SavedModel file format

The SavedModel API allows you to save your trained ML model in a language-neutral format. With TensorFlow 2.0, all TensorFlow ecosystem projects, including TensorFlow Lite, TensorFlow.js, TensorFlow Serving, and TensorFlow Hub, support SavedModels. Standardizing the SavedModel file format will enable developers to run their models on a variety of runtimes, including the cloud, web, browser, Node.js, mobile, and embedded systems. "This allows you to run your models with TensorFlow, deploy them with TensorFlow Serving, use them on mobile and embedded systems with TensorFlow Lite, and train and run in the browser or Node.js with TensorFlow.js," the team writes.

API simplification

TensorFlow 2.0 includes a number of API updates. Many API symbols are removed or renamed for better consistency and clarity. Also, the tf.app, tf.flags, and tf.logging APIs are removed in favor of abseil-py.
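The workflow these updates add up to can be sketched in a few lines. The following is a minimal, illustrative TensorFlow 2.0-style example (the tiny random dataset and layer sizes are placeholders chosen purely for brevity): eager execution by default, tf.keras as the model-building API, and export to the standardized SavedModel format.

```python
# Minimal TF 2.0-style sketch of the workflow described above (illustrative only).
import numpy as np
import tensorflow as tf

# Eager execution: operations run immediately, like ordinary Python code.
x = tf.constant([[1.0, 2.0]])
print(tf.matmul(x, tf.transpose(x)))       # evaluated eagerly, no Session needed

# tf.keras as the recommended high-level model-building API.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(4,)),
    tf.keras.layers.Dense(3, activation='softmax'),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
model.fit(np.random.rand(64, 4), np.random.randint(0, 3, 64), epochs=1, verbose=0)

# Export to the standardized SavedModel format for Serving, Lite, TF.js, etc.
tf.saved_model.save(model, 'my_model')     # writes a SavedModel directory
```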
Because of the huge number of API changes, developers in a discussion on Hacker News expressed that transitioning from TensorFlow 1.x to TensorFlow 2.0 is quite complicated, and some mentioned switching to PyTorch instead. A user commented, "As someone who uses TensorFlow a lot, I predict an enormous clusterfuck of a transition. Tensorflow has turned into a multiheaded monster, supporting many things and approaches but none of them very well...In my opinion, there are some architectural problems with TF, which have not been addressed in this update...If you need to transition from TF1 to TF2, consider doing the TF1 to PyTorch transition instead."

Some others, however, were happy with the recommended Keras API and eager execution. "I don't know if I'm the only one, but I actually love the changes they've made since v1. Eager execution and tf.function are fantastic, and the built-in Keras is even better than the standalone version. A big improvement compared to TF from last year," a user commented on Reddit. Another user added, "The most important change in terms of usability, IMO, is the use of tf.keras as the recommended interface to TensorFlow. There hasn't been a case yet where I've needed to dip outside of Keras into raw TensorFlow, but the option is there and is easy to do. That said, TF 2.0 changes a lot. Many repos might break, so expect to see lots of tensorflow==1.14 in requirement.txt files from now on."

These were some of the updates in TensorFlow 2.0. Check out the official announcement and release notes to know more in detail.

Transformers 2.0: NLP library with deep interoperability between TensorFlow 2.0 and PyTorch, and 32+ pretrained models in 100+ languages
TensorFlow 2.0 to be released soon with eager execution, removal of redundant APIs, tf function and more
Introducing TensorFlow Graphics packed with TensorBoard 3D, object transformations, and much more
Train a convolutional neural network in Keras and improve it with data augmentation [Tutorial]

Transformers 2.0: NLP library with deep interoperability between TensorFlow 2.0 and PyTorch, and 32+ pretrained models in 100+ languages

Fatema Patrawala
30 Sep 2019
3 min read
Last week, Hugging Face, a startup specializing in natural language processing, released a landmark update to their popular Transformers library, offering unprecedented compatibility between two major deep learning frameworks, PyTorch and TensorFlow 2.0.

Transformers (formerly known as pytorch-transformers and pytorch-pretrained-bert) provides general-purpose architectures (BERT, GPT-2, RoBERTa, XLM, DistilBert, XLNet, and more) for Natural Language Understanding (NLU) and Natural Language Generation (NLG), with over 32 pretrained models in 100+ languages and deep interoperability between TensorFlow 2.0 and PyTorch.

Transformers 2.0 embraces the "best of both worlds", combining PyTorch's ease of use with TensorFlow's production-grade ecosystem. The new library makes it easier for scientists and practitioners to select different frameworks for the training, evaluation, and production phases of developing the same language model. "This is a lot deeper than what people usually think when they talk about compatibility," said Thomas Wolf, who leads Hugging Face's data science team. "It's not only about being able to use the library separately in PyTorch and TensorFlow. We're talking about being able to seamlessly move from one framework to the other dynamically during the life of the model."

https://twitter.com/Thom_Wolf/status/1177193003678601216

"It's the number one feature that companies asked for since the launch of the library last year," said Clement Delangue, CEO of Hugging Face.

Notable features in Transformers 2.0 (a short usage sketch follows at the end of this piece):

8 architectures with over 30 pretrained models, in more than 100 languages
Load a model and pre-process a dataset in less than 10 lines of code
Train a state-of-the-art language model in a single line with the tf.keras fit function
Share pretrained models, reducing compute costs and carbon footprint
Deep interoperability between TensorFlow 2.0 and PyTorch models
Move a single model between TF2.0/PyTorch frameworks at will
Seamlessly pick the right framework for training, evaluation, and production
As powerful and concise as Keras

About Hugging Face Transformers

With half a million installs since January 2019, Transformers is the most popular open-source NLP library. More than 1,000 companies, including Bing, Apple, and Stitchfix, are using it in production for text classification, question answering, intent detection, text generation, and conversational applications. Hugging Face, the creators of Transformers, have raised US$5M so far from investors in companies like Betaworks, Salesforce, Amazon, and Apple. On Hacker News, users are appreciating the company and how Transformers has become the most important library in NLP.

Other interesting news in data

Baidu open sources ERNIE 2.0, a continual pre-training NLP model that outperforms BERT and XLNet on 16 NLP tasks
Dr Joshua Eckroth on performing Sentiment Analysis on social media platforms using CoreNLP
Facebook open-sources PyText, a PyTorch based NLP modeling framework
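As referenced in the feature list above, the cross-framework workflow can be sketched roughly as follows. This is a hedged example in the Transformers 2.0-era style; exact signatures and default arguments have shifted between releases, and it assumes both PyTorch and TensorFlow 2.0 are installed so the same pretrained checkpoint can be loaded into each framework.

```python
# Hedged sketch of loading the same pretrained BERT weights in PyTorch and TF 2.0.
import torch
import tensorflow as tf
from transformers import BertTokenizer, BertModel, TFBertModel

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
ids = tokenizer.encode("Hello, Transformers 2.0!", add_special_tokens=True)

# Same pretrained weights, loaded as a PyTorch model...
pt_model = BertModel.from_pretrained('bert-base-uncased')
pt_hidden = pt_model(torch.tensor([ids]))[0]        # last hidden states (PyTorch)

# ...and as a TensorFlow 2.0 (tf.keras) model.
tf_model = TFBertModel.from_pretrained('bert-base-uncased')
tf_hidden = tf_model(tf.constant([ids]))[0]         # last hidden states (TF 2.0)

print(pt_hidden.shape, tf_hidden.shape)             # matching (1, seq_len, 768)
```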

Can a modified MIT ‘Hippocratic License’ to restrict misuse of open source software prompt a wave of ethical innovation in tech?

Savia Lobo
24 Sep 2019
5 min read
Open source licenses allow software to be freely distributed, modified, and used. These licenses also give developers the ability to let others use their software under the developers' own rules and conditions. Recently, software developer and open-source advocate Coraline Ada Ehmke caused a stir in the software engineering community with 'The Hippocratic License.' Ehmke is also the original author of Contributor Covenant, a "code of conduct" for open source projects that encourages participants to use inclusive language and to refrain from personal attacks and harassment. In a tweet posted in September last year about the code of conduct, she mentioned, "40,000 open source projects, including Linux, Rails, Golang, and everything OSS produced by Google, Microsoft, and Apple have adopted my code of conduct."

(The term 'Hippocratic' is derived from the Hippocratic Oath, the most widely known of the Greek medical texts. The Hippocratic Oath, in literal terms, requires a new physician to swear upon a number of healing gods that he will uphold a number of professional ethical standards.)

Ehmke explained the license in more detail in a post published on Sunday. In it, she argues that the idea of writing software with the goals of clarity, conciseness, readability, performance, and elegance is limiting, and potentially dangerous. "All of these technologies are inherently political," she writes. "There is no neutral political position in technology. You can't build systems that can be weaponized against marginalized people and take no responsibility for them." The concept of the Hippocratic License is relatively simple: in a tweet, Ehmke said that it "specifically prohibits the use of open-source software to harm others."

Open source software and the associated harm

Among the many freedoms that open source software allows, such as free redistribution of the software as well as the source code, the OSI also specifies that there must be no discrimination against who uses it or where it is put to use. A few days ago, software engineer Seth Vargo pulled his open-source software, Chef Sugar, offline after finding out that Chef (a popular open source DevOps company using the software) had recently signed a contract selling $95,000 worth of licenses to US Immigration and Customs Enforcement (ICE), which has faced widespread condemnation for separating children from their parents at the U.S. border and other abuses. Vargo took down the Chef Sugar library from both GitHub and RubyGems, the main Ruby package repository, as a sign of protest.

In May this year, Mijente, an advocacy organization, released documents stating that Palantir was responsible for the 2017 ICE operation that targeted and arrested family members of children crossing the border alone. In May 2018, Amazon employees, in a letter to Jeff Bezos, protested against the sale of its facial recognition tech to Palantir, saying they "refuse to contribute to tools that violate human rights" and citing the mistreatment of refugees and immigrants by ICE. And in July, WNYC revealed that Palantir's mobile app FALCON was being used by ICE to carry out raids on immigrant communities as well as enable workplace raids in New York City in 2017.

Founder of OSI responds to Ehmke's Hippocratic License

Bruce Perens, one of the founders of the Open Source movement in software, responded to Ehmke in a post titled "Sorry, Ms. Ehmke, The 'Hippocratic License' Can't Work". "The software may not be used by individuals, corporations, governments, or other groups for systems or activities that actively and knowingly endanger, harm, or otherwise threaten the physical, mental, economic, or general well-being of underprivileged individuals or groups," he highlights from the license in his post. "The terms are simply far more than could be enforced in a copyright license," he adds. "Nobody could enforce Ms. Ehmke's license without harming someone, or at least threatening to do so. And it would be easy to make a case for that person being underprivileged," he continued. He concluded by saying that, though he disagrees with the terms of Ehmke's license, he will "happily support Ms. Ehmke in pursuit of legal reforms meant to achieve the protection of underprivileged people."

Many have welcomed Ehmke's idea of an open source license with an ethical clause. However, the license is not OSI-approved yet, and chances are slim after Perens' response. There are also many users who do not agree with the license; reaching a consensus will be hard.

https://twitter.com/seannalexander/status/1175853429325008896
https://twitter.com/AdamFrisby/status/1175867432411336704
https://twitter.com/rishmishra/status/1175862512509685760

Even though developers host their source code on open source repositories, a license may bring a certain level of restriction on who is allowed to use the code. However, as Perens mentions, many of the terms in Ehmke's license would be hard to enforce. Irrespective of the outcome of this license's approval process, Coraline Ehmke has opened up the topic of long-overdue FOSS licensing reforms in the open source community. It would be interesting to see whether such a license could prompt ethical reform by giving developers more authority to embed their values and prevent the misuse of their software.

Read the Hippocratic License to know more in detail.

Other interesting news in Tech

ImageNet Roulette: New viral app trained using ImageNet exposes racial biases in artificial intelligent system
Machine learning ethics: what you need to know and what you can do
Facebook suspends tens of thousands of apps amid an ongoing investigation into how apps use personal data

GitHub acquires Semmle to secure open-source supply chain; attains CVE Numbering Authority status

Savia Lobo
19 Sep 2019
5 min read
Yesterday, GitHub announced that it has acquired Semmle, a code analysis platform provider, and also that it is now a Common Vulnerabilities and Exposures (CVE) Numbering Authority.

https://twitter.com/github/status/1174371016497405953

The Semmle acquisition is part of GitHub's plan to secure the open-source supply chain, Nat Friedman explains in his blog post. Semmle provides a code analysis engine, named QL, which allows developers to write queries that identify code patterns in large codebases and search for vulnerabilities and their variants. Security researchers use Semmle to quickly find vulnerabilities in code with simple declarative queries. "Semmle is trusted by security teams at Uber, NASA, Microsoft, Google, and has helped find thousands of vulnerabilities in some of the largest codebases in the world, as well as over 100 CVEs in open source projects to date," Friedman writes.

Also Read: GitHub now supports two-factor authentication with security keys using the WebAuthn API

Semmle, which originally spun out of research at Oxford in 2006, announced a $21 million Series B investment led by Accel Partners last year. "In total, the company raised $31 million before this acquisition," TechCrunch reports. Shanku Niyogi, Senior Vice President of Product at GitHub, writes in his blog post, "An important measure of the success of Semmle's approach is the number of vulnerabilities that have been identified and disclosed through their technology. Today, over 100 CVEs in open source projects have been found using Semmle, including high-profile projects like Apache Struts, Apple's XNU, the Linux Kernel, Memcached, U-Boot, and VLC. No other code analysis tool has a similar success rate."

GitHub also announced that it has been approved as a CVE Numbering Authority for open source projects. GitHub will now be able to issue CVEs for security advisories opened on GitHub, allowing for even broader awareness across the industry. With the Semmle integration, every CVE-ID can be associated with a Semmle QL query, which can then be shared and tracked by the broader developer community. The CVE approval will make it easier for project maintainers to report security flaws directly from their repositories. GitHub can also assign CVE identifiers directly and post them to the CVE List and the National Vulnerability Database (NVD).

Earlier this year, GitHub acquired Dependabot to provide automatic security fixes natively within GitHub. With automatic security fixes, developers no longer need to manually patch their dependencies: when a vulnerability is found in a dependency, GitHub automatically issues a pull request on downstream repositories with the information needed to accept the patch.

In August, GitHub was in the limelight for being a part of the Capital One data breach that affected 106 million users in the US and Canada. The law firm Tycko & Zavareei LLP filed a lawsuit in California's federal district court on behalf of their plaintiffs Seth Zielicke and Aimee Aballo.

Also Read: GitHub acquires Spectrum, a community-centric conversational platform

Both plaintiffs claimed Capital One and GitHub were unable to protect users' personal data. The complaint highlighted that Paige A. Thompson, the alleged hacker, stole the data in March and posted about the theft on GitHub in April. According to the lawsuit, "As a result of GitHub's failure to monitor, remove, or otherwise recognize and act upon obviously-hacked data that was displayed, disclosed, and used on or by GitHub and its website, the Personal Information sat on GitHub.com for nearly three months."

The Semmle acquisition may be GitHub's move to improve security for users in the future. It will be interesting to see how GitHub shapes security for users with the additional CVE approval.

A user on Reddit writes, "I took part in a tutorial session Semmle held at a university CS society event, where we were shown how to use their system to write semantic analysis passes to look for things like use-after-free and null pointer dereferences. It was only an hour and a bit long, but I found the query language powerful & intuitive and the platform pretty effective. At the time, you could set up your codebase to run Semmle passes on pre-commit hooks or CI deployments etc. and get back some pretty smart reporting if you had introduced a bug." The user further writes, "The session focused on Java, but a few other languages were supported as first-class, iirc. It felt kinda like writing an SQL query, but over AST rather than tuples in a table, and using modal logic to choose the selections. It took a little while to first get over the 'wut' phase (like 'how do I even express this'), but I imagine that a skilled team, once familiar with the system, could get a lot of value out of Semmle's QL/semantic analysis, especially for large/enterprise-scale codebases."

https://twitter.com/kurtseifried/status/1174395660960796672
https://twitter.com/timneutkens/status/1174598659310313472

To know more about this announcement in detail, read GitHub's official blog post.

Other news in Data

Keras 2.3.0, the first release of multi-backend Keras with TensorFlow 2.0 support is now out
Introducing Microsoft's AirSim, an open-source simulator for autonomous vehicles built on Unreal Engine
GQL (Graph Query Language) joins SQL as a Global Standards Project and is now the international standard declarative query language for graphs

GQL (Graph Query Language) joins SQL as a Global Standards Project and will be the international standard declarative query language for graphs

Amrata Joshi
19 Sep 2019
6 min read
On Tuesday, the team at Neo4j, the graph database management system company, announced that the international committees behind the development of the SQL standard have voted to initiate GQL (Graph Query Language) as a new database query language project. GQL is set to become the international standard declarative query language for property graphs, and it is now a Global Standards Project. GQL will be developed and maintained by the same international group that maintains the SQL standard.

How did the proposal for GQL pass?

In May last year, the initiative for GQL was first set out in the GQL Manifesto. In June this year, the national standards bodies from around the world belonging to ISO/IEC's Joint Technical Committee 1 (responsible for IT standards) started voting on the GQL project proposal. The ballot closed earlier this week and the proposal passed: ten countries, including Germany, Korea, the United States, the UK, and China, voted in favor, and seven countries agreed to put forward their experts to work on the project. Japan was the only country to vote against, arguing that existing languages already do the job, and that SQL/Property Graph Query extensions, along with the rest of the SQL standard, can do the same work.

According to the Neo4j team, the GQL project will initiate the development of next-generation technology standards for accessing data. Its charter mandates building on the core foundations established by SQL and ongoing collaboration to ensure SQL and GQL interoperability and compatibility. GQL reflects the rapid growth of the graph database market and the increasing adoption of the Cypher language.

Stefan Plantikow, GQL project lead and editor of the planned GQL specification, said, "I believe now is the perfect time for the industry to come together and define the next generation graph query language standard." Plantikow further added, "It's great to receive formal recognition of the need for a standard language. Building upon a decade of experience with property graph querying, GQL will support native graph data types and structures, its own graph schema, a pattern-based approach to data querying, insertion and manipulation, and the ability to create new graphs, and graph views, as well as generate tabular and nested data. Our intent is to respect, evolve, and integrate key concepts from several existing languages including graph extensions to SQL."

Keith Hare, who has served as the chair of the international SQL standards committee for database languages since 2005, charted the progress toward GQL: "We have reached a balance of initiating GQL, the database query language of the future, whilst preserving the value and ubiquity of SQL." Hare further added, "Our committee has been heartened to see strong international community participation to usher in the GQL project. Such support is the mark of an emerging de jure and de facto standard."

The need for a graph-specific query language

Researchers and vendors needed a graph-specific query language because of the following limitations:

The SQL/PGQ language is restricted to read-only queries.
SQL/PGQ cannot project new graphs.
SQL/PGQ can only access graphs that are based on taking a graph view over SQL tables.

Researchers and vendors needed a language like Cypher that would cover insertion and maintenance of data, not just data querying. SQL wasn't the apt model for a graph-centric language that takes graphs as query inputs and outputs a graph as a result. GQL, on the other hand, builds on openCypher, a project that brings Cypher to Apache Spark and gives users a composable graph query language. (A small, illustrative Cypher query appears at the end of this article.)

SQL and GQL can work together

According to most of the companies and national standards bodies supporting the GQL initiative, GQL and SQL are not competitors. Instead, these languages can complement each other via interoperation and shared foundations. Alastair Green, Query Languages Standards & Research Lead at Neo4j, writes, "A SQL/PGQ query is in fact a SQL sub-query wrapped around a chunk of proto-GQL." SQL is a language built around tables, whereas GQL is built around graphs; users can use GQL to find and project a graph from a graph. Green further writes, "I think that the SQL standards community has made the right decision here: allow SQL, a language built around tables, to quote GQL when the SQL user wants to find and project a table from a graph, but use GQL when the user wants to find and project a graph from a graph. Which means that we can produce and catalog graphs which are not just views over tables, but discrete complex data objects."

It is still not clear when the first implementable version of GQL will be out. The official page reads, "The work of the GQL project starts in earnest at the next meeting of the SQL/GQL standards committee, ISO/IEC JTC 1 SC 32/WG3, in Arusha, Tanzania, later this month. It is impossible at this stage to say when the first implementable version of GQL will become available, but it is highly likely that some reasonably complete draft will have been created by the second half of 2020."

Developer community welcomes the new addition

Users are excited to see how GQL will incorporate Cypher. A user commented on Hacker News, "It's been years since I've worked with the product and while I don't miss Neo4j, I do miss the query language. It's a little unclear to me how GQL will incorporate Cypher but I hope the initiative is successful if for no other reason than a selfish one: I'd love Cypher to be around if I ever wind up using a GraphDB again." A few others mistook GQL for Facebook's GraphQL and are sceptical about the name. A comment on Hacker News reads, "Also, the name is of course justified, but it will be a mess to search for due to (Facebook) GraphQL." Another user commented, "I read the entire article and came away mistakenly thinking this was the same thing as GraphQL." Yet another user commented, "That's quite an unfortunate name clash with the existing GraphQL language in a similar domain."

Other interesting news in Data

Media manipulation by Deepfakes and cheap fakes require both AI and social fixes, finds a Data & Society report
Percona announces Percona Distribution for PostgreSQL to support open source databases
Keras 2.3.0, the first release of multi-backend Keras with TensorFlow 2.0 support is now out
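As flagged above, GQL's final syntax is still being defined by the committee, so the closest widely used illustration of "pattern-based" property graph querying is Cypher. The sketch below issues a hypothetical Cypher query through the Neo4j Python driver; the connection URI, credentials, labels, and property names are invented for illustration and say nothing about what standard GQL will ultimately look like.

```python
# Illustrative only: a property-graph pattern query in Cypher (the language GQL
# builds on), run through the Neo4j Python driver. URI, credentials, and data
# model are hypothetical placeholders.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

# Pattern-based querying: match a graph pattern and return a tabular projection.
query = """
MATCH (p:Person)-[:WORKS_AT]->(c:Company {name: $company})
RETURN p.name AS employee, c.name AS company
"""

with driver.session() as session:
    for record in session.run(query, company="Acme"):
        print(record["employee"], "works at", record["company"])

driver.close()
```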

article-image-what-can-you-expect-at-neurips-2019
Sugandha Lahoti
06 Sep 2019
5 min read
Save for later

What can you expect at NeurIPS 2019?

Popular machine learning conference NeurIPS 2019 (Conference on Neural Information Processing Systems) will be held on Sunday, December 8 through Saturday, December 14 at the Vancouver Convention Center. The conference invites papers, tutorials, and submissions on cross-disciplinary research where machine learning methods are being used in other fields, as well as methods and ideas from other fields being applied to ML. NeurIPS 2019 accepted papers Yesterday, the conference published the list of accepted papers. A total of 1429 papers have been selected. Submissions opened from May 1 on a variety of topics such as Algorithms, Applications, Data implementations, Neuroscience and Cognitive Science, Optimization, Probabilistic Methods, Reinforcement Learning and Planning, and Theory. (The full list of Subject Areas is available here.) This year at NeurIPS 2019, authors of accepted submissions were required to prepare either a 3-minute video or a PDF of slides summarizing the paper, or a PDF of the poster used at the conference. This was done to make NeurIPS content accessible to those unable to attend the conference. NeurIPS 2019 also introduced a mandatory abstract submission deadline, a week before final submissions are due. Only a submission with a full abstract was allowed to have the full paper uploaded. The authors were also asked to answer questions from the Reproducibility Checklist during the submission process. NeurIPS 2019 tutorial program NeurIPS also invites experts to present tutorials that feature topics of interest to a sizable portion of the NeurIPS community and that are different from the ones already presented at other ML conferences like ICML or ICLR. They looked for tutorial speakers who cover topics beyond their own research in a comprehensive manner that encompasses multiple perspectives. The tutorial chairs for NeurIPS 2019 are Danielle Belgrave and Alice Oh. They initially compiled a list based on the last few years’ publications, workshops, and tutorials presented at NeurIPS and at related venues. They asked colleagues for recommendations and conducted independent research. In reviewing the potential candidates, the chairs read papers to understand the candidates’ expertise and watched their videos to appreciate their style of delivery. The list of candidates was emailed to the General Chair, Diversity & Inclusion Chairs, and the rest of the Organizing Committee for their comments on this shortlist. Following a few adjustments based on their input, the potential speakers were selected. A total of 9 tutorials have been selected for NeurIPS 2019: Deep Learning with Bayesian Principles - Emtiyaz Khan Efficient Processing of Deep Neural Network: from Algorithms to Hardware Architectures - Vivienne Sze Human Behavior Modeling with Machine Learning: Opportunities and Challenges - Nuria Oliver, Albert Ali Salah Interpretable Comparison of Distributions and Models - Wittawat Jitkrittum, Dougal Sutherland, Arthur Gretton Language Generation: Neural Modeling and Imitation Learning - Kyunghyun Cho, Hal Daume III Machine Learning for Computational Biology and Health - Anna Goldenberg, Barbara Engelhardt Reinforcement Learning: Past, Present, and Future Perspectives - Katja Hofmann Representation Learning and Fairness - Moustapha Cisse, Sanmi Koyejo Synthetic Control - Alberto Abadie, Vishal Misra, Devavrat Shah NeurIPS 2019 Workshops NeurIPS Workshops are primarily used for discussion of work in progress and future directions.
This time the number of Workshop Chairs doubled, from two to four; the selected chairs are Jenn Wortman Vaughan, Marzyeh Ghassemi, Shakir Mohamed, and Bob Williamson. However, the number of workshop submissions went down from 140 in 2018 to 111 in 2019. Of these 111 submissions, 51 workshops were selected. The full list of selected Workshops is available here. The NeurIPS 2019 chair committee introduced new guidelines, expectations, and selection criteria for the Workshops. This time workshop selection placed an important focus on the nature of the problem, the intellectual excitement of the topic, diversity and inclusion, the quality of proposed invited speakers, the organizational experience and ability of the team, and more. The Workshop Program Committee consisted of 37 reviewers, with each workshop proposal assigned to two reviewers. The reviewer committee included more senior researchers who have been involved with the NeurIPS community. Reviewers were asked to provide a summary and overall rating for each workshop, a detailed list of pros and cons, and specific ratings for each of the new criteria. After all reviews were submitted, each proposal was assigned to two of the four chair committee members. The chair members looked through the assigned proposals and their reviews to form an educated assessment of the pros and cons of each. Finally, the entire chair committee held a meeting to discuss every submitted proposal and make decisions. You can check more details about the conference on the NeurIPS website. As always, keep checking this space for more content about the conference. In the meantime, you can read our previous year’s coverage: NeurIPS Invited Talk: Reproducible, Reusable, and Robust Reinforcement Learning NeurIPS 2018: How machine learning experts can work with policymakers to make good tech decisions [Invited Talk] NeurIPS 2018: Rethinking transparency and accountability in machine learning NeurIPS 2018: Developments in machine learning through the lens of Counterfactual Inference [Tutorial] Accountability and algorithmic bias: Why diversity and inclusion matters [NeurIPS Invited Talk]
article-image-google-is-circumventing-gdpr-reveals-braves-investigation-for-the-authorized-buyers-ad-business-case
Bhagyashree R
06 Sep 2019
6 min read
Save for later

Google is circumventing GDPR, reveals Brave's investigation for the Authorized Buyers ad business case

Last year, Dr. Johnny Ryan, the Chief Policy & Industry Relations Officer at Brave, filed a complaint against Google’s DoubleClick/Authorized Buyers ad business with the Irish Data Protection Commission (DPC). New evidence produced by Brave reveals that Google is circumventing GDPR and also undermining its own data protection measures. Brave calls Google’s Push Pages a GDPR workaround Brave’s new evidence rebuts some of Google’s claims regarding its DoubleClick/Authorized Buyers system, the world’s largest real-time advertising auction house. Google says that it prohibits companies that use its real-time bidding (RTB) ad system “from joining data they receive from the Cookie Matching Service.” In September last year, Google announced that it has removed encrypted cookie IDs and list names from bid requests with buyers in its Authorized Buyers marketplace. Brave’s research, however, found otherwise, “Brave’s new evidence reveals that Google allowed not only one additional party, but many, to match with Google identifiers. The evidence further reveals that Google allowed multiple parties to match their identifiers for the data subject with each other.” When you visit a website that has Google ads embedded on its web pages, Google will run a real-time bidding ad auction to determine which advertiser will get to display its ads. For this, it uses Push Pages, which is the mechanism in question here. Brave hired Zach Edwards, the co-founder of digital analytics startup Victory Medium, and MetaX, a company that audits data supply chains, to investigate and analyze a log of Dr. Ryan’s web browsing. The research revealed that Google's Push Pages can essentially be used as a workaround for user IDs. Google shares a ‘google_push’ identifier with the participating companies to identify a user. Brave says that the problem here is that the identifier that was shared was common to multiple companies. This means that these companies could have cross-referenced what they learned about the user from Google with each other. Used by more than 8.4 million websites, Google's DoubleClick/Authorized Buyers broadcasts personal data of users to 2000+ companies. This data includes the category of what a user is reading, which can reveal their political views, sexual orientation, religious beliefs, as well as their locations. There are also unique ID codes that are specific to a user that can let companies uniquely identify a user. All this information can give these companies a way to keep tabs on what users are “reading, watching, and listening to online.” Brave calls Google’s RTB data protection policies “weak” as they ask these companies to self-regulate. Google does not have much control over what these companies do with the data once broadcast. “Its policy requires only that the thousands of companies that Google shares peoples’ sensitive data with monitor their own compliance, and judge for themselves what they should do,” Brave wrote. A Google spokesperson, as a response to this news, told Forbes, “We do not serve personalised ads or send bid requests to bidders without user consent. The Irish DPC — as Google's lead DPA — and the UK ICO are already looking into real-time bidding in order to assess its compliance with GDPR. We welcome that work and are co-operating in full." 
Users recommend starting an “information campaign” instead of a penalty that will hardly affect big tech This news triggered a discussion on Hacker News where users talked about the implications of RTB and what strict actions the EU can take to protect user privacy. A user explained, "So, let's say you're an online retailer, and you have Google IDs for your customers. You probably have some useful and sensitive customer information, like names, emails, addresses, and purchase histories. In order to better target your ads, you could participate in one of these exchanges, so that you can use the information you receive to suggest products that are as relevant as possible to each customer. To participate, you send all this sensitive information, along with a Google ID, and receive similar information from other retailers, online services, video games, banks, credit card providers, insurers, mortgage brokers, service providers, and more! And now you know what sort of vehicles your customers drive, how much they make, whether they're married, how many kids they have, which websites they browse, etc. So useful! And not only do you get all these juicy private details, but you've also shared your customers sensitive purchase history with anyone else who is connected to the exchange." Others said that a penalty is not going to deter Google. "The whole penalty system is quite silly. The fines destroy small companies who are the ones struggling to comply, and do little more than offer extremely gentle pokes on the wrist for megacorps that have relatively unlimited resources available for complete compliance, if they actually wanted to comply." Users suggested that the EU should instead start an information campaign. "EU should ignore the fines this time and start an "information campaign" regarding behavior of Google and others. I bet that hurts Google 10 times more." Some also said that not just Google but the RTB participants should also be held responsible. "Because what Google is doing is not dissimilar to how any other RTB participant is acting, saying this is a Google workaround seems disingenuous." With this case, Brave has launched a full-fledged campaign that aims to “reform the multi-billion dollar RTB industry” and that spans sixteen EU countries. To achieve this goal, it has collaborated with several privacy NGOs and academics, including the Open Rights Group and Dr. Michael Veale of the Turing Institute, among others. In other news, a Bloomberg report reveals that Google and other internet companies have recently asked for an amendment to the California Consumer Privacy Act, which will be enacted in 2020. The law currently limits how digital advertising companies collect and make money from user data. The proposed amendments include approval for collecting user data for targeted advertising, using the collected data from websites for their own analysis, and many others. Read the Bloomberg report to know more in detail. Other news in Data Facebook content moderators work in filthy, stressful conditions and experience emotional trauma daily, reports The Verge GDPR complaint in EU claim billions of personal data leaked via online advertising bids European Union fined Google 1.49 billion euros for antitrust violations in online advertising

article-image-google-open-sources-an-on-device-real-time-hand-gesture-recognition-algorithm-built-with-mediapipe
Sugandha Lahoti
21 Aug 2019
3 min read
Save for later

Google open sources an on-device, real-time hand gesture recognition algorithm built with MediaPipe

Google researchers have unveiled a new real-time hand tracking algorithm that could be a breakthrough for people communicating via sign language. Their algorithm uses machine learning to compute 3D keypoints of a hand from a video frame. This research is implemented in MediaPipe, an open-source cross-platform framework for building multimodal (e.g. video, audio, or any time series data) applied ML pipelines. What is interesting is that the 3D hand perception can be viewed in real time on a mobile phone. How does real-time hand perception and gesture recognition work with MediaPipe? The algorithm is built using the MediaPipe framework. Within this framework, the pipeline is built as a directed graph of modular components. The pipeline employs three different models: a palm detector model, a hand landmark detector model, and a gesture recognizer. The palm detector operates on full images and outputs an oriented bounding box. The researchers employ a single-shot detector model called BlazePalm, which achieves an average precision of 95.7% in palm detection. Next, the hand landmark model takes the cropped image defined by the palm detector and returns 3D hand keypoints. For detecting key points on the palm images, the researchers manually annotated around 30K real-world images with 21 coordinates. They also generated a synthetic dataset to improve the robustness of the hand landmark detection model. The gesture recognizer then classifies the previously computed keypoint configuration into a discrete set of gestures. The algorithm determines the state of each finger, e.g. bent or straight, by the accumulated angles of joints. The existing pipeline supports counting gestures from multiple cultures, e.g. American, European, and Chinese, and various hand signs including “Thumb up”, closed fist, “OK”, “Rock”, and “Spiderman”. They also trained their models to work in a wide variety of lighting situations and with a diverse range of skin tones. Gesture recognition - Source: Google blog With MediaPipe, the researchers built their pipeline as a directed graph of modular components, called Calculators. Individual calculators, like cropping, rendering, and neural network computation, can be performed exclusively on the GPU. They employed TFLite GPU inference on most modern phones. The researchers are open sourcing the hand tracking and gesture recognition pipeline in the MediaPipe framework along with the source code. The researchers Valentin Bazarevsky and Fan Zhang write in a blog post, “Whereas current state-of-the-art approaches rely primarily on powerful desktop environments for inference, our method achieves real-time performance on a mobile phone, and even scales to multiple hands. We hope that providing this hand perception functionality to the wider research and development community will result in an emergence of creative use cases, stimulating new applications and new research avenues.” People commended the fact that this algorithm can run on mobile devices and is useful for people who communicate via sign language. https://twitter.com/SOdaibo/status/1163577788764495872 https://twitter.com/anshelsag/status/1163597036442148866 https://twitter.com/JonCorey1/status/1163997895835693056 Microsoft Azure VP demonstrates Holoportation, a reconstructed transmittable 3D technology Terrifyingly realistic Deepfake video of Bill Hader transforming into Tom Cruise is going viral on YouTube. Google News Initiative partners with Google AI to help ‘deep fake’ audio detection research
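For readers who want to experiment with the open source release, here is a minimal sketch using the MediaPipe Hands solution from the mediapipe Python package, which exposes the 21 hand keypoints described above. The input file name and the crude "finger extended" heuristic are illustrative assumptions; they approximate the idea of angle-based finger-state logic rather than reproducing Google's actual gesture recognizer.

```python
# Illustrative sketch (not Google's internal pipeline): run the MediaPipe Hands
# solution on a single image and apply a crude "finger extended" heuristic.
# The file name and the heuristic are assumptions for demonstration only.
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

# Fingertip / middle-joint landmark indices from the 21-keypoint hand model.
FINGERS = {"index": (8, 6), "middle": (12, 10), "ring": (16, 14), "pinky": (20, 18)}

def extended_fingers(hand_landmarks):
    # Rough approximation: a finger counts as extended if its tip lies above its
    # middle joint in image coordinates (assumes an upright hand).
    return {
        name: hand_landmarks.landmark[tip].y < hand_landmarks.landmark[pip].y
        for name, (tip, pip) in FINGERS.items()
    }

image = cv2.imread("hand.jpg")  # hypothetical input image
with mp_hands.Hands(static_image_mode=True, max_num_hands=2,
                    min_detection_confidence=0.5) as hands:
    results = hands.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    for hand in results.multi_hand_landmarks or []:
        print(extended_fingers(hand))
```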

article-image-twitter-and-facebook-removed-accounts-of-chinese-state-run-media-agencies-aimed-at-undermining-hong-kong-protests
Sugandha Lahoti
20 Aug 2019
5 min read
Save for later

Twitter and Facebook removed accounts of Chinese state-run media agencies aimed at undermining Hong Kong protests

Update August 23, 2019: After Twitter and Facebook, Google has shut down 210 YouTube channels that were tied to misinformation about the Hong Kong protests. The article has been updated accordingly. Chinese state-run media agencies have been buying advertisements and promoted tweets on Twitter and Facebook to portray Hong Kong protestors and their pro-democracy demonstrations as violent. These ads, reported by Pinboard’s Twitter account, were circulated by the state-run news agency Xinhua, portraying the protesters as "escalating violence" and calling for "order to be restored." In reality, the Hong Kong protests have been described as completely peaceful marches. Pinboard warned and criticized Twitter about these tweets and asked for their takedown. Though Twitter and Facebook are banned in China, the Chinese state-run media runs several English-language accounts to present its views to the outside world. https://twitter.com/pinboard/status/1162711159000055808 https://twitter.com/Pinboard/status/1163072157166886913 Twitter bans 936 accounts managed by the Chinese state Following this revelation, in a blog post yesterday, Twitter said that it had discovered a “significant state-backed information operation focused on the situation in Hong Kong, specifically the protest movement”. It identified 936 accounts that were undermining “the legitimacy and political positions of the protest movement on the ground.” It also found a larger, spammy network of approximately 200,000 accounts which represented the most active portions of this campaign. These were suspended for a range of violations of Twitter’s platform manipulation policies. These accounts were able to access Twitter through VPNs and over a "specific set of unblocked IP addresses" from within China. “Covert, manipulative behaviors have no place on our service — they violate the fundamental principles on which our company is built,” said Twitter. Twitter bans ads from Chinese state-run media Twitter also banned advertising from Chinese state-run news media entities across the world and declared that affected accounts will be free to continue to use Twitter to engage in public conversation, but not in its advertising products. This policy will apply to news media entities that are either financially or editorially controlled by the state, said Twitter. Directly affected entities will be notified and given 30 days to offboard from advertising products. No new campaigns will be allowed. However, Pinboard argues that 30 days is too long and that Twitter should not wait but suspend Xinhua's ad account immediately. https://twitter.com/Pinboard/status/1163676410998689793 It also calls on Twitter to disclose how much money it took from Xinhua, how many ads it ran for them since the start of the Hong Kong protests in June, and how those ads were targeted. Facebook blocks Chinese accounts engaged in inauthentic behavior Following a tip shared by Twitter, Facebook also removed seven Pages, three Groups and five Facebook accounts involved in coordinated inauthentic behavior as part of a small network that originated in China and focused on Hong Kong. However, unlike Twitter, Facebook did not announce any policy changes in response to the discovery. YouTube was also notably absent from the fight against Chinese misinformation propaganda. https://twitter.com/Pinboard/status/1163694701716766720 However, on 22nd August, YouTube axed 210 YouTube channels found to be spreading misinformation about the Hong Kong protests.
“Earlier this week, as part of our ongoing efforts to combat coordinated influence operations, we disabled 210 channels on YouTube when we discovered channels in this network behaved in a coordinated manner while uploading videos related to the ongoing protests in Hong Kong,” Shane Huntley, director of software engineering for Google Security’s Threat Analysis Group, said in a blog post. “We found use of VPNs and other methods to disguise the origin of these accounts and other activity commonly associated with coordinated influence operations.” Kyle Bass, Chief Investment Officer at Hayman Capital Management, called on all social media outlets to ban all Chinese state-run propaganda sources. He tweeted, “Twitter, Facebook, and YouTube should BAN all State-backed propaganda sources in China. It’s clear that these 200,000 accounts were set up by the “state” of China. Why allow Xinhua, global times, china daily, or any others to continue to act? #BANthemALL” Public acknowledges Facebook and Twitter’s role in exposing Chinese state media Experts and journalists appreciated the role social media played in exposing those responsible and liked how the platforms are responding to state interventions. Bethany Allen-Ebrahimian, President of the International China Journalist Association, called it huge news. “This is the first time that US social media companies are openly accusing the Chinese government of running Russian-style disinformation campaigns aimed at sowing discord”, she tweeted. She added, “We’ve been seeing hints that China has begun to learn from Russia’s MO, such as in Taiwan and Cambodia. But for Twitter and Facebook to come out and explicitly accuse the Chinese govt of a disinformation campaign is another whole level entirely.” Adam Schiff, Representative (D-CA 28th District), tweeted, “Twitter and Facebook announced they found and removed a large network of Chinese government-backed accounts spreading disinformation about the protests in Hong Kong. This is just one example of how authoritarian regimes use social media to manipulate people, at home and abroad.” He added, “Social media platforms and the U.S. government must continue to identify and combat state-backed information operations online, whether they’re aimed at disrupting our elections or undermining peaceful protesters who seek freedom and democracy.” Social media platforms took an appreciable step against Chinese state-run media actors attempting to manipulate their platforms to discredit grassroots organizing in Hong Kong. It would be interesting to see whether they would continue to protect individual freedoms and provide a safe and transparent platform if state actors from countries where they have huge audiences, like India or the US, adopted similar tactics to suppress or manipulate the public or target movements. Facebook bans six toxic extremist accounts and a conspiracy theory organization Cloudflare terminates services to 8chan following yet another set of mass shootings in the US YouTube’s ban on “instructional hacking and phishing” videos receives backlash from the infosec community
article-image-terrifyingly-realistic-deepfake-video-of-bill-hader-transforming-into-tom-cruise-is-going-viral-on-youtube
Sugandha Lahoti
14 Aug 2019
4 min read
Save for later

Terrifyingly realistic Deepfake video of Bill Hader transforming into Tom Cruise is going viral on YouTube

Deepfakes are becoming scarily, almost indistinguishably real. A YouTube clip of Bill Hader in conversation with David Letterman on his late-night show in 2008 is going viral; in it, Hader’s face subtly shifts to Cruise’s as Hader does his impression. This viral deepfake clip has been viewed over 3 million times and was uploaded by Ctrl Shift Face (a Slovakian citizen who goes by the name of Tom), who has created other entertaining videos using deepfake technology. For the unaware, a deepfake uses artificial intelligence and deep neural networks to alter audio or video and pass it off as true or original content. https://www.youtube.com/watch?v=VWrhRBb-1Ig Deepfakes are problematic as they make it hard to differentiate between fake and real videos or images. This gives people the liberty to use deepfakes for promoting harassment and illegal activities. The most common uses of deepfakes are found in revenge porn, political abuse, and fake celebrity videos like this one. The top comments on the video clip express the dangers of realistic AI manipulation. “The fade between faces is absolutely unnoticeable and it's flipping creepy. Nice job!” “I’m always amazed with new technology, but this is scary.” “Ok, so video evidence in a court of law just lost all credibility” https://twitter.com/TheMuleFactor/status/1160925752004624387 Deepfakes can also be used as a weapon of misinformation since they can be used to maliciously hoax governments and populations and cause internal conflict. Gavin Sheridan, CEO of Vizlegal, also tweeted the clip: “Imagine when this is all properly weaponized on top of already fractured and extreme online ecosystems and people stop believing their eyes and ears.” He also talked about the future impact. “True videos will be called fake videos, fake videos will be called true videos. People steered towards calling news outlets "fake", will stop believing their own eyes. People who want to believe their own version of reality will have all the videos they need to support it,” he tweeted. He also asked whether we would require A-list movie actors at all in the future, and whether we could choose which actor will portray what role. His tweet reads, “Will we need A-list actors in the future when we could just superimpose their faces onto the faces of other actors? Would we know the difference? And could we not choose at the start of a movie which actors we want to play which roles?” The past year has seen accelerated growth in the use of deepfakes. In June, a fake video of Mark Zuckerberg was posted on Instagram under the username bill_posters_uk. In the video, Zuckerberg appears to give a threatening speech about the power of Facebook. Facebook had received strong criticism for promoting fake videos on its platform when, in May, the company refused to remove a doctored video of senior politician Nancy Pelosi. Samsung researchers also released a deepfake that could animate faces with just your voice and a picture using temporal GANs. Following this, the House Intelligence Committee held a hearing to examine the public risks posed by “deepfake” videos. Tom, the creator of the viral video, told The Guardian that he doesn't see deepfake videos as the end of the world and hopes his deepfakes will raise public awareness of the technology's potential for misuse. “It’s an arms race; someone is creating deepfakes, someone else is working on other technologies that can detect deepfakes. I don’t really see it as the end of the world like most people do. People need to learn to be more critical.
The general public are aware that photos could be Photoshopped, but they have no idea that this could be done with video.” Ctrl Shift Face is also on Patreon offering access to bonus materials, behind the scenes footage, deleted scenes, early access to videos for those who provide him monetary support. Now there is a Deepfake that can animate your face with just your voice and a picture. Mark Zuckerberg just became the target of the world’s first high profile white hat deepfake op. Worried about Deepfakes? Check out the new algorithm that manipulate talking-head videos by altering the transcripts.

article-image-facebook-research-suggests-chatbots-and-conversational-ai-will-empathize-humans
Fatema Patrawala
06 Aug 2019
6 min read
Save for later

Facebook research suggests chatbots and conversational AI are on the verge of empathizing with humans

Last week, the Facebook AI research team published a progress report on dialogue research that is fundamentally about building more engaging and personalized AI systems. According to the team, “Dialogue research is a crucial component of building the next generation of intelligent agents. While there’s been progress with chatbots in single-domain dialogue, agents today are far from capable of carrying an open-domain conversation across a multitude of topics. Agents that can chat with humans in the way that people talk to each other will be easier and more enjoyable to use in our day-to-day lives — going beyond simple tasks like playing a song or booking an appointment.” In their blog post, they describe new open source data sets, algorithms, and models that improve on five common weaknesses of open-domain chatbots today. The weaknesses identified are maintaining consistency, specificity, empathy, knowledgeability, and multimodal understanding. Let us look at each one in detail: Dataset called Dialogue NLI introduced for maintaining consistency Inconsistencies are a common issue for chatbots, partly because most models lack explicit long-term memory and semantic understanding. The Facebook team, in collaboration with their colleagues at NYU, developed a new way of framing the consistency of dialogue agents as natural language inference (NLI) and created a new NLI data set called Dialogue NLI, used to improve and evaluate the consistency of dialogue models. The team showcased an example in the Dialogue NLI model, wherein they considered two utterances in a dialogue as the premise and hypothesis, respectively. Each pair was labeled to indicate whether the premise entails, contradicts, or is neutral with respect to the hypothesis. Training an NLI model on this data set and using it to rerank the model’s responses so that they entail previous dialogue, that is, maintain consistency with it, improved the overall consistency of the dialogue agent. Across these tests, they say they saw 3x fewer contradictions in the sentences. Several conversational attributes were studied to balance specificity As per the team, generative dialogue models frequently default to generic, safe responses like “I don’t know”, even to queries that need specific responses. Hence, the Facebook team, in collaboration with Stanford AI researcher Abigail See, studied how to fix this by controlling several conversational attributes, like the level of specificity. In one experiment, they conditioned a bot on character information and asked “What do you do for a living?” A typical chatbot responds with the generic statement “I’m a construction worker.” With control methods, the chatbots proposed more specific and engaging responses, like “I build antique homes and refurbish houses.” In addition to specificity, the team mentioned that “balancing question-asking and answering and controlling how repetitive our models are make significant differences. The better the overall conversation flow, the more engaging and personable the chatbots and dialogue agents of the future will be.” Chatbot’s ability to display empathy while responding was measured The team worked with researchers from the University of Washington to introduce the first benchmark task of human-written empathetic dialogues centered on specific emotional labels to measure a chatbot’s ability to display empathy.
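Stepping back to the consistency work described above, here is a rough, hedged sketch of how an off-the-shelf MNLI classifier could be used to push persona-contradicting candidate replies down a ranking. The model choice, the persona facts, and the candidate responses are illustrative assumptions; this is not Facebook's model or the Dialogue NLI dataset.

```python
# Illustrative sketch of NLI-based response reranking (not Facebook's implementation).
# An off-the-shelf MNLI classifier scores each candidate reply against the bot's
# persona; the reply least likely to contradict the persona is preferred.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
contradiction_idx = model.config.label2id.get("CONTRADICTION", 0)

def contradiction_prob(premise: str, hypothesis: str) -> float:
    # Probability that the hypothesis contradicts the premise.
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return logits.softmax(dim=-1)[0, contradiction_idx].item()

persona = "I have two dogs. I live in Seattle."   # hypothetical persona facts
candidates = [                                     # hypothetical generator outputs
    "I don't have any pets.",
    "My two dogs keep me busy on weekends.",
]

# Rerank: prefer the reply least likely to contradict the persona.
best = min(candidates, key=lambda reply: contradiction_prob(persona, reply))
print(best)
```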
In addition to improving on automatic metrics, the team showed that using this empathy data both for fine-tuning and as retrieval candidates leads to responses that are evaluated by humans as more empathetic, with an average improvement of 0.95 points (on a 1-to-5 scale) across three different retrieval and generative models. The next challenge for the team is to make empathy-focused models perform well in complex dialogue situations, where agents may need to balance empathy with staying on topic or providing information. Wikipedia dataset used to make dialogue models more knowledgeable The research team has improved dialogue models’ capability of demonstrating knowledge by collecting a data set with conversations from Wikipedia, and creating new model architectures that retrieve knowledge, read it, and condition responses on it. The generative model yielded the most pronounced improvement and is rated by humans as 26% more engaging than knowledgeless counterparts. To engage with images, personality-based captions were used To engage with humans, agents should not only comprehend dialogue but also understand images. In this research, the team focused on image captioning that is engaging for humans by incorporating personality. They collected a data set of human comments grounded in images, and trained models capable of discussing images with given personalities, which makes the system interesting for humans to talk to. 64% of humans preferred these personality-based captions over traditional captions. To build strong models, the team considered both retrieval and generative variants, and leveraged modules from both the vision and language domains. They defined a powerful retrieval architecture, named TransResNet, that works by projecting the image, personality, and caption into the same space using image, personality, and text encoders. The team showed that their system was able to produce captions that are close to matching human performance in terms of engagement and relevance, and annotators preferred their retrieval model’s captions over captions written by people 49.5% of the time. Apart from this, the Facebook team has released a new data collection and model evaluation tool, a Messenger-based chatbot game called Beat the Bot, that allows people to interact directly with bots and other humans in real time, creating rich examples to help train models. To conclude, the Facebook AI team mentions, “Our research has shown that it is possible to train models to improve on some of the most common weaknesses of chatbots today. Over time, we’ll work toward bringing these subtasks together into one unified intelligent agent by narrowing and eventually closing the gap with human performance. In the future, intelligent chatbots will be capable of open-domain dialogue in a way that’s personable, consistent, empathetic, and engaging.” On Hacker News, this research has drawn both positive and negative reviews. Some commenters worry that once AI can converse like humans, it will do a lot of harm, while other users say that this is an impressive improvement in the field of conversational AI. One user comment reads, “I gotta say, when AI is able to converse like humans, a lot of bad stuff will happen. People are so used to the other conversation partner having self-interest, empathy, being reasonable. When enough bots all have a “swarm” program to move conversations in a particular direction, they will overwhelm any public conversation.
Moreover, in individual conversations, you won’t be able to trust anything anyone says or negotiates. Just like playing chess or poker online now. And with deepfakes, you won’t be able to trust audio or video either. The ultimate shock will come when software can render deepfakes in realtime to carry on a conversation, as your friend but not. As a politician who “said crazy stuff” but really didn’t, but it’s in the realm of believability. I would give it about 20 years until it all goes to shit. If you thought fake news was bad, realtime deepfakes and AI conversations with “friends” will be worse.” Scroll Snapping and other cool CSS features come to Firefox 68 Google Chrome to simplify URLs by hiding special-case subdomains Lyft releases an autonomous driving dataset “Level 5” and sponsors research competition