
Tech News - Application Development


PyTorch announces the availability of PyTorch Hub for improving machine learning research reproducibility

Amrata Joshi
11 Jun 2019
3 min read
Yesterday, the team at PyTorch announced the availability of PyTorch Hub, a simple API and workflow that offers the basic building blocks for improving machine learning research reproducibility. Reproducibility plays an important role in research, as it is an essential requirement in many fields, including those based on machine learning techniques. But most machine learning research publications are either not reproducible or too difficult to reproduce. With tens of thousands of papers hosted on arXiv and ever more submissions to conferences, research reproducibility has become more important than ever. Though most publications are accompanied by code and trained models, which are useful, it is still difficult for users to figure out most of the steps themselves.

PyTorch Hub consists of a pre-trained model repository designed to facilitate research reproducibility and to enable new research. It provides built-in support for Colab, integration with Papers With Code, and a set of models spanning classification and segmentation, transformers, generative models, and more. By adding a simple hubconf.py file, it supports publishing pre-trained models to a GitHub repository; the file provides the list of models to be supported and the list of dependencies required for running them. For example, one can check out the torchvision, huggingface-bert and gan-model-zoo repositories. Considering the case of torchvision's hubconf.py: in the torchvision repository, each model file can function and be executed independently. These model files require no package except PyTorch, and they don't need separate entry-points. A hubconf.py also helps users send a pull request based on the template mentioned on the GitHub page.

The official blog post reads, “Our goal is to curate high-quality, easily-reproducible, maximally-beneficial models for research reproducibility. Hence, we may work with you to refine your pull request and in some cases reject some low-quality models to be published. Once we accept your pull request, your model will soon appear on Pytorch hub webpage for all users to explore.”

PyTorch Hub allows users to explore available models, load a model, and understand which methods are available for any given model; a minimal sketch follows below. A few examples:

Explore available entrypoints: With the torch.hub.list() API, users can list all available entrypoints in a repo. PyTorch Hub also allows auxiliary entrypoints apart from pretrained models, such as bertTokenizer for preprocessing in the BERT models, making the user workflow smoother.

Load a model: With the torch.hub.load() API, users can load a model entrypoint. This API can also provide useful information about instantiating the model.

Most users are happy about this news, as they think it will be useful to them. A user commented on HackerNews, “I love that the tooling for ML experimentation is becoming more mature. Keeping track of hyperparameters, training/validation/test experiment test set manifests, code state, etc is both extremely crucial and extremely necessary.” Another user commented, “This will also make things easier for people writing algorithms on top of one of the base models.”

To know more about this news, check out PyTorch’s blog post.
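A minimal sketch of the two APIs above, using the torchvision repository from the article's example (the resnet18 entrypoint is our choice for illustration; the available entrypoints depend on the repository's hubconf.py):

```python
import torch

# List every entrypoint the repository's hubconf.py exposes
print(torch.hub.list('pytorch/vision'))

# Print the docstring for one entrypoint
print(torch.hub.help('pytorch/vision', 'resnet18'))

# Load the pre-trained model (weights are downloaded on first use)
model = torch.hub.load('pytorch/vision', 'resnet18', pretrained=True)
model.eval()  # switch to inference mode
```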
Sherin Thomas explains how to build a pipeline in PyTorch for deep learning workflows
F8 PyTorch announcements: PyTorch 1.1 releases with new AI tools, open sourcing BoTorch and Ax, and more
Horovod: an open-source distributed training framework by Uber for TensorFlow, Keras, PyTorch, and MXNet


TensorFlow 2.0 beta releases with distribution strategy, API freeze, easy model building with Keras and more

Vincy Davis
10 Jun 2019
5 min read
After all the hype and waiting, Google has finally announced the beta version of TensorFlow 2.0. The headline feature is tf.distribute.Strategy, which distributes training across multiple GPUs, multiple machines, or TPUs with minimal code changes. The TensorFlow 2.0 beta also brings a number of major improvements, breaking changes, and multiple bug fixes. Earlier this year, the TensorFlow team had updated users on what to expect from TensorFlow 2.0. The 2.0 API is final, with the symbol renaming/deprecation changes completed, and is already available as part of the TensorFlow 1.14 release in the compat.v2 module.

TensorFlow 2.0 support for Keras features

Distribution Strategy for hardware

The tf.distribute.Strategy API supports multiple user segments, including researchers and ML engineers, and provides good performance and easy switching between strategies. Users can use the tf.distribute.Strategy API to distribute training across multiple GPUs, multiple machines, or TPUs, and can distribute their existing models and training code with minimal code changes (a minimal sketch follows below). tf.distribute.Strategy can be used with TensorFlow's high-level APIs, tf.keras, tf.estimator, and custom training loops. The TensorFlow 2.0 beta also simplifies the API for custom training loops, again built on the distribution strategies. Custom training loops give flexibility and greater control over training, and make it easier to debug the model and the training loop.

Model Subclassing

Building a fully-customizable model by subclassing tf.keras.Model allows users to define their own forward pass. Layers are created in the __init__ method and set as attributes of the class instance, while the forward pass is defined in the call method. Model subclassing is particularly useful when eager execution is enabled, because it allows the forward pass to be written imperatively, and it gives greater flexibility when creating models that are not easily expressible otherwise.

Breaking Changes

tf.contrib has been deprecated, and its functionality has been migrated to the core TensorFlow API, to tensorflow/addons, or removed entirely. In the tf.estimator.DNN/Linear/DNNLinearCombined family, the premade estimators have been updated to use tf.keras.optimizers instead of the tf.compat.v1.train optimizers. A checkpoint converter tool for converting optimizers has also been included with this release.

Bug Fixes and Other Changes

This beta version of 2.0 includes many bug fixes and other changes. Some of them are mentioned below:

In tf.data.Options, the experimental_numa_aware option has been removed, and support for TensorArrays has been added.

tf.keras.estimator.model_to_estimator now supports exporting to the tf.train.Checkpoint format, which makes the saved checkpoints compatible with model.load_weights.

tf.contrib.estimator.add_metrics has been replaced with tf.estimator.add_metrics.

A gradient for the SparseToDense op, a GPU implementation of tf.linalg.tridiagonal_solve, and broadcasting support for tf.matmul have been added.

This beta version also exposes a flag that allows the number of threads to vary across Python benchmarks.

The unused StringViewVariantWrapper and tf.string_split have been removed from the v2 API.

The TensorFlow team has provided a TF 2.0 Testing User Group where users can report any snags they experience and give feedback. General reaction to the release of the TensorFlow 2.0 beta is positive.
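To make the "minimal code changes" claim concrete, here is a hedged sketch of distributing a Keras model with tf.distribute.MirroredStrategy, one concrete strategy among those the API offers (layer sizes and loss are placeholders; it assumes the 2.0 beta package):

```python
import tensorflow as tf  # e.g. pip install tensorflow==2.0.0-beta1

# MirroredStrategy replicates the model across all locally visible GPUs
# and aggregates gradients automatically.
strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
    # Variables created inside the scope are mirrored across replicas.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation='relu', input_shape=(10,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(), loss='mse')

# Training code stays the same; the strategy handles the distribution.
# model.fit(train_dataset, epochs=5)
```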
https://twitter.com/markcartertm/status/1137238238748266496
https://twitter.com/tonypeng_Synced/status/1137128559414087680

A user on Reddit comments, “Can't wait to try that out !”

However, some users have compared it to PyTorch, calling PyTorch more comprehensive than TensorFlow: a more powerful platform for research that is also good for production. A user on Hacker News comments, “Maybe I'll give TF another try, but right now I'm really liking PyTorch. With TensorFlow I always felt like my models were buried deep in the machine and it was very hard to inspect and change them, and if I wanted to do something non-standard it was difficult even with Keras. With PyTorch though, I connect things however how I want, write whatever training logic I want, and I feel like my model is right in my hands. It's great for research and proofs-of-concept. Maybe for production too.” Another user says, “Might give it another try, but my latest incursion in the Tensorflow universe did not end pleasantly. I ended up recording everything in Pytorch, took me less than a day to do the stuff that took me more than a week in TF. One problem is that there are too many ways to do the same thing in TF and it's hard to transition from one to the other.”

The TensorFlow team hopes to resolve all the remaining issues before the 2.0 release candidate (RC), including complete Keras model support on Cloud TPUs and TPU pods, and to improve the overall performance of 2.0. The RC release is expected sometime this summer.

Introducing TensorFlow Graphics packed with TensorBoard 3D, object transformations, and much more
Horovod: an open-source distributed training framework by Uber for TensorFlow, Keras, PyTorch, and MXNet
ML.NET 1.0 RC releases with support for TensorFlow models and much more!


GitHub introduces ‘Template repository’ for easy boilerplate code management and distribution

Bhagyashree R
10 Jun 2019
2 min read
Yesterday, GitHub introduced ‘Template repository’, using which you can easily share boilerplate code and directory structure across projects. This is similar to the idea of ‘Boilr’ and ‘Cookiecutter’.

https://twitter.com/github/status/1136671651540738048

How to create a GitHub template repository?

As per its name, ‘Template repository’ enables developers to mark a repository as a template, which they can use later for creating new repositories containing all of the template repository’s files and folders. You can create a new template repository or mark an existing one as a template with admin permissions: just navigate to the Settings page and then click on the ‘Template repository’ checkbox. Once the template repository is created, anyone who has access to it will be able to generate a new repository with the same directory structure and files via the ‘Use this template’ button.

Source: GitHub

All the templates that you own, have access to, or have used in a previous project will also be available to you when creating a new repository through the ‘Choose a template’ drop-down. Every template repository gets a new ‘/generate’ URL endpoint that allows you to distribute your template more efficiently; you just need to link your template users directly to this endpoint (the URL form is sketched at the end of this article).

Source: GitHub

Templating is similar to cloning a repository, except it does not retain the history of the repository and gives users a clean new project with an initial commit. Though this function is still pretty basic, GitHub will add more functionality in the future, and it will be useful for junior developers and beginners to help them get started. Here’s what a Hacker News user believes we can do with this feature: “This is a part of something which could become a very powerful pattern: community-wide templates which include many best practices in a single commit: - Pre-commit hooks for linting/formatting and unit tests. - Basic CI pipeline configuration with at least build, test and release/deploy phases. - Package installation configuration for the frameworks you want. - Container/VM configuration for the languages you want to enable cross-platform and future-proof development. - Documentation to get started with it all.”

Read the official announcement by GitHub for more details.

Github Sponsors: Could corporate strategy eat FOSS culture for dinner?
GitHub Satellite 2019 focuses on community, security, and enterprise
Atlassian Bitbucket, GitHub, and GitLab take collective steps against the Git ransomware attack
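To make the ‘/generate’ endpoint concrete: based on the announcement, pointing users at a template is just a link of the following form (owner and repository names are placeholders):

```
https://github.com/<owner>/<template-repo>/generate
```

Visiting that URL prompts the user to name a new repository, which is then created with the template's files, directory structure, and a single initial commit.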


Square updated its terms of service; community raises concerns about the restriction on using AGPL-licensed software in online stores

Amrata Joshi
07 Jun 2019
4 min read
Last month, Square, a financial services and mobile payment company, updated its terms of service, effective this July. Developers are raising concerns over one of the terms, which restricts the use of AGPL-licensed software in online stores.

What is the GNU Affero General Public License (AGPL)?

The GNU Affero General Public License (AGPL) is a free and copyleft license for software and other kinds of works. The AGPL guarantees the freedom to share and change all versions of a program. It protects developers’ rights by asserting copyright on the software, and by giving legal permission to copy, distribute and/or modify it.

What does the developer community think about the AGPL?

The Content Restrictions section B-15 under the Online Store reads, “You will not use, under any circumstance, any open source software subject to the GNU Affero General Public License v.3, or greater.”

Some developers think Square has misunderstood the AGPL and that this rule doesn’t make sense. A user commented on HackerNews, “This makes absolutely no sense. I'm almost certain that Square lawyers fucked up big time. They looked at the AGPL and completely misunderstood the context. There is no way in hell anyone can interpret AGPL in a way that makes Square responsible for any license violations their customers make selling software.”

According to others, the rule means that code licensed under the AGPL can’t be used in a website hosted by Square: if AGPL code were used by Square, that code might be sent to browsers along with Square’s own proprietary code, which could mean that Square has violated the AGPL. A lot of companies follow the same rule, including Google, which clearly states, “WARNING: Code licensed under the GNU Affero General Public License (AGPL) MAY NOT be used at Google.” This could also be seen as useful for developers, as it keeps their code safe from big tech companies using it.

Chris DiBona, director of open source at Google, said in a statement to The Register that “Google continues to ban the lightning-rod AGPL open source license within the company because doing so "saves engineering time" and because most AGPL projects are of no use to the company.” According to him, the AGPL is designed to close the “application service provider loophole” in the GPL, which lets ASPs use GPL code without distributing their changes back to the open source community. Under the AGPL, a company has to open source its code if it uses AGPL code in its web service, and why would a company like Google do that, given that the core components and back-end infrastructure running its online services are not open source? It also seems this is something that needs the attention of lawyers, and it is a matter of concern for them as well.

https://twitter.com/MarkKriegsman/status/1136589805024923649

Also, websites using AGPL code might have to provide the entire source code of their back-end systems. So some think the AGPL is not an efficient license and would like to see a better one that follows the idea of freedom completely; in their view, such licenses should come from copyleft folks and not from profit-oriented companies. Others argue that it is an efficient license, useful for developers, giving them enough freedom to share while protecting their software from companies.
https://twitter.com/MarkKriegsman/status/1136589799341600769
https://twitter.com/mikeym0p/status/1136392884306010112
https://twitter.com/kjjaeger/status/1136633898526490624
https://twitter.com/fuzzychef/status/1136386203756818433

To know more about this news, check out the post by Square.

AWS announces Open Distro for Elasticsearch licensed under Apache 2.0
Blue Oak Council publishes model license version 1.0.0 to simplify software licensing for everyone
Red Hat drops MongoDB over concerns related to its Server Side Public License (SSPL)


Apple previews macOS Catalina 10.15 beta, featuring Apple Music, TV apps, security, zsh shell, DriverKit, and much more!

Amrata Joshi
04 Jun 2019
6 min read
Yesterday, Apple previewed the next version of macOS, called Catalina, at its ongoing Worldwide Developers Conference (WWDC) 2019. macOS 10.15, or Catalina, comes with new features, apps, and technology for developers. With Catalina, Apple is replacing iTunes with entertainment apps: Apple Podcasts, Apple Music, and the Apple TV app. macOS Catalina is expected to be released this fall.

Craig Federighi, Apple’s senior vice president of Software Engineering, said, “With macOS Catalina, we’re bringing fresh new apps to the Mac, starting with new standalone versions of Apple Music, Apple Podcasts and the Apple TV app.” He further added, “Users will appreciate how they can expand their workspace with Sidecar, enabling new ways of interacting with Mac apps using iPad and Apple Pencil. And with new developer technologies, users will see more great third-party apps arrive on the Mac this fall.”

What’s new in macOS Catalina

Sidecar feature

Sidecar is a new feature in macOS 10.15 that lets users extend their Mac desktop by using an iPad as a second display or as a high-precision input device across creative Mac apps. By pairing an iPad with an Apple Pencil, users can draw, sketch or write in any Mac app that supports stylus input. Sidecar can be used for editing video with Final Cut Pro X, marking up iWork documents or drawing with Adobe Illustrator.

iPad app support

Catalina comes with iPad app support, a new way for developers to port their iPad apps to the Mac. Previously this project was codenamed “Marzipan,” but it’s now called Catalyst. Developers will now be able to use Xcode to target their iPad apps at macOS Catalina. Twitter is planning on porting its iOS Twitter app to the Mac, and Atlassian is planning to bring its Jira iPad app to macOS Catalina. Though it is still not clear how many developers will support this porting, Apple is encouraging developers to port their iPad apps to the Mac.

https://twitter.com/Atlassian/status/1135631657204166662
https://twitter.com/TwitterSupport/status/1135642794473558017

Apple Music

Apple Music is a new music app that helps users discover new music, with over 50 million songs, playlists, and music videos. Users will have access to their entire music library, including the songs they have downloaded, purchased or ripped from a CD.

Apple TV app

The Apple TV app features Apple TV channels, personalized recommendations, and more than 100,000 iTunes movies and TV shows. Users can browse, buy or rent, and enjoy 4K HDR and Dolby Atmos-supported movies. It also comes with a Watch Now section containing the Up Next option, where users can easily keep track of what they are currently watching and resume it on any screen. Apple TV+, Apple’s original video subscription service, will be available in the Apple TV app this fall.

Apple Podcasts

The Apple Podcasts app features over 700,000 shows in its catalog and comes with an option to be automatically notified of new episodes as soon as they become available. The app adds new categories, collections curated by editors around the world, and advanced search tools that help find episodes by host, guest or discussion topic. Users can now easily sync media to their devices using a cable in the new entertainment apps.

Security

In macOS Catalina, Gatekeeper checks all apps for known security issues, and the new data protections require all apps to get permission before accessing user documents.
Approve with Apple Watch lets users approve security prompts by tapping the side button on their Apple Watch. With the new Find My app, the location of a lost or stolen Mac can be anonymously relayed back to its owner by other Apple devices, even when offline. Macs will occasionally send out a secure Bluetooth signal, which is used to create a mesh network of other Apple devices that helps people track their products: a map is populated with the device’s location so users can track it down. Also, all Macs with the T2 Security Chip now support Activation Lock, which makes them less attractive to thieves.

DriverKit

The macOS Catalina SDK (10.15+ beta) comes with the DriverKit framework, which can be used to create device drivers that the user installs on their Mac. Drivers built with DriverKit run in user space for improved system security and stability. The framework provides C++ classes for IO services, memory descriptors, device matching, and dispatch queues, and defines IO-appropriate types for numbers, strings, collections, and other common types. These are used with family-specific driver frameworks like USBDriverKit and HIDDriverKit.

zsh shell on Mac

With the macOS Catalina beta, the Mac uses zsh as the interactive shell and the default login shell; this is currently available only to members of the Apple Developer Program. Users can make zsh the default in earlier versions of macOS as well (for example, by running chsh -s /bin/zsh). Currently, bash is the default shell in macOS Mojave and earlier. The zsh shell is also compatible with the Bourne shell (sh) and bash. The company is also signalling that developers should start moving to zsh on macOS Mojave or earlier; as bash is an aging shell, switching to something more modern seems to make sense to the company.

https://twitter.com/film_girl/status/1135738853724000256
https://twitter.com/_sjs/status/1135715757218705409
https://twitter.com/wongmjane/status/1135701324589256704

Additional features

Safari now has an updated start page that uses Siri Suggestions to elevate frequently visited bookmarks, sites, iCloud tabs, reading list selections and links sent in messages. macOS Catalina comes with an option to block email from a specified sender, mute an overly active thread, or unsubscribe from commercial mailing lists. Reminders has been redesigned with a new user interface that makes it easier to create, organize and track reminders.

It seems users are excited about the company's announcements and are looking forward to exploring the possibilities of the new features.

https://twitter.com/austinnotduncan/status/1135619593165189122
https://twitter.com/Alessio____20/status/1135825600671883265
https://twitter.com/MasoudFirouzi/status/1135699794360438784
https://twitter.com/Allinon85722248/status/1135805025928851457

To know more about this news, check out Apple’s post.

Apple proposes a “privacy-focused” ad click attribution model for counting conversions without tracking users
Apple Pay will soon support NFC tags to trigger payments
U.S. Supreme Court ruled 5-4 against Apple on its App Store monopoly case


Apple releases native SwiftUI framework with declarative syntax, live editing, and support of Xcode 11 beta

Vincy Davis
04 Jun 2019
4 min read
Yesterday, at the ongoing Worldwide Developers Conference (WWDC) 2019, Apple announced a new framework called SwiftUI for building user interfaces across all Apple platforms. With the aim of reducing the amount of code, SwiftUI supports declarative syntax, design tools, and live editing. SwiftUI has incredible native performance, allowing developers to feel fully integrated while taking advantage of the features and developer experiences of previous Apple platform technologies. It also automatically supports dynamic type, dark mode, localization, and accessibility. The tools for SwiftUI development are only available when running on the macOS 10.15 beta.

Declarative syntax

SwiftUI enables a developer to simply state the requirements of a user interface and have it rendered directly. For example, if a developer wants a list of items consisting of text fields, they just describe the alignment, font, and color for each field. This makes the code simpler and easier to read, saving time and maintenance; a short sketch follows below. SwiftUI also makes complex concepts like animation much simpler, enabling developers to add animation to almost any control and choose from a collection of ready-to-use effects with only a few lines of code.
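For a feel of the syntax, here is a minimal sketch of a declarative SwiftUI list (the view and row contents are invented for illustration; it assumes the Xcode 11 beta and the iOS 13/macOS 10.15 SDKs):

```swift
import SwiftUI

struct ContentView: View {
    var body: some View {
        // Declare what the list should look like; SwiftUI renders it.
        List(0..<3) { index in
            Text("Row \(index)")
                .font(.headline)
                .foregroundColor(.blue)
        }
    }
}
```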
Design tools

During WWDC, the Xcode 11 beta release notes were also published. Xcode 11 beta includes SDKs for iOS 13, macOS 10.15, watchOS 6, and tvOS 13, and supports development with SwiftUI. It supports uploading apps from the Organizer window, and its editors can now be added to any window without needing an Assistant Editor. Also, LaunchServices on macOS now respects the selected Xcode when launching Instruments, Simulator, and other developer tools embedded within Xcode. Using these intuitive new design tools in Xcode 11, SwiftUI can be used to build interfaces with dragging and dropping, dynamic replacement, and previews.

Drag and drop

A developer can arrange components within the user interface by simply dragging controls on the canvas: open an inspector to select font, color, alignment, and other design options, and easily rearrange controls with the cursor. Many of these visual editors are also available within the code editor, and it is possible to drag controls from the library and drop them on the design canvas or directly on the code.

Dynamic replacement

When working in the design canvas, every edit by the developer is completely in sync with the code in the adjoining editor. Xcode recompiles the changes instantly, so a developer can constantly build an app and run it at the same time, like a ‘live app’. With this feature, Xcode can also swap the edited code directly into the live app.

Previews

It is now possible to create one or many previews of any SwiftUI view, supply sample data, and configure almost anything the users can see, such as large fonts, localizations, or dark mode. Code is instantly visible as a preview, and any change made in the preview immediately appears in the code. Previews can also display a UI on any device and in any orientation.

Native on all Apple platforms

SwiftUI has been created in such a way that all controls and platform-specific experiences are included in the code. It allows an app to directly access the features of each platform's previous technologies, with a small amount of code and an interactive design canvas. It can be used to build user interfaces for any Apple device, including iPhone, iPad, iPod touch, Apple Watch, and Apple TV. SwiftUI’s striking features have made developers very excited to try out the framework.

https://twitter.com/stroughtonsmith/status/1135647926439632899
https://twitter.com/fjeronimo/status/1135626395168563201
https://twitter.com/sascha_p/status/1135626257884782592
https://twitter.com/cocoawithlove/status/1135626052678574080

For more details on the SwiftUI framework, head over to the Apple Developers website.

Apple promotes app store principles & practices as good for developers and consumers following rising antitrust worthy allegations
Apple proposes a “privacy-focused” ad click attribution model for counting conversions without tracking users
Apple Pay will soon support NFC tags to trigger payments

Amazon Managed Streaming for Apache Kafka (Amazon MSK) is now generally available

Amrata Joshi
03 Jun 2019
3 min read
Last week, Amazon Web Services announced the general availability of Amazon Managed Streaming for Apache Kafka (Amazon MSK). Amazon MSK makes it easy for developers to build and run applications based on Apache Kafka without having to manage the underlying infrastructure. It is fully compatible with Apache Kafka, which enables customers to easily migrate their on-premises or Amazon Elastic Compute Cloud (Amazon EC2) clusters to Amazon MSK without code changes.

Customers use Apache Kafka for capturing and analyzing real-time data streams from a range of sources, including database logs, IoT devices, financial systems, and website clickstreams. Many customers choose to self-manage their Apache Kafka clusters, and they end up spending considerable time and money securing, scaling, patching, and ensuring high availability for Apache Kafka and Apache ZooKeeper clusters. Amazon MSK offers the attributes of Apache Kafka combined with the availability, security, and scalability of AWS.

Customers can now create Apache Kafka clusters designed for high availability, spanning multiple Availability Zones (AZs), with a few clicks (a CLI sketch appears at the end of this article). Amazon MSK also monitors server health and automatically replaces servers when they fail, and customers can easily scale out cluster storage in the AWS management console to meet changes in demand. Amazon MSK runs the Apache ZooKeeper nodes at no additional cost and provides multiple levels of security for Apache Kafka clusters, including VPC network isolation and AWS Identity and Access Management (IAM). It allows customers to continue to run applications built on Apache Kafka and to use Apache Kafka compatible tools and frameworks.

Rajesh Sheth, General Manager of Amazon MSK at AWS, wrote to us in an email, “Customers who are running Apache Kafka have told us they want to spend less time managing infrastructure and more time building applications based on real-time streaming data.” He further added, “Amazon MSK gives these customers the ability to run Apache Kafka without having to worry about managing the underlying hardware, and it gives them an easy way to integrate their Apache Kafka applications with other AWS services. With Amazon MSK, customers can stand up Apache Kafka clusters in minutes instead of weeks, so they can spend more time focusing on the applications that impact their businesses.”

Amazon MSK is currently available in the US East (Ohio), US East (N. Virginia), US West (Oregon), EU (Ireland), EU (Frankfurt), EU (Paris), EU (London), Asia Pacific (Singapore), Asia Pacific (Tokyo), and Asia Pacific (Sydney) regions, and will expand to additional AWS Regions in the next year.

Amazon rejects all 11 shareholder proposals including the employee-led climate resolution at Annual shareholder meeting
Amazon to roll out automated machines for boxing up orders: Thousands of workers’ job at stake
Amazon resists public pressure to re-assess its facial recognition business; “failed to act responsibly”, says ACLU
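As a rough illustration of standing up a cluster from the CLI (a hedged sketch: the cluster name, Kafka version, and the contents of the broker-node JSON file are placeholders, following the aws kafka command as we understand it at GA):

```
aws kafka create-cluster \
    --cluster-name demo-cluster \
    --kafka-version 2.1.0 \
    --number-of-broker-nodes 3 \
    --broker-node-group-info file://brokernodegroupinfo.json
```

Here brokernodegroupinfo.json would list the broker instance type, client subnets (one per Availability Zone), and security groups.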


Safari Technology Preview release 83 now available for macOS Mojave and macOS High Sierra

Amrata Joshi
03 Jun 2019
2 min read
Last week, the team at WebKit announced that Safari Technology Preview release 83 is now available for macOS Mojave and macOS High Sierra. Safari Technology Preview is a version of Safari for macOS that includes an in-development version of the WebKit browser engine.

What’s new in Safari Technology Preview release 83?

Web authentication

This release comes with web authentication enabled by default on macOS. Web authentication has been changed to cancel the pending request when a new request is made, and to return InvalidStateError to sites whenever authenticators return such an error.

Pointer events

The issue with the isPrimary property of pointercancel events has been fixed, as has the issue with calling preventDefault() on pointerdown.

Rendering

The team has implemented backing-sharing in compositing layers, allowing overlapping layers to paint into the backing store of another layer, and has fixed the rendering of backing-sharing layers with transforms. The issue of layer-related flashing with composited overflow: scroll has also been fixed.

CSS

In this release, ‘clearfix’ behavior via display: flow-root has been implemented (a short sketch appears at the end of this article), as have page-break-* and -webkit-column-break-*. The issue of font-optical-sizing applying the wrong variation value has been fixed, and the CSS grid implementation has been updated.

WebRTC

This release now allows sequential playback of media files, and the issue of video streams freezing has been fixed.

Major bug fixes

The CPU timeline and memory timeline bars have been fixed, as have the colors in the network table waterfall container and the issue with context menu items in the DOM tree.

To know more about this news, check out the release notes.

Chrome, Safari, Opera, and Edge to make hyperlink auditing compulsorily enabled
Safari Technology Preview 71 releases with improvements in Dark Mode, Web Inspector, WebRTC, and more!
Is it time to ditch Chrome? Ad blocking extensions will now only be for enterprise users
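For context on the CSS item above, display: flow-root is the modern replacement for old clearfix hacks (a minimal sketch; the class name is invented):

```css
/* The container establishes a new block formatting context,
   so it grows to contain its floated children without extra markup. */
.media-row {
  display: flow-root;
}
```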


Material-UI v4 releases with CSS specificity improvements, less classes boilerplate, migration to TypeScript, and more

Amrata Joshi
28 May 2019
2 min read
Last week, the team behind Material-UI released Material-UI v4, bringing better CSS specificity behavior, less classes boilerplate, a migration of the demos to TypeScript, and much more. The release of Material-UI v4 was influenced by two major factors: first, the team analyzed the results of the developer survey conducted in March; second, the team wanted to stay up to date with the latest best practices in the React community and with the Material Design specification.

What’s new in Material-UI v4?

CSS specificity

CSS specificity needs to be predictable. By default, Material-UI injects its styles at the end of the <head> element, but styled-components and a few other popular styling solutions inject theirs just before it and therefore lose specificity. To solve this problem, the team has introduced a new prop: injectFirst (a sketch appears at the end of this article).

Classes boilerplate

In v1, the team introduced the classes API to target all the elements, but after observing its use for some time they saw some users struggling: it is challenging to apply the class name on the right element, and it requires boilerplate. To improve the situation, the team changed the class name generation to output global class names, while keeping the classes API working as before.

TypeScript

All the demos have been migrated from JavaScript to TypeScript. The team has also type-checked the demos, which improves their TypeScript test coverage, and fixed many issues during the migration. When writing an application in TypeScript, users can now directly copy and paste the demos without having to convert them or fix the resulting errors.

Improved UX

The team has changed the menu organization to group all the components under a single navigation item, and changed the background color to white to increase text contrast and readability.

Tree shaking with ES modules

This is the first version that supports native tree shaking with ES modules; users can now use destructured imports when importing multiple components.

To know more about this release, check out the post on Medium.

Implementing autocompletion in a React Material UI application [Tutorial]
Applying styles to Material-UI components in React [Tutorial]
Fyne 1.0 released as a cross-platform GUI in Go based on Material Design
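A hedged sketch of the injectFirst prop and destructured imports described above (the component choices are illustrative; it assumes the v4 StylesProvider API as announced):

```jsx
import React from 'react';
// Tree shaking with ES modules: destructured imports of multiple components
import { Button, TextField } from '@material-ui/core';
import { StylesProvider } from '@material-ui/styles';

export default function App() {
  return (
    // injectFirst moves Material-UI's styles to the top of <head>,
    // so styled-components and similar solutions win on CSS specificity.
    <StylesProvider injectFirst>
      <TextField label="Name" />
      <Button color="primary">Save</Button>
    </StylesProvider>
  );
}
```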


Facebook releases Pythia, a deep learning framework for vision and language multimodal research

Amrata Joshi
22 May 2019
2 min read
Yesterday, the team at Facebook released Pythia, a deep learning framework that supports multitasking in vision and language multimodal research. Pythia is built on the open-source PyTorch framework and enables researchers to easily build, reproduce, and benchmark AI models.

https://twitter.com/facebookai/status/1130888764945907712

It is designed for vision and language tasks, such as answering questions related to visual data and automatically generating image captions. The framework also incorporates elements of Facebook’s winning entries in recent AI competitions, including the VQA Challenge 2018 and the VizWiz Challenge 2018.

Features of Pythia

Reference implementations: Pythia includes reference implementations that show how previous state-of-the-art models achieved related benchmark results.

Performance gauging: It helps in gauging the performance of new models.

Multitasking: Pythia supports multitasking and distributed training.

Datasets: It includes built-in support for various datasets, including VizWiz, VQA, TextVQA and VisualDialog.

Customization: Pythia offers custom losses, metrics, scheduling, optimizers, and tensorboard support, as per users’ needs.

Unopinionated: Pythia is unopinionated about the dataset and model implementations that are built on top of it.

The goal of the team behind Pythia is to accelerate AI models and their results, and to make it easier for the AI community to build on, and benchmark against, successful systems. The team hopes that Pythia will also help researchers develop adaptive AI that synthesizes multiple kinds of understanding into a more context-based, multimodal understanding. The team also plans to continue adding tools, datasets, tasks, and reference models.

To know more about this news, check out the official Facebook announcement.

Facebook tightens rules around live streaming in response to the Christchurch terror attack
Facebook again, caught tracking Stack Overflow user activity and data
Facebook bans six toxic extremist accounts and a conspiracy theory organization

.NET Core releases May 2019 updates

Amrata Joshi
15 May 2019
3 min read
This month, during Microsoft Build 2019, the team behind .NET Core announced that .NET 5 will be coming in 2020. Yesterday, the .NET Core team released the .NET Core May 2019 updates for versions 1.0.16, 1.1.14, 2.1.11 and 2.2.5. The updates include security and reliability fixes, and updated packages.

Security updates in .NET Core

.NET Core tampering vulnerability (CVE-2019-0820)

A denial of service vulnerability exists when .NET Core improperly processes RegEx strings. An attacker who successfully exploits this vulnerability can cause a denial of service against a .NET application; even a remote, unauthenticated attacker can exploit it by issuing specially crafted requests to a .NET Core application. This update addresses the vulnerability by correcting how .NET Core applications handle RegEx string processing (a defensive sketch against this class of attack appears at the end of this article). The security advisory provides information about the vulnerability in .NET Core 1.0, 1.1, 2.1 and 2.2.

Denial of service vulnerability in .NET Core and ASP.NET Core (CVE-2019-0980 & CVE-2019-0981)

A denial of service vulnerability exists when .NET Core and ASP.NET Core improperly handle web requests. An attacker who successfully exploits this vulnerability can cause a denial of service against a .NET Core or ASP.NET Core application. The vulnerability can be exploited remotely and without authentication, by issuing specially crafted requests to the application. This update addresses the vulnerability by correcting how .NET Core and ASP.NET Core web applications handle web requests. The security advisory provides information about the two vulnerabilities (CVE-2019-0980 & CVE-2019-0981) in .NET Core and ASP.NET Core 1.0, 1.1, 2.1, and 2.2.

ASP.NET Core denial of service vulnerability (CVE-2019-0982)

A denial of service vulnerability exists when ASP.NET Core improperly handles web requests. An attacker who successfully exploits this vulnerability can cause a denial of service against an ASP.NET Core web application. The vulnerability can be exploited remotely and without authentication, by issuing specially crafted requests to the application. This update addresses the vulnerability by correcting how the ASP.NET Core web application handles web requests. The security advisory provides information about the vulnerability (CVE-2019-0982) in ASP.NET Core 2.1 and 2.2.

Docker images

The .NET Docker images have been updated, along with the microsoft/dotnet, microsoft/dotnet-samples, and microsoft/aspnetcore repos.

Users can get the latest .NET Core updates on the .NET Core download page. To know more about this news, check out the official announcement.

.NET 5 arriving in 2020!
Docker announces collaboration with Microsoft’s .NET at DockerCon 2019
.NET for Apache Spark Preview is out now!
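Patching aside, a standard defensive measure against this class of regex denial of service is to bound match time. A minimal C# sketch (the pattern, input, and timeout are illustrative, not taken from the advisory):

```csharp
using System;
using System.Text.RegularExpressions;

class RegexTimeoutDemo
{
    static void Main()
    {
        // A match timeout turns a pathological backtracking case into a
        // catchable exception instead of a hung request.
        var regex = new Regex(@"^(\w+\s?)*$",
                              RegexOptions.None,
                              TimeSpan.FromMilliseconds(250));
        try
        {
            Console.WriteLine(regex.IsMatch("some untrusted input"));
        }
        catch (RegexMatchTimeoutException)
        {
            Console.WriteLine("Regex evaluation timed out; rejecting input.");
        }
    }
}
```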


GraalVM 19.0 releases with Java 8 SE compliant Java Virtual Machine, and more!

Bhagyashree R
13 May 2019
2 min read
Last week, the team behind GraalVM announced the release of GraalVM 19.0. This is the first production release, and it comes with early adopter Windows support, a class initialization update in GraalVM Native Image, a Java 8 SE compliant Java Virtual Machine, and more.

https://twitter.com/graalvm/status/1126607204860289024

GraalVM is a polyglot virtual machine that allows users to run applications written in JavaScript, Python, Ruby, R, JVM-based languages like Java, Scala, Kotlin, and Clojure, and LLVM-based languages such as C and C++ (a small polyglot sketch appears at the end of this article).

Updates in GraalVM 19.0

GraalVM Native Image

GraalVM Native Image compiles Java code ahead-of-time into a standalone executable called a native image. It is currently available as an early adopter plugin, which you can install by executing the ‘gu install native-image’ command. This release changes how classes are initialized in a native image: application classes are now initialized at runtime by default, while all JDK classes are initialized at build time. The change was made to improve the user experience, as it eliminates the need to write substitutions and to deal with instances of unsupported classes ending up in the image heap.

Early adopter Windows support

With this release, early adopter builds for Windows users are also available. These builds include the JDK with the GraalVM compiler enabled, Native Image capabilities, GraalVM’s JavaScript engine, and the developer tools.

Java 8 SE compliant Java VM

This release comes with a Java 8 SE compliant Java Virtual Machine, based on OpenJDK 1.8.0_212.

Read also: No more free Java SE 8 updates for commercial use after January 2019

Node.js with polyglot capabilities

This release ships Node.js with polyglot capabilities, based on Node.js 10.15.2. With these capabilities, you can leverage Java or Scala libraries from Node.js and use Node.js modules in Java applications.

JavaScript engine compliant with ECMAScript 2019

GraalVM 19.0 comes with a JavaScript engine compliant with the latest ECMAScript 2019 standard. You can now migrate from the Rhino or Nashorn JavaScript engines, which are no longer maintained, to GraalVM’s JavaScript engine, which is compatible with the latest standards.

Check out the GraalVM 19.0 release notes for more details.

OpenJDK team’s detailed message to NullPointerException and explanation in JEP draft
Introducing Node.js 12 with V8 JavaScript engine, improved worker threads, and much more
What’s new in ECMAScript 2018 (ES9)?
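To illustrate the polyglot claim, here is a minimal sketch using the GraalVM SDK's polyglot API to evaluate JavaScript from Java (the class name is invented; it assumes the program runs on the GraalVM 19.0 JDK):

```java
import org.graalvm.polyglot.Context;
import org.graalvm.polyglot.Value;

public class HelloPolyglot {
    public static void main(String[] args) {
        // A Context hosts guest languages; "js" is GraalVM's JavaScript engine.
        try (Context context = Context.create()) {
            Value result = context.eval("js", "[1, 2, 3].reduce((a, b) => a + b)");
            System.out.println("js says: " + result.asInt()); // js says: 6
        }
    }
}
```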


Introducing SwiftWasm, a tool for compiling Swift to WebAssembly

Bhagyashree R
13 May 2019
2 min read
Attempts at porting Swift to WebAssembly have been going on for a long time, and finally a team of developers has come up with SwiftWasm, which was released last week. With this tool, you can now run your Swift code on the web by compiling it to WebAssembly.

https://twitter.com/swiftwasm/status/1127324144121536512

The SwiftWasm tool is built on top of the WASI SDK, a WASI-enabled C/C++ toolchain. This makes the WebAssembly executables generated by SwiftWasm work both in browsers and in standalone WebAssembly runtimes such as Wasmtime, Fastly’s Lucet, or any other WASI-compatible WebAssembly runtime.

How can you work with SwiftWasm?

While macOS does not need any dependencies to be installed, some dependencies are needed on Ubuntu and Windows.

On Ubuntu, install libatomic1:

sudo apt-get install libatomic1

On Windows, first install the Windows Subsystem for Linux, then install the libatomic1 library.

The next step is to compile with SwiftWasm by running:

./swiftwasm example/hello.swift hello.wasm

To run the resulting hello.wasm file, go to the SwiftWasm polyfill and upload the file; you will see the output in the textbox. The polyfill supports Firefox 66, Chrome 74, and Safari 12.1.

The news of having a tool for running Swift on the web has got many developers excited.

https://twitter.com/pvieito/status/1127620197668487169
https://twitter.com/johannesweiss/status/1126913408455053312
https://twitter.com/jedisct1/status/1126909145926569986

The project is still a work in progress and thus has some limitations. Currently, only the Swift stdlib is compiled; other libraries such as Foundation or SwiftPM are not included. A few functions, such as Optional.map, do not work because of the calling convention differences between throwing and non-throwing closures.

If you want to contribute to this project, check out its pull request on Swift’s GitHub repository to know more about its current status. You can try SwiftWasm on its official website.

Swift is improving the UI of its generics model with the “reverse generics” system
Swift 5 for Xcode 10.2 is here!
Implementing Dependency Injection in Swift [Tutorial]

Introducing TensorFlow Graphics packed with TensorBoard 3D, object transformations, and much more

Amrata Joshi
10 May 2019
3 min read
Yesterday, the team at TensorFlow introduced TensorFlow Graphics. A computer graphics pipeline requires 3D objects and their positioning in the scene, a description of the material they are made of, lights, and a camera. This scene description is then interpreted by a renderer to generate a synthetic rendering. In contrast, a computer vision system starts from an image and tries to infer the parameters of the scene, allowing the prediction of which objects are in the scene, what materials they are made of, and their three-dimensional position and orientation.

Developers usually require large quantities of data to train machine learning systems capable of solving these complex 3D vision tasks. As labelling data is an expensive and complex process, it is better to have mechanisms for designing machine learning models that can comprehend the three-dimensional world while being trained without much supervision. Combining computer vision and computer graphics techniques lets us leverage the vast amounts of readily available unlabelled data. For instance, this can be achieved through analysis by synthesis, where the vision system extracts the scene parameters and the graphics system renders back an image based on them; if the rendering matches the original image, the vision system has accurately extracted the scene parameters. In this setup, computer vision and computer graphics go hand in hand, forming a single machine learning system, similar to an autoencoder, that can be trained in a self-supervised manner.

Image source: TensorFlow

We will now explore some of the functionalities of TensorFlow Graphics.

Object transformations

Object transformations control the position of objects in space. The axis-angle formalism is used for rotating a cube: with the rotation axis pointing up and a positive angle, the cube rotates counterclockwise (a short sketch appears at the end of this article). This task is also at the core of many applications, including robots that focus on interacting with their environment.

Modelling cameras

Camera models play a crucial role in computer vision, as they influence the appearance of three-dimensional objects projected onto the image plane. For more details about camera models and a concrete example of how to use them in TensorFlow, check out the Colab example.

Material models

Material models define how light interacts with objects to give them their unique appearance. Some materials, like plaster, reflect light uniformly in all directions, while others, like mirrors, do not. Users can now play with the parameters of the material and the light to develop a good sense of how they interact.

TensorBoard 3D

TensorFlow Graphics features a TensorBoard plugin to interactively visualize 3D meshes and point clouds. This also makes visual debugging possible, which helps assess whether an experiment is going in the right direction.

To know more about this news, check out the post on Medium.

TensorFlow 1.13.0-rc2 releases!
TensorFlow 1.13.0-rc0 releases!
TensorFlow.js: Architecture and applications
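A hedged sketch of the axis-angle rotation described above (the point, axis, and angle values are illustrative; it assumes the tensorflow_graphics pip package and its axis_angle transformation module):

```python
import numpy as np
import tensorflow as tf
from tensorflow_graphics.geometry.transformation import axis_angle

# Rotate a point 90 degrees counterclockwise around the up (z) axis.
point = tf.constant([[1.0, 0.0, 0.0]])  # shape [1, 3]
axis = tf.constant([[0.0, 0.0, 1.0]])   # unit-length rotation axis
angle = tf.constant([[np.pi / 2.0]])    # positive angle, in radians

rotated = axis_angle.rotate(point, axis, angle)
print(rotated)  # approximately [[0.0, 1.0, 0.0]]
```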


DifferentialEquations.jl v6.4.0 released with GPU support in ODE solvers, linsolve defaults, and much more!

Amrata Joshi
10 May 2019
3 min read
Yesterday, the team behind JuliaDiffEq released DifferentialEquations.jl v6.4.0, a suite for numerically solving differential equations in Julia. This release gives users the ability to use ODE solvers on the GPU, with automated tooling for faster broadcast, matrix-free Newton-Krylov, better Jacobian re-use algorithms, memory use reduction, and more.

What’s new in DifferentialEquations.jl v6.4.0?

Full GPU support in ODE solvers

With this release, the stiff ODE solvers allow expensive calculations, like those in neural ODEs or PDE discretizations, to utilize GPU acceleration. The release also allows the initial condition to be a GPUArray, where the internal methods don’t perform any indexing, allowing all computations to take place on the GPU without data transfers (a small sketch appears at the end of this article).

Fast DiffEq-specific broadcast

This release comes with a broadcast wrapper that allows all sorts of information to be passed to the compiler in the differential equation solver’s internals. It makes a set of no-aliasing and sizing assumptions that are normally not possible, letting the internals use a special `@..` macro, which also turns out to be faster than standard loops.

Smart linsolve defaults

This release comes with smarter linsolve defaults, which automatically detect the BLAS installation and utilize RecursiveFactorization.jl, speeding things up for ODEs. Users can rely on the linear solver to automatically switch to a form that works for sparse Jacobians; even banded matrices and Jacobians on the GPU are now handled automatically.

Automated J*v products via autodifferentiation

Users can now use GMRES easily, without needing to construct the full Jacobian matrix: directional derivatives in the direction of v are used to compute J*v.

Performance improvements

The performance of all implicit methods, like KenCarp4, has been improved. DiffEqBiological.jl can now handle large reaction networks, parsing the networks much faster and building Jacobians that utilize sparse matrices, though there is still plenty of room for improvement.

Partial neural ODEs

This release comes with many improvements and gives a glimpse of working examples of partial neural differential equations, that is, equations that have pre-specified portions. These equations allow for batched data and GPU acceleration.

Memory optimization

This release brings memory optimizations for low-memory Runge-Kutta methods aimed at hyperbolic or advection-dominated PDEs. These methods now use the minimal number of registers required by the method, so large PDE discretizations can make use of DifferentialEquations.jl without loss of memory efficiency.

Robust callbacks

The team has introduced a ContinuousCallback implementation with increased robustness in double event detection.

To know more about this news, check out the official announcement.

The solvers – these great unknown
Moving Further with NumPy Modules
How to build an options trading web app using Q-learning
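A hedged sketch of the GPU path (the ODE and array sizes are invented; it assumes CuArrays.jl is installed and a CUDA-capable GPU is present):

```julia
using OrdinaryDiffEq, CuArrays

# The initial condition is a GPUArray; the solver internals avoid indexing,
# so the whole computation stays on the GPU with no data transfers.
u0 = cu(rand(Float32, 1000))

# Simple linear decay, written fully broadcasted (in-place)
f(du, u, p, t) = (du .= -0.5f0 .* u)

prob = ODEProblem(f, u0, (0.0f0, 1.0f0))
sol = solve(prob, Tsit5())
```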