
Tech News - Programming

573 Articles

GitHub has blocked an Iranian software developer's account

Richard Gall
25 Jul 2019
3 min read
GitHub's importance to software developers can't be overstated. In the space of a decade it has become central to millions of people's professional lives. For it to be taken away, then, must be incredibly hard to take. Not only does it cut you off from your work, it also cuts you off from your identity as a developer. But that's what appears to have happened today to Hamed Saeedi, an Iranian software developer.

Writing on Medium, Saeedi revealed that he received an email from GitHub today explaining that his account had been restricted "due to U.S. trade controls law restrictions." As Saeedi notes, he is not a paying GitHub customer and only uses the platform's free services, which makes the fact that he was flagged by the platform surprising. Does GitHub really think a developer is building dangerous software in a public repo?

Digging into the terms and conditions around U.S. trade laws, Saeedi found a paragraph stating that the platform cannot "...be used for services prohibited under applicable export control laws, including purposes related to the development, production, or use of nuclear, biological, or chemical weapons or long range missiles or unmanned aerial vehicles." The implication, in Saeedi's reading at least, is that he is using GitHub for precisely that.

The impact of this move on Saeedi is massive. The incident has echoes of Slack terminating Iranian users' accounts at the end of 2018, but, as one Twitter user noted, this is even more critical because "GitHub is hosting all the efforts of a programmer/engineer."

How have GitHub and the developer community responded?

GitHub hasn't, as of writing, responded publicly to the incident. However, it would be reasonable to assume that the organization would lean heavily on existing trade sanctions against Iran as an explanation for its actions. The ethical and moral implications of that notwithstanding, it's a move that would protect the company.

Given increased scrutiny of the geopolitical impact of technology, and current Iran/U.S. tensions, perhaps it isn't that surprising. But it has received condemnation from a number of developers on Twitter. One commented on the need to break up GitHub's monopoly, while another suggested that the incident emphasised the importance of #deletegithub, a small movement that sees GitHub (and other ostensibly 'free' software) as compromised and failing to live up to the ideals of free and open source software. Mikhail Novikov, a developer on the GatsbyJS team, had words of solidarity for Saeedi, reading the situation in the context of the U.S. President's rhetoric towards Iran:

https://twitter.com/freiksenet/status/1154297497290006528?s=20

It appears that other Iranian users have been affected in the same way; however, it remains unclear to what extent GitHub has been restricting Iranian accounts.


Julia announces the preview of multi-threaded task parallelism in alpha release v1.3.0

Vincy Davis
24 Jul 2019
5 min read
Yesterday, the Julia team announced the alpha release of v1.3.0, an early preview of Julia version 1.3.0, which is expected to be out in a couple of months. The alpha release includes a preview of a new threading interface for Julia programs: multi-threaded task parallelism. The task parallelism model allows parts of a program to be marked for parallel execution as 'tasks', which the runtime schedules concurrently on the available threads. This works much like a garbage collection (GC) model, in that users can freely spawn millions of tasks without worrying about how the libraries they call are implemented. This portable model applies across all Julia packages.

Read Also: Getting started with Z Garbage Collector (ZGC) in Java 11 [Tutorial]

Jeff Bezanson and Jameson Nash from Julia Computing, and Kiran Pamnany from Intel, say that Julia's task parallelism is "inspired by parallel programming systems like Cilk, Intel Threading Building Blocks (TBB) and Go". With multi-threaded task parallelism, Julia can schedule many parallel tasks that call library functions without overcrowding the CPUs with threads. This is an important feature for high-level languages, which call library functions frequently.

Challenges in implementing task parallelism

Allocating and switching task stacks: Each task requires its own execution stack, distinct from the usual process or thread stacks provided by Unix operating systems. Julia has an alternate implementation of stack switching which trades time for memory when a task switches; however, it may not be compatible with foreign code that uses cfunction. This implementation is used when stacks would consume a large address space.

Event loop wake-ups: If a thread needs the event loop thread to wake up, it issues an async signal. This may be because another thread has scheduled new work, a thread is beginning to run garbage collection, or a thread wants to take the I/O lock to perform I/O.

Task migration across system threads: In general, a task may start running on one thread, block for a while, and then restart on another. However, because Julia uses thread-local variables every time memory is allocated internally, a task currently always runs on the thread it first started on. To support this, Julia uses the concept of a sticky task, which must run on a given thread, with per-thread queues for the running tasks associated with each thread.

Sleeping idle threads: To avoid keeping CPUs at 100% usage all the time, idle threads are put to sleep. This can lead to a synchronization problem, as some threads might be scheduled new work while others are asleep.

Scheduler overhead: When a task blocks, the scheduler is called to pick another task to run. But on what stack does that code run? It is possible to have a dedicated scheduler task; however, it may cause less overhead if the scheduler code runs in the context of the recently-blocked task. One suggested measure is to pull a task out of the scheduler queue so as to avoid switching away at all.

Classic bugs: The Julia team faced many difficult bugs while implementing multi-threaded functionality. One of the many bugs was a mysterious one on Windows that was fixed by flipping a single bit.

Future goals for Julia version 1.3.0:
- improve the performance of task switches and I/O latency
- allow task migration
- use multiple threads in the compiler
- improve debugging tools
- provide alternate schedulers

Developers are impressed with the new multi-threaded parallelism functionality. A user on Hacker News comments, "Great to see this finally land - thanks for all the team's work. Looking forward to giving it a whirl. Threading is something of a prerequisite for acceptance as a serious language among many folks. So great to not just check that box, but to stick the pen right through it. The devil is always in the details, but from the doc the interface looks pretty nice." Another user says, "This is huge! I was testing out the master branch a few days ago and the parallelism improvements were amazing."

Many users expect Julia to challenge Python in the future. A comment on Hacker News reads, "Not only is this huge for Julia, but they've just thrown down the gauntlet. The status quo has been upset. I expect Julia to start eating everyone's lunch starting with Python. Every language can use good concurrency & parallelism support and this is the biggest news for all dynamic languages." Another user says, "I worked in a computational biophysics department with lots of python/bash/R and I was the only one who wrote lots of high-performance code in Julia. People were curious about the language but it was still very much unknown. Hope we will see a broader adoption of Julia in the future - it's just that it is much better for the stuff we do on a daily basis."

To learn how to implement task parallelism in Julia, head over to the Julia blog.

Mozilla is funding a project for bringing Julia to Firefox and the general browser environment
Announcing Julia v1.1 with better exception handling and other improvements
Julia for machine learning. Will the new language pick up pace?


TypeScript 3.6 beta is now available!

Amrata Joshi
23 Jul 2019
2 min read
Last week, the team behind TypeScript announced the availability of TypeScript 3.6 Beta. The full release of TypeScript 3.6 is scheduled for the end of next month, with a Release Candidate coming a few weeks prior.

What's new in TypeScript 3.6?

Stricter checking: TypeScript 3.6 comes with stricter checking for iterators and generator functions. Earlier versions didn't let users of generators differentiate whether a value was yielded or returned from a generator. With TypeScript 3.6, users can narrow down values from iterators while dealing with them.

Simpler emit: The emit for constructs like for/of loops and array spreads can be a bit heavy, so TypeScript opts for a simpler emit by default that supports array types, and helps in iterating on other types using the --downlevelIteration flag. With this flag, the emitted code is more accurate, but larger.

Semicolon-aware code edits: Older versions of TypeScript added semicolons to the end of every statement, which many users didn't appreciate as it conflicted with their style guidelines. TypeScript 3.6 detects whether a file uses semicolons when applying edits, and if a file lacks semicolons, TypeScript doesn't add one.

DOM updates: The following are a few of the declarations that have been removed or changed within lib.dom.d.ts: WindowOrWorkerGlobalScope is used instead of GlobalFetch; non-standard properties on Navigator no longer exist; and webgl or webgl2 is used instead of the experimental-webgl context.

To know more about this news, check out the official post.

Next.js 9 releases with built in zero-config TypeScript support, automatic static optimization, API routes and more
TypeScript 3.5 releases with 'omit' helper, improved speed, excess property checks and more
Material-UI v4 releases with CSS specificity, Classes boilerplate, migration to Typescript and more


GitHub services experienced a 41-minute disruption yesterday

Bhagyashree R
23 Jul 2019
4 min read
Update: Yesterday, the GitHub team stated in a blog post what its initial investigation has uncovered: "On Monday at 3:46 pm UTC, several services on GitHub.com experienced a 41-minute disruption, and as a result, some services were degraded for a longer period. Our initial investigation suggests a logic error introduced into our deployment pipeline manifested during a subsequent and unrelated deployment of the GitHub.com website. This chain of events destabilized a number of internal systems, complicated our recovery efforts, and resulted in an interruption of service."

It was not a very productive Monday for many developers when GitHub started showing 500 and 422 error codes on their repositories. Several services on GitHub were down yesterday for 41 minutes from around 15:46 UTC. GitHub engineers soon began their investigation, and all services were back to normal by 19:47 UTC.

https://twitter.com/githubstatus/status/1153391172167114752

The outage affected GitHub services including Git operations, API requests, and Gist, among others. The experiences developers reported were quite inconsistent. Some said that though they were able to open the main repo page, they could not see the commit log or PRs. Others reported that all git commands requiring interaction with GitHub's remotes failed. A developer commented on Hacker News, "Git is fine, and the outage does not affect you and your team if you already have the source tree anywhere. What it does affect is the ability to do code reviews, work with issues, maybe even do releases. All the non-DVCS stuff."

GitHub is yet to share the cause and impact of the downtime. However, developers took to different discussion forums to share what they think the reason behind the outage could be. While some speculated that it might be GitHub's growing user base, others believed GitHub might still be moving "stuff to Azure after the acquisition."

Developers also discussed what steps they can take so that such outages do not affect their workflow in the future. One developer suggested not relying on a single point of failure, by setting two different URLs for the same remote so that a single push command pushes to both. You can do something like this, the developer suggested:

git remote set-url --add --push origin git@github.com:Foo/bar.git
git remote set-url --add --push origin git@gitlab.com:Foo/bar.git

Another developer suggested, "I highly recommend running at least a local, self-hosted git mirror at any tech company, just in these cases. Gitolite + cgit is extremely low maintenance, especially if you host them next to your other production services. Not to mention, if you get the self-hosted route you can use Gerrit, which is still miles better for code review than GitHub, Gitlab, bitbucket and co."

Others joked that this was a good opportunity to take a few hours' break and relax. "This is the perfect time to take a break. Kick back, have a coffee, contemplate your life choices. That commit can wait, that PR (i was about to merge) can wait too. It's not the end of the world," a developer commented.

Lately, we are seeing many cases of outages. Earlier this month, almost all of Apple's iCloud services were down for some users. On July 2, Cloudflare suffered a major outage due to a massive spike in CPU utilization in the network. Last month, Google Calendar was down for nearly three hours around the world. In May, Facebook and its family of apps WhatsApp, Messenger, and Instagram faced another outage in a row. Last year, GitHub faced issues due to a failure in its data storage system, which left the site broken for a complete day.

Several developers took to Twitter to kill time and vent their frustration:

https://twitter.com/jameskbride/status/1153332862587944960
https://twitter.com/BobString/status/1153329356284055552
https://twitter.com/pikesley/status/1153332278774439941
https://twitter.com/francesc/status/1153336190390550528

Cloudflare RCA: Major outage was a lot more than "a regular expression went bad"
EU's satellite navigation system, Galileo, suffers major outage; nears 100 hours of downtime
Twitter experienced major outage yesterday due to an internal configuration issue


“Why was Rust chosen for Libra?”, US Congressman questions Facebook on Libra security design choices

Sugandha Lahoti
22 Jul 2019
6 min read
Last month, Facebook announced that it's going to launch its own cryptocurrency, Libra, along with Calibra, a payment platform that sits on top of the cryptocurrency, unveiling its plans to develop an entirely new ecosystem for digital transactions. It also developed a new programming language, "Move", for implementing custom transaction logic and "smart contracts" on the Libra Blockchain. Move's implementation is written entirely in Rust. Although Facebook's announcement garnered massive media attention and attracted investors and partners the likes of PayPal, loan platform Kiva, Uber, and Lyft, it had its own share of concerns. The US administration is worried about a non-governmental currency in the hands of big tech companies. In early July, the US Congress asked Facebook to suspend the implementation of Libra until the ramifications were investigated.

Last week, at the U.S. House Committee on Financial Services hearing investigating Libra's security-related challenges, Congressman Denver Riggleman posed an important question to David Marcus, head of Calibra, asking why the Rust language was chosen for Libra.

Riggleman: I was really surprised about the Rust language. So my first question is, why was the Rust language chosen as the implementation language for Libra? Do you believe it's mature enough to handle the security challenges that will affect these large cryptocurrency transactions?

Marcus: The Libra association will own the repository for the code. While there are many flavors and branches being developed by third parties, only safe and verified code will actually be committed to the actual Libra code base, which is going to be under the governance of the Libra association.

Riggleman: It looks like Libra was built on the nightly build of the Rust programming language. It's interesting because that's not how we did releases at the DoD. What features of Rust are only available in the nightly build that aren't in the official releases of Rust? Does Facebook see it as a concern that they are dependent on unofficially released features of the Rust programming language? Why the nightly releases? Do you see this as a function of the prototyping phase of this?

Marcus: Congressman, I don't have the answers to your very technical questions, but I commit that we will get back to you with more details on your questions.

Marcus appeared before two US congressional hearing sessions last week, where he was constantly grilled by legislators. The grilling led to a dramatic alteration in the strategy of Libra. Marcus has clarified that Facebook won't move forward with Libra until all concerns are addressed. Facebook's original vision for Libra was an open and largely decentralized network beyond the reach of regulators; instead, regulatory compliance would be the responsibility of exchanges, wallets, and other services in the Libra Association. After the hearing, Marcus stated that the Libra Association would have a deliberately limited role in regulatory matters. Per Ars Technica, Calibra would follow US regulations on consumer protection, money laundering, sanctions, and so forth. But Facebook didn't seem to have plans for the Libra Association, Facebook, or any associated entity to police illegal activity on the Libra network as a whole.

This video clipping sparked quite a discussion on Hacker News and Reddit, with people applauding the Q&A session. Some appreciated that legislators are now asking tough questions like these: "It's cool to see a congressman who has this level of software dev knowledge and is asking valid questions." "Denver Riggleman was an Air Force intelligence officer for 11 years, then he became an NSA contractor. I'm not surprised he's asking reasonable questions." "I don't think I've ever heard of a Congressman going to GitHub, poking around in some open source code, and then asking very cogent and relevant questions about it. This video is incredible if only because of that."

Others commented on why Congress may have trust issues with using a young programming language like Rust for something like Libra, which requires layers of privacy and security measures. "Traditionally, government people have trust issues with programming languages as the compiler is, itself, an attack vector. If you are using a nightly release of the compiler, it may be assumed by some that the compiler is not vetted for security and could inject unstable or malicious code into another critical codebase. Also, Rust is considered very young for security type work, people rightly assume there are unfound weaknesses due to the newness of the language and related libraries", reads one comment from Hacker News. Another adds, "Governments have issues with non-stable code because it changes rapidly, is untested and a security risk. Facebook moves fast and break things."

Rust was declared the most loved programming language by developers in the Stack Overflow survey 2019. This year, most major platforms have jumped on the bandwagon of writing or rewriting their components in Rust. Last month, after the release of Libra, Calibra tech lead Ben Maurer took to Reddit to explain why Facebook chose Rust. Per Maurer, "As a project where security is a primary focus, the type-safety and memory-safety of Rust were extremely appealing. Over the past year, we've found that even though Rust has a high learning curve, it's an investment that has paid off. Rust has helped us build a clean, principled blockchain implementation. Part of our decision to choose Rust was based on the incredible momentum this community has achieved. We'll need to work together on challenges like tooling, build times, and strengthening the ecosystem of 3rd-party crates needed by security-sensitive projects like ours."

Not just Facebook: last week, Microsoft announced plans to replace some of its C and C++ code with Rust, calling it a "modern safer system programming language" with great memory safety features. In June, the Brave ad-blocker also released a new engine written in Rust, which gives 69x better performance. Airbnb has introduced PyOxidizer, a Python application packaging and distribution tool written in Rust.

"I'm concerned about Libra's model for decentralization", says co-founder of Chainspace, Facebook's blockchain acquisition
Facebook launches Libra and Calibra in a move to seriously disrupt the financial sector
Facebook releases Pythia, a deep learning framework for vision and language multimodal research


To create effective API documentation, know how developers use it, says ACM

Bhagyashree R
19 Jul 2019
5 min read
Earlier this year, the Association for Computing Machinery (ACM), in its January 2019 issue of Communication Design Quarterly (CDQ), discussed how developers use API documentation when getting into a new API, and suggested a few guidelines for writing effective API documentation.

Application Programming Interfaces (APIs) are standardized and documented interfaces that allow applications to communicate with each other without having to know how they are implemented. Developers often turn to API references, tutorials, example projects, and other resources to understand how to use them in their projects. To support the learning process effectively and inform optimized API documentation, the study tried to answer the following questions: Which information resources offered by the API documentation do developers use, and to what extent? What approaches do developers take when they start working with a new API? What aspects of the content hinder efficient task completion?

API documentation and content categories used in the study

The study was done on 12 developers (11 male and 1 female), who were asked to solve a set of pre-defined tasks using an unfamiliar public API. To solve these tasks, they were allowed to refer only to the documentation published by the API provider. The participants used the API documentation about 49% of the time while solving the tasks. On an individual level, there was not much variation, with the means for all but two participants ranging between 41% and 56%. The most used content category was the API reference, followed by the Recipes page. The aggregate time spent on the Recipes and Samples categories was almost equal to the time spent on the API reference. The Concepts page, however, was used less often than the API reference.

Source: ACM

"These findings show that the API reference is an important source of information, not only to solve specific programming issues when working with an API developers already have some experience with, but even in the initial stages of getting into a new API, in line with Meng et al. (2018)," the study concludes.

How do developers learn a new API?

The researchers observed two different problem-solving behaviors, very similar to the opportunistic and systematic developer personas discussed by Clarke (2007). Developers with the opportunistic approach tried to solve the problem in an "exploratory fashion". They were more intuitive, open to making errors, and often tried solutions without double-checking the documentation. This group did not invest much time in getting a general overview of the API before starting the first task, and preferred fast, direct access to information over reading large sections of the documentation.

On the contrary, developers with the systematic approach tried to first get a deeper understanding of the API before using it. They took some time to explore the API and prepare the development environment before starting the first task. This group attempted to follow the proposed processes and suggestions closely. They were also able to notice parts of the documentation that were not directly relevant to the given task.

What aspects of API documentation make it hard for developers to complete tasks efficiently?

Lack of transparent navigation and search function: Some participants felt that the API documentation lacked a consistent system of navigation aids and did not offer side navigation, including within-page links. Developers often required a search function when they were missing a particular piece of information, such as a term they did not know. As the documentation used in the test did not offer a search field, developers had to use a simple page search instead, which was often unsuccessful.

Issues with high-level structuring of API documentation: The participants observed several problems in the high-level structuring of the API documentation, that is, the split of information into Concepts, Samples, API reference, and so on. For instance, when searching for a particular piece of information, participants sometimes found it difficult to decide which content category to select. It was particularly unclear how the content provided in Samples and Recipes was distinct.

Unable to reuse code examples: Most of the time, participants developed their solution using the sample code provided in the documentation. However, efficient use of sample code was hindered by placeholders in the code referencing some other code example.

A few guidelines for writing effective API documentation

Organizing the content according to API functionality: The API documentation should be divided into categories that reflect the functionality or content domain of the API. Participants would have found it more convenient if, instead of dividing the documentation into "Samples," "Concepts," "API reference" and "Recipes," the API used categories such as "Shipment Handling," "Address Handling" and so on.

Enabling efficient access to relevant content: While designing API documentation, it is important to take specific measures to improve access to content that is relevant to the task at hand. This can be done by organizing the content according to API functionality, presenting conceptual information integrated with related tasks, and providing transparent navigation and a powerful search function.

Facilitating initial entry into the API: For this, you need to identify appropriate entry points into the API and relate particular tasks to specific API elements. Provide clean and working code examples, provide relevant background knowledge, and connect concepts to code.

Supporting different development strategies: While creating API documentation, you should also keep in mind the different strategies developers adopt when approaching a new API. Both the content and the way it is presented should serve the needs of both opportunistic and systematic developers.

These were some observations and implications from the study. To know more, read the paper: How Developers Use API Documentation: An Observation Study.

GraphQL API is now generally available
Best practices for RESTful web services: Naming conventions and API Versioning [Tutorial]
Stripe's API suffered two consecutive outages yesterday causing elevated error rates and response times

Introducing Abscissa, a security-oriented Rust application framework by iqlusion

Bhagyashree R
19 Jul 2019
2 min read
Earlier this month, iqlusion, an infrastructure provider for next-generation cryptocurrency technologies, announced the release of Abscissa 0.1, a security-oriented microframework for building Rust applications. Yesterday, the team announced the release of Abscissa 0.2. Tony Arcieri, the co-founder of iqlusion, wrote in a blog post, "After releasing v0.1, we've spent the past few weeks further polishing it up in tandem with this blog post, and just released a follow-up v0.2."

iqlusion decided to create the Abscissa framework after developing many Rust applications, ranging from CLIs to network services, and managing a lot of the same copy/paste boilerplate across them. The framework aims to maximize functionality while minimizing the number of dependencies.

What features does Abscissa come with?

Command-line option parsing: Abscissa comes with a simple declarative option parser based on the gumdrop crate. The option parser includes several improvements that provide better UX and tighter integration with the other parts of the framework, for example, overriding configuration settings using command-line options.

Component architecture: It uses a component architecture for extensibility, with a minimalist implementation that is still able to offer features like calculating dependency ordering and providing hooks into the application lifecycle.

Configuration: It allows simple parsing of Tom's Obvious, Minimal Language (TOML) configurations into serde-parsed configuration types that can be dynamically updated at runtime.

Error handling: Abscissa has a generic 'Error' type based on the 'failure' crate and a unified error-handling subsystem.

Logging: It uses the 'log' crate to provide application-level logging.

Secrets management: The optional 'secrets' module contains a 'Secret' type that derives serde's Deserialize, which can be used for representing secret values parsed from configuration files or elsewhere.

Terminal interactions: It supports colored terminal output and is useful for Cargo-like status messages with easy-to-use macros.

Read the official announcement for more details on Abscissa. You can also check out its GitHub repository.

Introducing Ballista, a distributed compute platform based on Kubernetes and Rust
Fastly CTO Tyler McMullen on Lucet and the future of WebAssembly and Rust [Interview]
Rust 1.36.0 releases with a stabilized 'Future' trait, NLL for Rust 2015, and more

Fedora announces the first preview release of Fedora CoreOS as an automatically updating Linux OS for containerized workloads

Vincy Davis
19 Jul 2019
3 min read
Three days ago, Fedora announced the first preview release of the open-source project Fedora CoreOS, a secure and reliable host for computer clusters. It is specifically designed for running containerized workloads, with automatic updates to the latest OS improvements, bug fixes, and security updates. It is secure, minimal, monolithic, and optimized for working with Kubernetes.

The main goal of Fedora CoreOS is to be a reliable container host for running containerized workloads securely and at scale. It integrates Ignition from the Container Linux technology, and rpm-ostree and SELinux hardening from Project Atomic Host.

Fedora CoreOS is expected to eventually become the successor to Container Linux. The Container Linux project will continue to be supported throughout 2019, leaving users with ample time to migrate and provide feedback. Fedora has also assured Container Linux users that continued support will be provided to them without any disruption. Fedora CoreOS will also become the successor to Fedora Atomic Host. The current plan is for Fedora Atomic Host to have at least a version 29 and a six-month lifecycle.

Fedora CoreOS will support the AWS, Azure, DigitalOcean, GCP, OpenStack, Packet, QEMU, VirtualBox, VMware, and bare-metal platforms. The initial release, however, runs only on bare metal, Quick Emulator (QEMU), VMware, and AWS, and only on the 64-bit x86 instruction set (x86_64). It supports provisioning via Ignition spec 3.0.0 and the Fedora CoreOS Config Transpiler, provides automatic updates with Zincati and rpm-ostree, and runs containers with Podman and Moby.

Benjamin Gilbert of Red Hat, the primary sponsor of Fedora CoreOS, announced the preview on the Fedora development mailing list. Per Gilbert, in the coming months more platforms will be added to Fedora CoreOS and new functionality will be explored.
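As a sketch of what Ignition-based provisioning looks like, here is a minimal Ignition spec 3.0.0 config that adds an SSH key for the default `core` user. The key itself is a placeholder, and in practice such JSON is usually generated from a friendlier YAML source by the Config Transpiler rather than written by hand:

```json
{
  "ignition": { "version": "3.0.0" },
  "passwd": {
    "users": [
      {
        "name": "core",
        "sshAuthorizedKeys": ["ssh-ed25519 AAAA...placeholder-key"]
      }
    ]
  }
}
```

Ignition runs once, on first boot, which is what lets the OS itself remain immutable and automatically updatable afterwards.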
He has also notified users that the Fedora CoreOS preview should not be used for production workloads, as it may change before the stable release. Since Fedora CoreOS is freely available, it will embrace a variety of containerized use cases, while Red Hat CoreOS will continue to provide a focused immutable host for OpenShift, released and life-cycled at the same time as that platform.

Users are happy with the first preview of Fedora CoreOS.

https://twitter.com/datamattsson/status/1151963024175050758

A user on Reddit commented, “Wow looks awesome”.

For details on how to create Ignition configs, head over to the Fedora Project docs.

Fedora Workstation 31 to come with Wayland support, improved core features of PipeWire, and more
Fedora 30 releases with GCC 9.0, GNOME 3.32, performance improvements, and much more!
Fedora 30 Beta released with desktop environment options, GNOME 3.32, and much more

NativeScript 6.0 releases with NativeScript AppSync, TabView, Dark theme and much more!

Amrata Joshi
19 Jul 2019
2 min read
Yesterday, the team behind NativeScript announced the release of NativeScript 6.0. This release features faster delivery of patches with the help of NativeScript AppSync, and it ships with a NativeScript Core Theme that works for all NativeScript components. It also comes with an improved TabView that enables common scenarios without custom development, as well as support for AndroidX and Angular 8.

https://twitter.com/ufsa/status/1151755519062958081

Introducing NativeScript AppSync
The team also introduced NativeScript AppSync, a beta service that enables users to deliver a new version of their application instantly. A demo is available here: https://youtu.be/XG-ucFqjG6c

Core Theme v2 and Dark Theme
The NativeScript Core Theme provides common UI infrastructure for building consistent, good-looking user interfaces. The team is also introducing a Dark Theme that comes with the skins of the Light Theme.

Kendo Themes
Users of the Kendo components for web applications can now reuse their Kendo theme in NativeScript. They can also use the Kendo Theme Builder to build a new theme for their NativeScript application.

Plug and play
With this release, the NativeScript Core Theme is completely plug and play. Users can manually set classes on their components and easily install the theme.

TabView
All the components of the TabView are now styleable, and font icons are now supported. Users can have multiple nested TabView components, similar to having tabs and bottom navigation on the same page. These new capabilities are still in beta.

Bundle Workflow
With NativeScript 6.0, the NativeScript CLI supports the Bundle Workflow, a single unified way of building applications. Hot Module Replacement (HMR) is enabled by default; users can disable it by providing the `--no-hmr` flag to the executed command.
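As an illustration of the HMR opt-out, assuming the NativeScript CLI (`tns`) is installed, disabling Hot Module Replacement for a single run might look like this (a usage sketch, not an official example):

```shell
# HMR is on by default in NativeScript 6.0; --no-hmr opts out for this run
tns run android --no-hmr
```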
To know more about this news, check out the official blog post.

NativeScript 5.0 released with code sharing, hot module replacement, and more!
JavaScript mobile frameworks comparison: React Native vs Ionic vs NativeScript
NativeScript 4.1 has been released

Linux Mint 19.2 beta releases with Update Manager, improved menu and much more!

Amrata Joshi
18 Jul 2019
3 min read
This week the team behind Linux Mint announced the release of Linux Mint 19.2 beta, a desktop Linux distribution focused on producing a modern operating system. This release is codenamed Tina. It comes with updated software, refinements, and new features that make the desktop more comfortable to use.

What’s new in Linux Mint 19.2 beta?

Update Manager
The Update Manager now shows how long kernels are supported, and users no longer need to install or remove kernels one by one. Users can queue installations and removals, and install or remove multiple kernels in one go. A new "Remove Kernels" button makes removing obsolete kernels easier. There is also support for kernel flavors: the Update Manager now shows a combobox that lets users switch between flavors.

Improved menu
mintMenu, the main application menu, has received many bug fixes and performance improvements. The search bar position and the tooltips are now configurable, and the applet icon now supports both icon files and themed icons.

Software Manager
A loading screen now shows up when the cache is being refreshed in the Software Manager. The cache has been moved to mint-common and turned into a Python module that can recognize manually installed software, so the Software Manager can share the same cache and also list applications that were installed by other means.

New buttons in the Maintenance section
Two new buttons are available in the "Maintenance" section of the "Software Sources" configuration tool:

Add Missing Keys: scans repositories and PPAs and downloads any key that might be missing.
Remove Duplicate Sources: finds and fixes duplicated definitions in the sources configuration.
Read Also: Ubuntu free Linux Mint Project, LMDE 3 ‘Cindy’ Cinnamon, released

Announcing MATE 1.22
The Mint team also announced that Linux Mint 19.2 will ship with MATE 1.22, which comes with improved stability and bug fixes. MATE is the Linux desktop that started as a fork of GNOME 2 in 2011, following the poor reception of GNOME 3.

What’s new in MATE 1.22?

It comes with support for metacity-3 themes.
This release features better-looking window and desktop switchers.
MATE 1.22 features systemd support in the session manager.
It has support for new compression formats and can pause/resume compression and decompression.

Users seem happy with this news. One commented on the official post, “Hi Mint Team. Great job so far. Looks very smooth – even for a beta. Menu is crazy fast!!!” A few others complained about graphical glitches: “Hi team and thanks for your latest offering, there is a LOT to like about this and I will provide as much useful feedback as I can, I have had an issue with graphical glitches from Linux Mint 19x Cinnamon.”

To know more about this news, check out the official blog post.

Ubuntu free Linux Mint Project, LMDE 3 ‘Cindy’ Cinnamon, released
Is Linux hard to learn?
Ubuntu 19.04 Disco Dingo Beta releases with support for Linux 5.0 and GNOME 3.32

Introducing Ballista, a distributed compute platform based on Kubernetes and Rust

Amrata Joshi
18 Jul 2019
3 min read
Andy Grove, a software engineer, recently introduced Ballista, a distributed compute platform, and explained the project’s journey in a blog post. Roughly eighteen months ago, he started the DataFusion project, an in-memory query engine that uses Apache Arrow as its memory model. The aim was to build a distributed compute platform in Rust that could compete with Apache Spark, which turned out to be difficult.

Grove writes in the blog post, “Unsurprisingly, this turned out to be an overly ambitious goal at the time and I fell short of achieving that. However, some very good things came out of this effort. We now have a Rust implementation of Apache Arrow with a growing community of committers, and DataFusion was donated to the Apache Arrow project as an in-memory query execution engine and is now starting to see some early adoption.”

He then took a break from Arrow and DataFusion for a couple of months to focus on deliverables at work, before starting a new proof-of-concept (PoC) project: his second attempt at building a distributed platform with Rust, this time with the advantage of already having Arrow and DataFusion at his disposal. The new project is called Ballista, a distributed compute platform based on Kubernetes and the Rust implementation of Apache Arrow.

A Ballista cluster currently comprises a number of individual pods within a Kubernetes cluster, and it can be created and destroyed via the Ballista CLI. Ballista applications can be deployed to Kubernetes with the Ballista CLI, and they use Kubernetes service discovery to connect to the cluster. Since there is no distributed query planner yet, Ballista applications must manually build the query plans to be executed on the cluster.

To push this project beyond the limits of a PoC, Grove listed some of the items on the roadmap for v1.0.0:

Implement a distributed query planner.
Support all DataFusion logical plans and expressions.
Support user code as part of distributed query execution.
Support interactive SQL queries against a cluster with gRPC.
Support the Arrow Flight protocol and Java bindings.

This PoC project will help drive the requirements for DataFusion, and it has already led to three DataFusion PRs being merged into the Apache Arrow codebase.

The initiative received mixed reviews. One user commented on Hacker News, “Hang in there mate :) I really don't think you deserve a lot of the crap you've been given in this thread. Someone has to try something new.” Another user commented, “The fact people opposed to your idea/work means it is valuable enough for people to say something against and not ignore it.”

To know more about this news, check out the official announcement.

Fastly CTO Tyler McMullen on Lucet and the future of WebAssembly and Rust [Interview]
Rust 1.36.0 releases with a stabilized ‘Future’ trait, NLL for Rust 2015, and more
Introducing Vector, a high-performance data router, written in Rust

GraphQL API is now generally available

Amrata Joshi
17 Jul 2019
3 min read
Last month, the team at Fauna, provider of the cloud-first database FaunaDB, announced the general availability of its GraphQL API. With GraphQL support, FaunaDB now allows developers to use the API of their choice to manipulate all their data, and it helps developer productivity by enabling fast, easy development of serverless applications. This makes FaunaDB the only serverless backend with support for universal database access.

Matt Biilmann, CEO at Netlify, a Fauna partner, said, “Fauna’s GraphQL support is being introduced at a perfect time as rich, serverless apps are disrupting traditional development models.” Biilmann added, “GraphQL is becoming increasingly important to the entire developer community as they continue to leverage JAMstack and serverless to simplify cloud application development. We applaud Fauna’s work as the first company to bring a serverless GraphQL database to market.”

GraphQL lets developers specify the shape of the data they need without requiring changes to the backend components that provide that data. The GraphQL API in FaunaDB helps teams collaborate smoothly: back-end teams can focus on security and business logic while front-end teams concentrate on presentation and usability.

The global serverless architecture market was valued at $3.46 billion in 2017 and is expected to reach $18.04 billion by 2024, per Zion Research. As GraphQL fuels this growth in serverless development, developers can look for back-end GraphQL support like that found in FaunaDB.

GraphQL defines three general operation types: queries, mutations, and subscriptions; currently, FaunaDB natively supports queries and mutations. FaunaDB's GraphQL API provides developers with uniform access to transactional consistency, quality of service (QoS), user authorization, data access, and temporal storage.
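As an illustration of the query/mutation split, consider a hypothetical user-defined `Todo` type. FaunaDB generates operations from an uploaded GraphQL schema; the exact field and operation names below are assumptions for the sake of the example:

```graphql
# Create a document through an auto-generated mutation
mutation {
  createTodo(data: { title: "Ship the release", completed: false }) {
    _id
    title
  }
}

# Read it back through an auto-generated query
query {
  findTodoByID(id: "<document-id>") {
    title
    completed
  }
}
```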
No limits on data history
FaunaDB is the only database that supports data history without limits: any API in FaunaDB, such as SQL, can return data as of any given time.

Consistency
FaunaDB provides the highest consistency levels for its transactions, applied automatically to all APIs.

Authorization
FaunaDB provides access control at the row level, applicable to all APIs, be it GraphQL or SQL.

Shared data access
It also features shared data access: data written by one API (e.g., GraphQL) can be read and modified by another API, such as FQL.

To know more about the news, check out the press release.

7 reasons to choose GraphQL APIs over REST for building your APIs
Best practices for RESTful web services: Naming conventions and API Versioning [Tutorial]
Implementing routing with React Router and GraphQL [Tutorial]

Facebook released Hermes, an open source JavaScript engine to run React Native apps on Android

Fatema Patrawala
12 Jul 2019
4 min read
Yesterday Facebook released a new JavaScript engine called Hermes under an open-source MIT license. According to Facebook, the new engine will speed up start times for Android apps built with the React Native framework.

https://twitter.com/reactnative/status/1149347916877901824

Facebook software engineer Marc Horowitz unveiled Hermes at the Chain React 2019 conference held yesterday in Portland, Oregon. Hermes is a new tool for developers, aimed primarily at improving app startup performance in the same way Facebook does for its own apps, and at making apps more efficient on low-end smartphones. The advantage of React Native is that developers can target all three mobile platforms with a single code base, but as with any cross-platform framework, there are trade-offs in terms of performance, security, and flexibility.

Hermes is available on GitHub for all developers to use. It also has its own Twitter account and home page. In a demo, Horowitz showed that a React Native app with Hermes was fully loaded in half the time of the same app without Hermes, about two seconds faster. Check out the video below:

Horowitz emphasized that Hermes cuts the APK size (the size of the app file) to half the 41MB of a stock React Native app and removes a quarter of the app's memory usage. In other words, with Hermes developers can get users interacting with an app faster, with fewer obstacles like slow download times and the constraints caused by multiple apps sharing limited memory, especially on lower-end phones. These are exactly the phones Facebook is aiming at with Hermes, compared to the fancy high-end phones that well-paid developers typically use themselves.

"As developers we tend to carry the latest flagship devices. Most users around the world don't," he said. "Commonly used Android devices have less memory and less storage than the newest phones and much less than a desktop. This is especially true outside of the United States.
Mobile flash is also relatively slow, leading to high I/O latency."

It's not every day a new JavaScript engine is born. While there are plenty of engines for browsers, like Google's V8, Mozilla's SpiderMonkey, and Microsoft's Chakra, Horowitz notes that Hermes is not aimed at browsers or at server-side use in the way Node.js is: "We're not trying to compete in the browser space or the server space. Hermes could in theory be for those kinds of use cases, that's never been our goal."

The Register reports that Facebook has no plan to push Hermes beyond React Native to Node.js, or to turn it into the foundation of a Facebook-branded browser, because it is optimized for mobile apps and wouldn't offer advantages over other engines in other usage scenarios.

Hermes aims for efficiency through bytecode precompilation rather than loading JavaScript and then parsing it. It employs ahead-of-time (AOT) compilation during the mobile app build process, which allows for more extensive bytecode optimization. Along similar lines, the Fuchsia Dart compiler for iOS is an AOT compiler. There are other ways to squeeze more performance out of JavaScript; the V8 engine, for example, offers a capability called custom snapshots, though this is a bit more technically demanding than using Hermes.

Hermes also abandons the just-in-time (JIT) compiler used by other JavaScript engines to compile frequently interpreted code into machine code; in the context of React Native, the JIT doesn't do much to ease mobile app workloads. The reason Hermes exists, per Facebook, is to make React Native better. "Hermes allows for more optimization on mobile since developers control the build stack," said a Facebook spokesperson in an email to The Register. "For example, we implemented bytecode precompilation to improve performance and developed more efficient garbage collection to reduce memory usage."
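In practice, ahead-of-time compilation means the JavaScript bundle is turned into Hermes bytecode at build time instead of being parsed on the device at startup. A sketch of the idea with the standalone Hermes compiler follows; the flag names are as documented in the Hermes repository, but treat the exact invocation as an assumption:

```shell
# Compile a JavaScript bundle into Hermes bytecode (.hbc) at build time;
# the app then ships and loads the precompiled bytecode instead of raw JS.
hermes -emit-binary -out index.android.bundle.hbc index.android.bundle
```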
In a discussion on Hacker News, Microsoft developer Andrew Coates claims that internal testing of Hermes and React Native in conjunction with Microsoft Office for Android shows a time to interaction (TTI) of 1.1s with Hermes, compared to 1.4s with V8, and a 21.5MB runtime memory impact, compared to 30MB with V8.

Hermes is mostly compatible with ES6 JavaScript. To keep the engine small, support for some language features is missing, such as `with` statements and local-mode `eval()`. Facebook’s spokesperson also told The Register that they plan to publish benchmark figures in the next week to support the performance claims.

Declarative UI programming faceoff: Apple’s SwiftUI vs Google’s Flutter
OpenID Foundation questions Apple’s Sign In feature, says it has security and privacy risks
Material-UI v4 releases with CSS specificity, Classes boilerplate, migration to Typescript and more

IntelliJ IDEA 2019.2 Beta 2 released with new Services tool window and profiling tools

Bhagyashree R
11 Jul 2019
4 min read
Yesterday, JetBrains announced the release of IntelliJ IDEA 2019.2 Beta 2, which marks the next step towards the stable release. The team has already implemented major features like profiling tools, better shell script support, and a new Services tool window, among others. With this release, the team has given a final polish to existing features, including the Terminal, which now soft-wraps long lines better, solving the previous problem of links breaking when lines wrapped.

Source: IntelliJ IDEA

Shell script support
This release comes with rich editing features for shell scripts, including word and path completion, quick documentation preview, and textual rename. It also integrates with various external tools to provide enhanced shell script support. For instance, the IDE will prompt you to install ShellCheck to detect possible errors in your scripts and will suggest quick fixes for them.

A new Services tool window
IntelliJ IDEA 2019.2 introduces a new Services tool window: a single place to view all connections and run configurations that are configured to be reported to the Services view. The Services view incorporates windows for several tools, such as RunDashboard, Database Console, Docker, and Application Servers. You have the option of viewing all the service types as nodes or tabs. To view a service type on a separate tab, use the Show in New tab action from the toolbar, or simply drag and drop the needed node onto the edge of the Services tool window. You can also create a custom tab to group various services, using the Group Services action from the context menu or from the toolbar.

Source: IntelliJ IDEA

Profiling tools for IntelliJ IDEA Ultimate
You will be able to analyze the performance of your application right from the IDE using the new CPU Profiler integration and Memory Profiler integration on macOS, Linux, and Windows.
It will also come integrated with Java Flight Recorder and the Async profiler, which give insight into how CPU and memory resources are allocated in your application. To run Java Flight Recorder or the Async profiler, just click the icon on the main toolbar or the run icon in the gutter. These tools will only be available in the fully-featured commercial IDE, IntelliJ IDEA Ultimate.

Source: IntelliJ IDEA

Syntax highlighting for over 20 different programming languages
IntelliJ IDEA 2019.2 will provide syntax highlighting for more than 20 different languages. To provide this support, the upcoming version comes integrated with the TextMate text editor and a collection of built-in grammar files for various languages. You can find the full list of supported languages in Preferences / Settings | Editor | TextMate Bundles. If you require syntax highlighting for an additional language, you can download the TextMate bundle for that language and import it into IntelliJ IDEA.

Commit directly from the Local Changes
With this version, developers will be able to commit directly from the Local Changes tab without going through a separate Commit dialog. While working on a commit, you can browse the source code, view the file history, view the diff for the file in the same area as the commit, or use other features of the IDE. In previous versions, all these actions were impossible because the modal Commit dialog blocked all other IDE functionality. Additionally, for projects that use version control systems like Git or Mercurial, pressing the Commit shortcut (Ctrl+K on Windows/Linux, Cmd+K on macOS) makes the IDE select the modified files for the commit; you can then review the selected files and change the file or code chunk.

Source: IntelliJ IDEA

These were some of the features coming in IntelliJ IDEA 2019.2.
You can read the entire release notes and follow the IntelliJ IDEA blog to know more in detail. Developers are excited about the profiling tools and other features bundled with this release:

https://twitter.com/Rahamat87523498/status/1149221123256492032
https://twitter.com/goKarumi/status/1148849477136146432
https://twitter.com/matsumana/status/1140659765518852097

What’s new in IntelliJ IDEA 2018.2
IntelliJ IDEA 2018.3 Early Access Program is now open!
Netbeans, Intellij IDEA and PyCharm come to Haiku OS
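The ShellCheck integration mentioned earlier targets exactly the kind of mistake below: SC2086, an unquoted variable reference that undergoes word splitting. A small self-contained illustration (the filename is arbitrary):

```shell
file="my file.txt"
printf 'hello\n' > "$file"

# Unquoted: $file word-splits into "my" and "file.txt", so cat fails
cat $file 2>/dev/null || echo "unquoted reference failed"

# Quoted: the variable expands to the single filename and prints hello
cat "$file"
```

Linters like ShellCheck flag the unquoted form statically, which is why surfacing them inside the editor is useful.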

RISC-V Foundation officially validates RISC-V base ISA and privileged architecture specifications

Vincy Davis
11 Jul 2019
2 min read
Yesterday, the RISC-V Foundation announced that the RISC-V base Instruction Set Architecture (ISA) and privileged architecture specifications have been ratified. The RISC-V Foundation drives the adoption and implementation of the free and open RISC-V ISA. The RISC-V base architecture acts as the interface between application software and hardware.

Krste Asanović, chairman of the RISC-V Foundation Board of Directors, says, “The RISC-V ecosystem has already demonstrated a large degree of interoperability among various implementations. Now that the base architecture has been ratified, developers can be assured that their software written for RISC-V will run on all similar RISC-V cores forever.”

The RISC-V privileged architecture covers all aspects of RISC-V systems, including privileged instructions and the additional functionality required for running operating systems and attaching external devices. Privilege levels are used to provide protection between different components of the software stack, and each level has a core set of privileged ISA extensions, with optional extensions and variants, including the machine ISA, supervisor ISA, and hypervisor ISA.

“The RISC-V privileged architecture serves as a contract between RISC-V hardware and software such as Linux and FreeBSD. Ratifying these standards is a milestone for RISC-V,” said Andrew Waterman, chair of the RISC-V Privileged Architecture Task Group.

To know more about this announcement in detail, head over to the RISC-V blog.

Debian GNU/Linux port for RISC-V 64-bits: Why it matters and roadmap
AdaCore joins the RISC-V Foundation, adds support for C and Ada compilation
Western Digital RISC-V SweRV Core is now on GitHub