How-To Tutorials


Chaos engineering comes to Kubernetes thanks to Gremlin

Richard Gall
18 Nov 2019
2 min read
Kubernetes causes problems. Just last week Cindy Sridharan wrote on Twitter that while Docker "succeeded... because it was a great developer tool," Kubernetes "decided to be all things tech and not much by way of UX. It was and remains a hostile piece of software to learn, run, operate, maintain." https://twitter.com/copyconstruct/status/1194701905248673792?s=20

That's just a drop in the ocean - you don't have to look hard to find more hot takes, jokes, and memes about how complicated working with Kubernetes can feel. Despite all this, it's certainly here to stay. That makes chaos engineering platform Gremlin's announcement that the platform will offer native support for Kubernetes particularly welcome.

Citing container orchestration research done by Datadog in the press release, which indicates the rapid rate of Kubernetes adoption, Gremlin is hoping that it can provide some additional support for users who might be concerned about the platform's complexity.

From last year: Gremlin makes chaos engineering with Docker easier with new container discovery feature

Gremlin CTO Matt Fornaciari said: "our goal is to provide SRE and DevOps teams that are building and deploying modern applications with the tools and processes necessary to understand how their systems handle failure, before that failure has the chance to impact customers and business." The new feature is designed to help engineers do exactly that by allowing them "to automate the process of identifying Kubernetes primitives such as nodes and pods," and to select and attack traffic from different services within Kubernetes.
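To make that concrete, here is a hand-rolled sketch of the kind of fault injection that platforms like Gremlin automate and wrap in safeguards. This is not Gremlin's API - it just uses the official Kubernetes Python client, and the target namespace is hypothetical.

```python
# A naive chaos experiment: delete a random pod in a namespace, then
# watch how the system recovers. Not Gremlin's API - just the official
# Kubernetes Python client (pip install kubernetes); the namespace is
# hypothetical.
import random

from kubernetes import client, config

config.load_kube_config()  # authenticate with your local kubeconfig
v1 = client.CoreV1Api()

NAMESPACE = "staging"  # hypothetical namespace to experiment on

pods = v1.list_namespaced_pod(NAMESPACE).items
if pods:
    victim = random.choice(pods)
    print(f"Deleting pod {victim.metadata.name} to simulate failure")
    v1.delete_namespaced_pod(victim.metadata.name, NAMESPACE)
```

Doing this by hand also shows why managed tooling matters: a real chaos platform scopes the blast radius, schedules attacks, and halts the experiment if things go wrong.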
The other important element to all this is that Gremlin wants to make things as straightforward as possible for engineering teams. With a neat and easy-to-use UI, it would seem that, to return to Sridharan's words, the team is eager to make sure their product is "a great developer tool."

The tool has already been tried and tested in the wild. Simon Govier, Expedia's Director of Program Management, described how performing chaos experiments on Kubernetes with Gremlin "significantly reduces the amount of time it takes to do fault injection and increases our systems' resilience to failure."

Learn more on the Gremlin website.


Why geospatial analysis and GIS matters more than ever today

Richard Gall
18 Nov 2019
7 min read
Due to the hype around big data and artificial intelligence, it can be easy to miss some of the powerful but specific ways data can be truly impactful. One of the most important areas of modern data analysis that rarely gets its due is geospatial analysis. At a time when both the natural and human worlds are going through a period of seismic change, the ability to throw a spotlight on issues of climate and population change is as transformative as the smartest chatbot (indeed, probably much more transformative).

The foundation of geospatial analysis is GIS. GIS, in case you're new to the field, is an acronym for Geographic Information System. GIS applications and tools allow you to store, manipulate, analyze, and visualize data that corresponds to different aspects of the existing environment. Central to this is topographical information, but it can also include many other aspects, from contours and slopes to the built environment, land types, and bodies of water. In the context of climate and human geography it's easy to see how this kind of data can help us see the bigger picture - quite literally - behind what's happening in our region, across our countries, and indeed, across the whole world.

The history of geospatial analysis is a testament to its power. In 1854 physician John Snow identified the source of a cholera outbreak in London by marking out the homes of victims on a map. The cluster of victims that Snow's map revealed led him to an infected water supply.

Read next: Neo4j introduces Aura, a new cloud service to supply a flexible, reliable and developer-friendly graph database

How GIS and geospatial analysis is being used today

While this example is, of course, incredibly low-tech, it highlights exactly why geospatial analysis and GIS tools can be so valuable. To bring us up to date, there are many more examples of how geospatial analysis is making a real impact on social and environmental issues. This article on Forbes, for example, details some of the ways in which GIS projects are helping to uncover information that offers unique insights into the history of racism and its continuing reality today. The list includes a map of historical lynchings that occurred between 1877 and 1950, and a map by the Urban Institute that shows the reality of racial segregation in U.S. schools in the 21st century. https://twitter.com/urbaninstitute/status/504668921962577921

That's just a small snapshot - there is a huge range of incredible GIS projects having a massive impact not only on how we understand issues, but also on policy. That's analytics enacting real, demonstrable change. Here are a few of the different areas in which GIS is being used:

How GIS can be used in agriculture

GIS can be used to tackle crop diseases by identifying issues across a large area of land. It's possible to gain a deeper insight into what can drive improvements to crop yields by looking at the geographic and environmental factors that influence successful growth.

How GIS can be used in retail

GIS can provide an insight into the relationship between consumer behavior and factors such as weather and congestion. It can also be used to better understand how consumers interact with products in shops. This can influence things like store design and product placement.

How GIS can be used in meteorology and climate science

Without GIS, it would be impossible to properly understand and visualise rainfall around the world. GIS can also be used to make predictions about the weather. For example, identifying anomalies in patterns and trends could indicate extreme weather events.

How GIS can be used in medicine and health

As we saw in the example above, by identifying clusters of disease, it becomes much easier to determine the causes of certain illnesses. GIS can also help us better understand the relationship between illness and environment - like pollution and asthma.

How GIS can be used for humanitarian purposes

Geospatial tools can help humanitarian teams to understand patterns of violence in given areas. This can help them to better manage and distribute resources and support to where they're needed (Map Kibera is a great example of how this can be done). GIS tools are also good at helping to bridge the gap between local populations and humanitarian workers in times of crisis. For example, during the Haiti earthquake non-profit tech company Ushahidi's product helped to collate and coordinate reports from across the island. This made it possible to align what might otherwise have been a mess of data and information.

There are many, many more examples of GIS being used for both commercial and non-profit purposes. If you want an in-depth look at a huge range of examples, it's well worth checking out this article, which features 1000 GIS projects.

Although geospatial analysis can be used across many different domains, all the examples above have a trend running through them: they all help us to understand the impact of space and geography. From social mobility and academic opportunity to soil erosion, GIS and other geospatial tools are brilliant because they help us to identify relationships that we might otherwise be unable to see.

GIS and geospatial analysis project ideas

This is an important point if you're not sure where to start with a new GIS project. Forget the data (to begin with at least) and just think about what sort of questions you'd like to answer. The list is potentially endless, but here are some questions I thought of just off the top of my head:

Are there certain parts of your region more prone to flooding?
Why are certain parts of your town congested and not others?
Do economically marginalised people have to travel further to receive healthcare?
Does one part of your region receive more rainfall/snowfall than other parts?
Are there more new buildings in one area than another?

Getting this right is integral to any good analysis project. Ultimately it's what makes the whole thing worthwhile.

Read next: PostGIS 3.0.0 releases with raster support as a separate extension

Where to find data for a GIS project

Once you've decided on something you want to find out, the next step is to collect your data. This can be tricky, but there is nevertheless a massive range of free data sources you can use for your project. This web page has a comprehensive collection of datasets; while it might not have exactly what you're looking for, it's nevertheless a good place to begin if you simply want to try something out.
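Once you have a question and a dataset, a first exploratory pass can be very short. Here is a hedged, John Snow-style sketch; the file names are invented for illustration, and it assumes the geopandas and matplotlib libraries are installed.

```python
# Plot disease cases against water pumps to look for spatial clusters.
# The GeoJSON file names are hypothetical.
import geopandas as gpd
import matplotlib.pyplot as plt

cases = gpd.read_file("cholera_cases.geojson")  # one point per recorded case
pumps = gpd.read_file("water_pumps.geojson")    # one point per water pump

# Draw the cases first, then overlay the pumps on the same axes.
ax = cases.plot(color="grey", markersize=5, figsize=(8, 8))
pumps.plot(ax=ax, color="red", marker="^", label="water pumps")

ax.set_title("Cholera cases and water pumps")
ax.legend()
plt.show()
```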
Conclusion: Geospatial analysis is one of the most exciting and potentially transformative fields in analytics

GIS and geospatial analysis are quite literally rooted in the real world. In the maps and visualizations we create, we're able to offer unique perspectives on history or provide practical guidance on how we should act and what we need to do. This is significant: all too often technology can feel like it's divorced from reality, as if it were folded into its own world with no connection to real people. So, be ambitious, and be bold with your next GIS project: who knows what impact it could have.


There's more to learning programming than just writing code

Richard Gall
15 Nov 2019
8 min read
Everyone should learn to code, right? If everyone learned programming, not only would people have better jobs, the economy would be growing, and ultimately we'd all have far superior lives to the ones we lead now. Except - clearly - that's just not true. Yes, perhaps that position is a bit of a caricature, but it's one that isn't that uncommon. Lawmakers talk about the importance of making programming and coding part of the curriculum, and are keen to make loud and enthusiastic noises about investing in STEM subjects. We need more engineers to power the digital economy, the thinking goes.

While introducing children to code certainly isn't a bad thing, this way of viewing the world is pretty damaging - not least to those already in engineering roles and the organizations that depend on them. This is because it reduces the activity of writing code to something simple. It turns programming, a complex and ultimately deeply human activity, into something machine-like. It almost suggests it's just a question of typing letters and numbers into a code editor and then watching the whole thing run. Programming might involve working with machines, but in truth it's anything but machine-like.

For business leaders, failing to understand what programming actually involves can lead to a really poor engineering culture. A reductive view of the work that software engineers do means increased pressure, more burnout, and lower quality software being delivered. In turn, that has a negative impact on the bottom line. It might not be immediately apparent, but poor software means code rewrites, poor user experiences, and high turnover of personnel. That costs money, because organizations end up spending valuable time and energy trying to fix the mistakes of the past.

We need to keep an open mind about what it means to "learn programming"

However, with a more open-minded perspective on what it actually means to be a programmer, and what 'learning programming' actually means, you can build a much more productive engineering culture. This involves not only respecting the learning process, but also recognising that learning isn't just about taking a course or doing a live coding exercise. It involves a much more diverse range of activities. Let's look at what some of them are.

Evaluating software

One of the most important parts of a software developer's work is evaluating software. This can happen in various ways. Most obviously, technology leaders (CTOs, Principal Architects, Development Leads) have to evaluate different tools and platforms before they implement a project. Questions here will revolve primarily around cost, but cost certainly won't be the leadership team's sole concern. Other issues like integration, product capabilities, even the learning curve and level of complexity will need to be considered (will we need to hire specialist engineers or can our existing team pick it up quickly?). Perhaps that all sounds obvious, but too often we forget that this is work that needs to be done. To make these sorts of assessments - which are often business critical - individuals need a high degree of knowledge. Without it they can't be confident that they're making the right decision for the business. In this sense, then, learning about technologies is just as important as the process of learning how to use technologies. Some might say it's even more important.
It's not only senior developers and tech leaders that evaluate software

Evaluating software is by no means a task limited to those in senior positions. Developers and engineers who spend the majority of their time shipping code still need to learn about technologies too. They might not be responsible for architecting a new software system or purchasing PaaS products, but they will have to make personal decisions about what tools they use to solve specific problems. Sometimes this is about the tools they use to boost their productivity and better manage their development workflow, but it isn't limited to that. In broad terms it's about having an open mind about the range of approaches that can be taken to new challenges. This means that all technology professionals need to learn about technologies - how they work, how they compare to one another, and even what the trade-offs between them are. This shouldn't be treated as an optional extra, but as a fundamental part of the learning process.

Read next: Developers are today's technology decision makers

Programming techniques and design principles

When talking about learning it's easy to fall into a trap where we privilege practice over theory. Theory, certain lines of thinking go, is self-indulgent, unnecessary, and time-consuming. What's really important is that people simply start getting their hands dirty and learn by doing. While it's true that the practical dimension of learning is vital - in technology or any other field - we overlook theory at our peril. In reality, theory and practice should go together: practice should be a way of illuminating the theory, and theory should be a way of explaining why something works the way it does, or why you should do something in a certain way. Think of it this way: if everyone only learned through practice, we'd all be incapable of applying our skills and knowledge to new problems and challenges. We'd be fixed in our mindset, more like machines than creative human beings. For developers and software engineers this is particularly true. By understanding the principles behind how something works, it becomes much easier to apply solutions to new contexts or even reconfigure them in ways that are appropriate and effective.

Improving software with design-led principles

Programming techniques and philosophies, like functional or object-oriented programming, can help developers and engineers to write code in a specific way, helping them to unlock greater performance and efficiency (both personally and from a technical perspective). Similarly, design patterns provide a way of thinking about your code in a predetermined way in relation to various commonly occurring problems. It's true that this still requires developers to get close to code. But it is actually a level of abstraction above the practice of writing code, one that allows developers to think critically about what they do. So, while a good way to learn these sorts of principles is to see what they look like in practice, it's still essential for developers to have a robust conceptual understanding of what they mean in practice.

Understanding users and business needs

Software doesn't exist in a vacuum. On one side there's the business, on the other there's a user. It sounds obvious, but it's essential that technology professionals are sensitive to these two contextual elements. Business needs and user needs are what ultimately make their work meaningful.
In practice, this doesn't mean people working in technology all need to go and take an MBA. But they do need a clear conceptual understanding of how software development and software systems should align with the needs of both internal stakeholders (i.e. the business) and users. This isn't always easy to learn, and there's no manual for how it should be done. However, it connects with the two points mentioned above: the software we decide to use, and the way we decide to use it, will always be informed by the needs of both the business and users. What this means in practice, then, is that learning about software needs to be informed by the wider context of what that software is for, and what a business is trying to achieve. Some technology professionals enter the industry possessing this kind of awareness and sensitivity. Many others, however, do not, and for these people it's essential that they have the space to understand how the various facets of the work they do are connected to real-life consequences. Writing code doesn't help you to do that. Taking a step back and understanding the context in which that code is being written can and will.

Read next: 6 reasons why employers should pay for their developers' training and learning resources

Conclusion: Great programming requires a combination of theoretical knowledge and practical talent

The opposition between theory and practice is false. It doesn't help anyone. A culture of 'getting stuff done' and shipping code regardless is not only bad for individual developers, it can also be damaging at an organizational level. Without careful consideration of what you're trying to achieve, how software can help you to do it, and what it takes to execute it effectively, organizations become prone to errors and mistakes. This leads to wasted time and, more importantly, wasted money. While Facebook's mantra of 'move fast and break things' might sound like the defining phrase of the modern tech industry, good developers need both space and resources to think, plan, and conceptualize. This doesn't mean we all need to go slow. Instead, it means we need to empower engineers to do the right thing, not the quick thing.

Give your team access to a diverse range of resources to learn everything they need to build better software. Start a Packt for Teams subscription today.


GitHub Universe 2019: GitHub for mobile, GitHub Archive Program and more announced amid protests against GitHub’s ICE contract

Vincy Davis
14 Nov 2019
4 min read
Yesterday, GitHub commenced its popular product conference GitHub Universe 2019 in San Francisco. The two-day annual conference celebrates GitHub's 40+ million developers and their contributions to the open source community. Day 1 of the conference had many interesting announcements like GitHub for mobile, the GitHub Archive Program, and more. Let's look at some of the major announcements at the GitHub Universe 2019 conference.

GitHub for mobile iOS (beta)

GitHub for mobile is a beta app that aims to give users the flexibility to work and interact with their team anywhere they want. It will enable users to share feedback on a design discussion or review code in a non-complex development environment. The native app will adapt to any screen size and will also work in dark mode based on the device preference. It is currently available only on iOS; the GitHub team has said an Android version will follow soon.

https://twitter.com/italolelis/status/1194929030518255616
https://twitter.com/YashSharma___/status/1194899905552105472

GitHub Archive Program

"Our world is powered by open source software. It's a hidden cornerstone of our civilization and the shared heritage of all humanity. The mission of the GitHub Archive Program is to preserve it for generations to come," states the official GitHub blog. GitHub has partnered with the Stanford Libraries, the Long Now Foundation, the Internet Archive, the Software Heritage Foundation, Piql, Microsoft Research, and the Bodleian Library to preserve all the available open source code in the world. It will safeguard the data by storing multiple copies across various data formats and locations. This includes a "very-long-term archive" called the GitHub Arctic Code Vault, which is designed to last at least 1,000 years.

https://twitter.com/vithalreddy/status/1194846571835183104
https://twitter.com/sonicbw/status/1194680722856042499

Read More: GitHub Satellite 2019 focuses on community, security, and enterprise

Automating workflows from code to cloud

General availability of GitHub Actions

Last year, at the GitHub Universe conference, GitHub Actions was announced in beta. This year, GitHub has made it generally available to all users. In the past year, GitHub Actions has received contributions from the developers of AWS, Google, and others. Actions has now developed into a new standard for building and sharing automation for software development, including a CI/CD solution and native package management. GitHub has also announced the free use of self-hosted runners and artifact caching.

https://twitter.com/qmixi/status/1194379789483704320
https://twitter.com/inversemetric/status/1194668430290345984

General availability of GitHub Packages

In May this year, GitHub announced the beta version of the GitHub Package Registry as its new package management service. Later in September, after gathering community feedback, GitHub announced that the service has proxy support for the primary npm registry. Since its launch, GitHub Packages has received over 30,000 unique packages that served the needs of over 10,000 organizations. Now, at GitHub Universe 2019, the GitHub team has announced the general availability of GitHub Packages and added support for using the GitHub Actions token.

https://twitter.com/Chris_L_Ayers/status/1194693253532020736

These were some of the major announcements from day 1 of the GitHub Universe 2019 conference; head over to GitHub's blog for more details of the event.
Tech workers protest against GitHub's ICE contract

Major product announcements aside, one thing that garnered a lot of attention at the GitHub Universe conference was the protest conducted by GitHub workers along with the Tech Workers Coalition to oppose GitHub's $200,000 contract with Immigration and Customs Enforcement (ICE). Many high-profile speakers dropped out of the GitHub Universe 2019 conference, and at least five GitHub employees have resigned from GitHub over its support for ICE.

https://twitter.com/lily_dart/status/1194216293668401152

Read More: Largest 'women in tech' conference, Grace Hopper Celebration, renounces Palantir as a sponsor due to concerns over its work with the ICE

Yesterday at the event, the protesting tech workers brought a giant cage to symbolize how ICE uses cages to detain migrant children.

https://twitter.com/githubbers/status/1194662876587233280

Tech workers around the world have extended their support to the protest against GitHub.

https://twitter.com/ConMijente/status/1194665524191318016
https://twitter.com/CoralineAda/status/1194695061717450752
https://twitter.com/maybekatz/status/1194683980877975552

GitHub along with Weights & Biases introduced CodeSearchNet challenge evaluation and CodeSearchNet Corpus
GitHub acquires Semmle to secure open-source supply chain; attains CVE Numbering Authority status
GitHub Package Registry gets proxy support for the npm registry
GitHub updates to Rails 6.0 with an incremental approach
GitHub now supports two-factor authentication with security keys using the WebAuthn API


Expanding Web Assembly beyond the browser with Bytecode Alliance, a Mozilla, Fastly, Intel and Red Hat partnership

Sugandha Lahoti
13 Nov 2019
4 min read
Mozilla has partnered with Fastly, Intel, and Red Hat to form the Bytecode Alliance, which aims to build a secure-by-default future for WebAssembly and to take it beyond the browser. "WebAssembly is changing the web, but we believe WebAssembly can play an even bigger role in the software ecosystem as it continues to expand beyond browsers," explained Luke Wagner, Distinguished Engineer at Mozilla and co-creator of WebAssembly. "This is a unique moment in time at the dawn of a new technology, where we have the opportunity to fix what's broken and build new, secure-by-default foundations for native development that are portable and scalable. But we need to take deliberate, cross-industry action to ensure this happens in the right way."

What is the Bytecode Alliance?

This year we saw many initiatives by Mozilla pushing WebAssembly beyond the web. The Bytecode Alliance is another step in that direction, with the aim of making it safe for developers to use untrusted code no matter where they're running it - whether on the cloud, desktop, or IoT devices. This common, reusable set of foundations can be used on its own or embedded in other libraries and applications.

As developers run untrusted code in many new places, from the cloud to IoT devices, they open up security and portability challenges. With WebAssembly and emerging related standards such as WASI and WebAssembly Interface Types, the Alliance plans to address the challenges that occur when you try to run the same code across different systems (server, edge, browser, mobile, and more).

The Bytecode Alliance will combine Wasmtime, a runtime for WebAssembly and WASI; Fastly's Lucet, a WebAssembly compiler and runtime; Intel's WebAssembly Micro Runtime; and the code generator Cranelift. It will also include cargo-wasi, a lightweight Cargo subcommand that compiles Rust code to target WebAssembly, as well as wat and wasmparser. This set of projects is likely to expand as the Alliance grows.

Once the Alliance is formalized, an open governance model consistent with open-source best practices and strong community norms will be established. Individual developers can also participate in any open-source project in the Bytecode Alliance. Each project is governed by its own committer group, and developers who are very active in shaping a project are eligible for nomination to it.

Introducing WebAssembly nanoprocesses

Apart from these existing projects, the Alliance is also focusing on a new architectural pattern emerging in WebAssembly. This pattern, called the "WebAssembly nanoprocess," gives you most of the benefits of a process (the tool that operating systems use to protect programs from each other), but with much less overhead and much faster communication between (nano)processes. This, Mozilla thinks, can make massively modular code (like the packages you see on npm, crates.io, and PyPI) secure by default, meaning developers won't need to worry as much about vulnerabilities in dependencies, or attackers sneaking malicious code into their codebases.

A nanoprocess can also provide memory isolation that fits anywhere: it can handle requests for tens of thousands of customers on the same machine, and it can provide software fault isolation for individual libraries in native applications. The blog mentions, "With wasm, we can replace microservices with nanoprocesses and get the same security and language independence benefits. It gives us the composability of microservices without the weight."
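To give a flavour of the developer workflow these projects enable, here is a minimal sketch of running ordinary Rust outside the browser with cargo-wasi, one of the Alliance projects named above. The setup commands in the comments are assumptions about a typical installation rather than official instructions.

```rust
// A minimal WebAssembly/WASI program. Assumed setup (not official docs):
//     cargo install cargo-wasi   # adds the `cargo wasi` subcommand
//     cargo wasi run             # compiles to wasm32-wasi and runs it
//                                # in a WASI runtime such as Wasmtime
//
// The program needs nothing WebAssembly-specific; WASI supplies the
// system interface (stdout, files, clocks) at runtime.
fn main() {
    println!("Hello from outside the browser!");
}
```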
The alliance was well received by developers, many of whom posted their reactions on Twitter.

https://twitter.com/tschneidereit/status/1194382954858000384
https://twitter.com/curioman2/status/1194475559012556800
https://twitter.com/fabianfranz/status/1194408360608698370

Learn more about this alliance on Mozilla's blog.

Introducing Woz, a Progressive WebAssembly Application (PWA + Web Assembly) generator written entirely in Rust
Wasmer's first Postgres extension to run WebAssembly is here!
Mozilla proposes WebAssembly Interface Types to enable language interoperability
Wasmer introduces WebAssembly Interfaces for validating the imports and exports of a Wasm module


Rust 1.39 releases with stable version of async-await syntax, better ergonomics for match guards, attributes on function parameters, and more

Vincy Davis
08 Nov 2019
4 min read
Less than two months after announcing Rust 1.38, the Rust team announced the release of Rust 1.39 yesterday. The new release brings the stable version of the async-await syntax, which allows users to define async functions and blocks, and to .await them (a minimal sketch appears towards the end of this piece). The other improvements in Rust 1.39 include shared references to by-move bindings in match guards and attributes on function parameters.

The stable version of async-await syntax

A stable async function (written async fn instead of fn) returns a Future when called. A Future is a suspended computation that is driven to completion "by .awaiting it." Along with async fn, the async { ... } and async move { ... } blocks can also be used to define async literals. According to Nicholas D. Matsakis, a member of the release team, the first stable support for async-await marks the start of a "Minimum Viable Product (MVP)," as the Rust team will now try to improve the syntax by polishing and extending it for future use cases. "With this stabilization, we hope to give important crates, libraries, and the ecosystem time to prepare for async/.await, which we'll tell you more about in the future," states the official Rust blog.

Some of the major developments in the async ecosystem

The tokio runtime will be releasing a number of scheduler improvements with support for the async-await syntax this month.
The async-std runtime library will publish its first stable release in a few days.
async-await support has already started to appear in higher-level web frameworks and other applications, such as the futures_intrusive crate.

Other improvements in Rust 1.39

Better ergonomics for match guards

In earlier versions, Rust disallowed taking shared references to by-move bindings in the if guards of match expressions. Starting from Rust 1.39, the compiler allows bindings in the following two ways:

by-reference: either immutably or mutably, via ref my_var or ref mut my_var respectively.
by-value: either by-copy, if the bound variable's type implements Copy, or otherwise by-move.

The Rust team hopes this feature will give developers a smoother and more consistent experience with match expressions.

Attributes on function parameters

Unlike previous versions, Rust 1.39 enables three types of attributes on the parameters of functions, closures, and function pointers:

Conditional compilation: cfg and cfg_attr
Lint control: allow, warn, deny, and forbid
Helper attributes used by procedural macro attributes

Many users are happy with the Rust 1.39 features and are especially excited about the stable version of the async-await syntax. A user on Hacker News comments, "Async/await lets you write non-blocking, single-threaded but highly interweaved firmware/apps in allocation-free, single-threaded environments (bare-metal programming without an OS). The abstractions around stack snapshots allow seamless coroutines and I believe will make rust pretty much the easiest low-level platform to develop for." Another comment read, "This is big! Turns out that syntactic support for asynchronous programming in Rust isn't just syntactic: it enables the compiler to reason about the lifetimes in asynchronous code in a way that wasn't possible to implement in libraries. The end result of having async/await syntax is that async code reads just like normal Rust, which definitely wasn't the case before. This is a huge improvement in usability."
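To see what the stabilized syntax looks like on the page, here is a minimal sketch. It is not taken from the release notes, and it assumes the futures crate (futures = "0.3") for an executor, since the standard library stabilizes the syntax but does not ship a runtime.

```rust
use futures::executor::block_on;

async fn fetch_answer() -> u32 {
    // Real code would await I/O here; this future resolves immediately.
    42
}

async fn run() {
    // `.await` suspends `run` until `fetch_answer`'s future completes.
    let answer = fetch_answer().await;
    println!("the answer is {}", answer);

    // Also new in 1.39: a match guard may take a shared reference to a
    // by-move binding (`v.len()` borrows `v`, which is bound by move).
    let data = vec![1, 2, 3];
    match data {
        v if v.len() > 2 => println!("long vector: {:?}", v),
        v => println!("short vector: {:?}", v),
    }
}

fn main() {
    block_on(run());
}
```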
A few developers have already upgraded to Rust 1.39 and shared their feedback on Twitter.

https://twitter.com/snoyberg/status/1192496806317481985

Check out the official announcement for more details. You can also read the blog on async-await for more information.

AWS will be sponsoring the Rust Project
A Cargo vulnerability in Rust 1.25 and prior makes it ignore the package key and download a wrong dependency
Fastly announces the next-gen edge computing services available in private beta
Neo4j introduces Aura, a new cloud service to supply a flexible, reliable and developer-friendly graph database
Yubico reveals Biometric YubiKey at Microsoft Ignite

Microsoft releases TypeScript 3.7 with much-awaited features like Optional Chaining, Assertion functions and more

Savia Lobo
06 Nov 2019
3 min read
Yesterday, Microsoft announced the release of TypeScript 3.7 with new tooling features, optional chaining, nullish coalescing, assertion functions, and much more. This release also includes breaking changes: there are a few changes in the DOM, where the types in lib.dom.d.ts have been updated, and the typeArguments property has been removed from the TypeReference interface. Also, TypeScript 3.7 emits get/set accessors in .d.ts files, which can cause breaking changes for consumers on older versions of TypeScript like 3.5 and prior. TypeScript 3.6 users will not be impacted, as that version was future-proofed for this feature. Let us have a look at the other new features in TypeScript 3.7.

What's new in TypeScript 3.7?

Optional Chaining

TypeScript 3.7 implements Optional Chaining, one of the most highly-demanded ECMAScript features, first filed five years ago. Optional chaining lets you write code that immediately stops running an expression when it runs into a null or undefined. The star of the show in optional chaining is the new ?. operator for optional property accesses. Optional chaining also includes two other operations: optional element access, which acts similarly to optional property access but allows access to non-identifier properties (e.g. arbitrary strings, numbers, and symbols), and optional call, which allows expressions to be called conditionally if they're not null or undefined.

Assertion Functions

Assertion functions are a specific set of functions that throw an error if something unexpected happens. Assertions in JavaScript are often used to guard against improper types being passed in. Unfortunately, in TypeScript these checks could never be properly encoded. For loosely-typed code this meant TypeScript was checking less, and for slightly conservative code it often forced users to use type assertions. Another alternative was to rewrite the code so that the language could analyze it, but this was not convenient. To solve this, TypeScript 3.7 introduces a new concept called "assertion signatures," which models these assertion functions. The first type of assertion signature ensures that whatever condition is being checked must be true for the remainder of the containing scope. The other type doesn't check a condition, but instead tells TypeScript that a specific variable or property has a different type.

Build-Free Editing with Project References

In TypeScript 3.7, when opening a project with dependencies, TypeScript will automatically use the source .ts/.tsx files instead. This means projects using project references will now see an improved editing experience where semantic operations are up-to-date.

Website and Playground Updates

The TypeScript playground now includes new features like quick fixes to fix errors, dark/high-contrast mode, and automatic type acquisition so you can import other packages. Each feature is explained through interactive code snippets under the "what's new" menu.
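To make the headline features concrete, here is a small illustrative snippet. The interface and data are invented, but the operators and the assertion-signature syntax follow the release notes.

```typescript
interface User {
  name: string;
  address?: { city?: string };
}

// Optional chaining stops evaluating as soon as it hits null/undefined;
// nullish coalescing (??) supplies a default only in that case.
function cityOf(user?: User): string {
  return user?.address?.city ?? "unknown";
}

// An assertion function: after it returns, the checker narrows the type.
function assertIsDefined<T>(value: T): asserts value is NonNullable<T> {
  if (value === null || value === undefined) {
    throw new Error("Unexpected null or undefined");
  }
}

const maybeUser: User | undefined = { name: "Ada", address: {} };
console.log(cityOf(maybeUser)); // "unknown"

assertIsDefined(maybeUser);
// From here on, maybeUser is typed as User rather than User | undefined.
console.log(maybeUser.name);
```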
Many users and developers are excited to try out TypeScript 3.7.

https://twitter.com/kmsaldana1/status/1191768934648729600
https://twitter.com/mgechev/status/1191769805952438272

To know more about the other new features in TypeScript 3.7, read the official release notes.

Announcing Feathers 4, a framework for real-time apps and REST APIs with JavaScript or TypeScript
Microsoft introduces Static TypeScript, as an alternative to embedded interpreters, for programming MCU-based devices
TypeScript 3.6 releases with stricter generators, new functions in TypeScript playground, better Unicode support for identifiers, and more


MongoDB is partnering with Alibaba

Richard Gall
30 Oct 2019
3 min read
Tensions between the U.S. and China have been frosty at best where trade is concerned. But MongoDB, based partly in Palo Alto in the heart of Silicon Valley and headquartered in New York, today announced that it is partnering with Chinese conglomerate Alibaba to bring Alibaba cloud users MongoDB-as-a-Service. While it's probably not going to bring Trump's ongoing trade war to an end, it could help MongoDB to position itself as the leading NoSQL database on the planet.

What does the MongoDB and Alibaba partnership actually mean?

In practical terms, it means that Alibaba's cloud customers will now have access to a fully supported version of MongoDB in Alibaba's data centers. That means complete access to all existing features of MongoDB, and Alibaba's support in escalating issues that may arise while they're using MongoDB. With MongoDB 4.2.0 released back in August, Alibaba users will also be able to take advantage of some of the database's new features, such as distributed transactions and client-side field-level encryption.

But that's just for Alibaba users - from MongoDB's perspective, this partnership cements its already impressive position in the Chinese market. "Over the past four years the most downloads of MongoDB have been from China," said Dev Ittycheria, MongoDB's President and CEO. For Alibaba, meanwhile, the partnership will likely only strengthen its position within the cloud market. Feifei Li, Vice President of the Alibaba Group, spoke of supporting "a wide range of customer needs from open-source developers to enterprise IT teams of all sizes." Li didn't say anything much more revealing than that, choosing instead to focus on Alibaba's pitch to users: "Combined with Alibaba Cloud's native data analytics capabilities, working with partners like MongoDB will empower our customers to generate more business insights from their daily operations."

A new direction for MongoDB?

The partnership is particularly interesting in the context of MongoDB's licensing struggles over the last 12 months. Initially putting forward its Server Side Public License, the project later withdrew its application to the Open Source Initiative over what CTO Eliot Horowitz described as a lack of "community consensus." The SSPL was intended to protect MongoDB - and other projects like it - from "large cloud vendors... [that] capture all of the value but contribute nothing back to the community." It would appear that MongoDB is trying a new approach to this problem: instead of trying to outflank the vendors, it's joining them.

Explore Packt's newest MongoDB eBooks and videos.


React Conf 2019: Concurrent Mode preview out, CSS-in-JS, React docs in 40 languages, and more

Bhagyashree R
29 Oct 2019
9 min read
React Conf 2019 wrapped up last week. It kicked off with a keynote by Tom Occhino and Yuzhi Zheng from the React team, who talked about Concurrent Mode and Suspense. That was followed by Frank Yan, also from the React team, who explained how they are building the "new Facebook" with React and Relay. One of the major highlights of his talk was the CSS-in-JS library that will be open-sourced once ready. Sophie Alpert, former manager of the React team, gave a talk on building a custom React renderer; to demonstrate, she implemented a small version of ReactDOM in just 30 minutes. There were many other lightning talks and presentations on translated React docs, building inclusive apps by improving their accessibility, and much more. React Conf 2019 was a two-day event that took place from Oct 24-25 at Lake Las Vegas, Nevada. The conference brought together front-end and full-stack developers to "share knowledge, skills, to network, and just to have fun."

React's long-term goal: "Making it easier to build great user experiences"

Tom Occhino, Engineering Director of the React group, took to the stage to talk about the goals for React and the community. He said that React's long-term goal is to make it easier for developers to build great user experiences. "Easier to build" means improving the developer experience. The three factors that contribute to a great developer experience are a low barrier to entry, developer productivity, and the ability to scale. React is constantly working towards improving the developer experience by introducing new features. Two such features are Concurrent Mode and Suspense.

Concurrent Mode

Concurrent Mode is a set of features that make React apps more responsive by rendering component trees without blocking the main thread. It gives React the ability to interrupt big blocks of low-priority work in order to focus on higher-priority work like responding to user input. This enables React to work on several state updates concurrently and to remove jarring, overly frequent DOM updates. The team also released the first early community preview of Concurrent Mode last week.

https://twitter.com/reactjs/status/1187411505001746432

Suspense

Suspense was introduced as an improvement to the developer experience when dealing with asynchronous data fetching within React apps. It suspends your component's rendering and shows a fallback until some condition is met (a minimal sketch of the already-stable part of the API follows at the end of this recap). Occhino describes Suspense as a "React system for orchestrating asynchronous loading of code, data, and resources." He adds, "Suspense lets the component wait for something before they render. This helps consolidate nested dependencies and nested spinners and things behind the single simple loading experience."

Towards the end of his keynote, Occhino also touched upon how the team plans to make the React community more inclusive and diverse. He said, "Over the past 10 years, I have learned that diverse teams build better products and make better decisions. Everyone working on React shares my conviction about this." He adds, "Up until recently we have taken a pretty passive stance to building and shaping the React community. We have a responsibility to you all and I feel like we let many of you down. We are committed to doing better!" As a first step, the team has now replaced the React code of conduct with the contributor covenant.

Read also: #Reactgate forces React leaders to confront community's toxic culture head on
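For reference, the part of Suspense that has already shipped - lazily loading a component behind a fallback - looks like this. A minimal sketch; the "./ProfilePage" module path is hypothetical.

```tsx
import React, { Suspense, lazy } from "react";

// React.lazy defers loading the component's code until first render.
const ProfilePage = lazy(() => import("./ProfilePage"));

export function App() {
  return (
    // Until the ProfilePage code arrives, React renders the fallback.
    <Suspense fallback={<p>Loading profile...</p>}>
      <ProfilePage />
    </Suspense>
  );
}
```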
What the React team is working on next

Yuzhi Zheng, Engineering Manager for the React and Relay teams at Facebook, gave an insight into the projects the core teams are working on. She started off by giving a recap of Hooks, one of the most awaited React features announced at React Conf 2018. "Hooks are designed for the future of React in the way that it naturally encourages code that is compatible with all the plumbing features such as accessibility, server-side rendering, suspense, and concurrent mode. Since its release, the reception of Hooks has been really positive," she shared. If you want to understand the fundamentals of React Hooks and use them for implementing responsive design and more, check out our book, Learning Hooks.

Another long-term project the team is focusing on is giving developers a way to easily build accessibility features in React. Currently, developers can create accessible websites using standard HTML techniques, but that approach has limitations. To help build accessibility directly into React, the team is working on two areas: managing focus and input interfaces. For managing focus, the team plans to add primitives that provide "a more structured way of making sure component flows well" for cases like React portals and Suspense fallbacks, and that are accessible by default. For input interfaces, they plan to add support for rich gestures that work across platforms and are accessible by default.

The team is also focusing on improving initial render times. Server-side rendering helps reduce the amount of CPU usage on the client for the initial render to some extent, but it has limitations. To address these, the team plans to add built-in support for server-side rendering that will work with lazily loaded components to reduce the bytes needed on the client, support streaming markup down in chunks, and be fully compatible with Concurrent Mode and Suspense.

The CSS-in-JS library

Frank Yan, Engineering Manager in the React group at Facebook, talked about how the team has rebuilt and redesigned the Facebook website and the key lessons they learned along the way. The new Facebook website is a single-page app, with React organizing the HTML and JavaScript into components from the top down and with GraphQL and Relay colocating the queries declaratively in the components. The only key part the team did not reorganize was CSS. They instead created a new library to embed styles in components, called CSS-in-JS. It aims to make styles easier to read, understand, and update. Its syntax is inspired by React Native and other frameworks. Since it enables you to embed styles inside JavaScript files, you can also use JavaScript tooling like type checkers and linters.

React docs translated into 40 languages

Nat Alison is a freelance front-end developer who helped the React team coordinate translations of reactjs.org into 40 languages. She shared why and how they were able to translate the docs for this massively popular library. She said, "More than 80% of the world's population does not know English. If we restrict React, one of the most popular JavaScript frameworks, we restrict who gets to create and shape the web." Providing officially translated docs will make it easier for many non-English-speaking React developers to understand and use React in their projects.
It will also prevent users from creating unofficial translations, which can be incorrect, outdated, or difficult to find. Initially the team thought of integrating a SaaS platform that allows users to submit translations, but this was not a feasible solution. They then decided to adopt the approach used by Vue, which maintains separate repositories for each language, forked from the original repo. Like Vue, the team also created a bot that periodically checks for changes in the English repo and submits pull requests whenever there is a change. If you want to contribute to translating the React docs into your language, check out the IsReactTranslatedYet website.

Developing accessible apps

Brittany Feenstra, a developer at Formidable, took to the stage to talk about why accessibility is important and how you can approach it. Accessibility, or a11y, is about making your apps and websites usable for everyone, including people with any kind of disability. There are four types of disabilities that developers need to design for: visual, auditory, motor, and cognitive. Feenstra mentioned that though we are all aware of the importance of accessibility, we often "end up saving it for later" because of tight deadlines. Feenstra, however, compares accessibility to a marathon: it is not something you can achieve in just one sprint, she says. You should instead look at it as the training program you would follow when preparing for a marathon. You need to take a step-by-step approach to make an accessible app. If we do that, "we will be way less fatigued and well-equipped," she adds.

Sharing some starting tips, she said that we need to focus on three areas. First, learn to run - in the accessibility context, understand HTML semantics, then explore reference patterns, navigation, and focus traps. Second, improve nutritional habits - use environments and tools that help us write sturdier code. She recommends axe, an accessibility checker for WCAG 2 and Section 508, along with tools that simulate how people with visual impairments will see your UI, such as NoCoffee and I want to see like the colour blind. She emphasizes linting and testing your code for accessibility with the help of eslint-plugin-jsx-a11y and accessibility assessment automation tools. Third, cross-train and stretch - learn to "interact with the UI in ways that let us understand the update we are making to our code."

"React is Fiction"

This was a talk by Jenn Creighton, a front-end architect at The Wing, who comes from a creative writing background. "Writing React to me felt like coming home. It was really familiar in a way that I could not pinpoint," she said. Then she realized that writing React reminded her of fiction, and merging the two disciplines helped her write better components. Creighton drew out the similarities between developing in React and creative writing. One of the key principles of creative writing is "Show, don't tell," which advises authors to describe a situation instead of just stating it, engaging readers by letting them picture it for themselves. According to Creighton, React has a similar principle: "Declarative, not imperative." React is declarative, which allows developers to describe what the final state should be instead of listing all the steps to reach that state. There were many other exciting talks about progressive web animations, building React-Select, and more.
Check out the live streams to watch the full talks:

Day 1: https://www.youtube.com/watch?v=RCiccdQObpo
Day 2: https://www.youtube.com/watch?v=JDDxR1a15Yo&t=2376s

Ionic React released; Ionic Framework pivots from Angular to a native React version
ReactOS 0.4.12 releases with kernel improvements, Intel e1000 NIC driver support, and more
React Native 0.61 introduces Fast Refresh for reliable hot reloading


3 programming languages some people think are dead but definitely aren’t

Richard Gall
24 Oct 2019
11 min read
Recently I looked closely at what it really means when a certain programming language, tool, or trend is declared to be 'dead'. It seems, I argued, that talking about death in respect of different aspects of the tech industry is as much a signal about one's identity and values as a developer as it is an accurate description of a particular 'thing's' reality. To focus on how these debates and conversations play out in practice, I decided to take a look at three programming languages, each of which has been described as dead or dying at some point. What I found might not surprise you, but it nevertheless highlights that the different opinions a person or community has about a language reflect their needs and challenges as software engineers.

Is Java dead?

One of the biggest areas of debate in terms of living, thriving or dying is Java. There are a number of reasons for this. The biggest is the simple fact that it's so widely used. With so many developers using the language for a huge range of reasons, it's not surprising to find such a diversity of opinion across its developer community. Another reason is that Java is so well-established as a programming language. Although it's a matter of debate whether it's declining or dying, it certainly can't be said to be emerging or growing at any significant pace. Java is part of the industry mainstream now. You'd think that might mean it's holding up. But when you consider that this is an industry that doesn't just embrace change and innovation but depends on it for its value, you can begin to see that Java has occupied a slightly odd space for some time.

Why do people think Java is dead?

Java has been on the decline for a number of years. If you look at the TIOBE index from the mid to late part of this decade, it has been losing percentage points. From May 2016 to May 2017, for example, the language declined 6%, indicating that it's losing mindshare to other languages. A further reason for its decline is the rise of Kotlin. Although Java has long been the defining language of Android development, in recent years its reputation has taken a hit as Kotlin has become more widely adopted. As this Medium article from 2018 argues, it's not necessarily a great idea to start a new Android project with Java. The threat to Java isn't only coming from Kotlin - it's coming from Scala too. Scala is another language based on the JVM (Java Virtual Machine). It supports both object-oriented and functional programming, offers many performance advantages over Java, and is being used for a wide range of use cases - from machine learning to application development.

Reasons why Java isn't dead

Although the TIOBE index has shown Java to be a language in decline, it nevertheless remains comfortably at the top of the table. It might have dropped significantly between 2016 and 2017, but more recently its decline has slowed: it dropped only 0.92% between October 2018 and October 2019. From this perspective, it's simply bizarre to suggest that Java is 'dead' or 'dying': it's de facto the most widely used programming language on the planet. Factor in everything that entails - a massive community that means more support, and an extensive ecosystem of frameworks, libraries, and other tools (note Spring Boot's growth as a response to the microservice revolution) - and its position looks even more secure. So, while Java's age might seem like a mark against it, it's also a reason why there's still a lot of life in it.
At a more basic level, Java is ubiquitous; it's used inside a massive range of applications. Insofar as it's inside live apps, it's alive. That means Java developers will be in demand for a long time yet.

The verdict: is Java dead or alive?

Java is very much alive and well. But there are caveats: ultimately, it's not a language that's going to help you solve problems in creative or innovative ways. It will allow you to build things and get projects off the ground, but it's arguably a solid foundation on which you will need to build more niche expertise and specialisation to be a really successful engineer.

Is JavaScript dead?

Although Java might be the most widely used programming language in the world, JavaScript is another ubiquitous language that incites a diverse range of opinions and debate. One of the reasons for this is that some people seriously hate JavaScript. The consensus on Java is a low-level murmur of 'it's fine', but with JavaScript things are far more erratic. This is largely because of JavaScript's evolution. For a long time it played second fiddle to PHP in the web development arena because it was so unstable - it was treated with a kind of stigma, as if it weren't a 'real language.' Over time that changed, thanks largely to HTML5 and improved ES6 standards, but there are still many quirks that developers don't like. In particular, JavaScript isn't a nice thing to grapple with if you're used to, say, Java or C: unlike those languages, it's an interpreted, not a compiled, language. So, why do people think it's dead?

Why do people think JavaScript is dead?

There are a number of very different reasons why people argue that JavaScript is dead. On the one hand, the rise of templates and out-of-the-box CMS and eCommerce solutions means the use of JavaScript for 'traditional' web development will become less important. Essentially, the thinking goes, the barrier to entry is lower, which means there will be fewer people using JavaScript for web development. On the other hand, people look at the emergence of WebAssembly as the death knell for JavaScript. WebAssembly (or Wasm) is "a binary instruction format for a stack-based virtual machine" (that's from the project's website), which means that code can be compiled into a binary format that can be read by a browser. This means you can bring high-level languages such as Rust to the browser. To a certain extent, then, you'd think that WebAssembly would lead to the growth of languages that at the moment feel quite niche.

Read next: Introducing Woz, a Progressive WebAssembly Application (PWA + Web Assembly) generator written entirely in Rust

Reasons why JavaScript isn't dead

First, let's counter the arguments above. In the first instance, out-of-the-box solutions are never going to replace web developers. Someone needs to build those products, and even if organizations choose to use them, JavaScript is still a valuable language for customizing and reshaping purpose-built solutions. While the barrier to entry to getting a web project up and running might be getting lower, that's certainly not going to kill JavaScript. Indeed, you could even argue that the pool is growing as more people start to pick up the basic elements of the web. On the WebAssembly issue: this is a slightly more serious threat to JavaScript, but it's important to remember that WebAssembly was never designed simply to replicate JavaScript's existing use case.
As this useful article explains: "...They solve two different issues: JavaScript adds basic interactivity to the web and DOM while WebAssembly adds the ability to have a robust graphical engine on the web. WebAssembly doesn't solve the same issues that JavaScript does because it has no knowledge of the DOM. Until it does, there's no way it could replace JavaScript."

Web Assembly might even renew faith in JavaScript. By tackling some of the problems that many developers complain about, it means the language can be used for the problems it is better suited to solve.

But aside from all that, there is a wealth of other reasons that JavaScript is far from dead. React continues to grow in popularity, as does Node.js - the latter in particular is influential in how it has expanded what's possible with the language, moving it from the browser to the server.

The verdict: is JavaScript dead or alive?

JavaScript is very much alive and well, however much people hate it. With such a wide ecosystem of tools surrounding it, the way that it's used might change, but the language is here to stay and has a bright future.

Is C dead?

C is one of the oldest programming languages around (it's approaching its 50th birthday). It's a language that has helped build the foundations of the software world as we know it today, including just about every operating system. But although it's a fundamental part of the technology landscape, there are murmurs that it's just not up to the job any more…

Why do people think that C is dead?

If you want to get a sense of the division of opinion around C, you could do a lot worse than this article on TechCrunch. "C is no longer suitable for this world which C has built," explains engineer Jon Evans. "C has become a monster. It gives its users far too much artillery with which to shoot their feet off. Copious experience has taught us all, the hard way, that it is very difficult, verging on 'basically impossible,' to write extensive amounts of C code that is not riddled with security holes."

The security concerns are reflected elsewhere, with one writer arguing that "no one is creating new unsafe languages. It's not plausible to say that this is because C and C++ are perfect; even the staunchest proponent knows that they have many flaws. The reason that people are not creating new unsafe languages is that there is no demand. The future is safe languages."

Added to these concerns is the rise of Rust - it could, some argue, be an alternative to C (and C++) for lower-level systems programming that is more modern, safer and easier to use.

Reasons why C isn't dead

Perhaps the most obvious reason why C isn't dead is the fact that it's integral to so much of the software we use today. We're not just talking about your standard legacy systems; C is inside the operating systems that allow us to interface with software and machines.

One of the arguments often made against C is that 'the web is taking over', as if software in general is moving up levels of abstraction that make languages at the machine level all but redundant. Aside from that argument being plainly flawed (i.e. what's the web built on?), with IoT and embedded computing growing at a rapid rate, C is only going to become more important.

To return to our good friend the TIOBE index: C is in second place, the same position it held in October 2018. Like Java, then, it's holding its own in spite of the rumors. Unlike Java, moreover, C's rating has actually increased over the course of a year.
Not a massive amount, admittedly - 0.82% - but a solid performance that suggests it's a long way from dead.

Read next: Why does the C programming language refuse to die?

The verdict: is C dead or alive?

C is very much alive and well. It's old, sure, but it's buried inside too much of our existing software infrastructure for it to simply be cast aside. This isn't to say it's without flaws. From a security and accessibility perspective, we're likely to see languages like Rust gradually grow in popularity to tackle some of the challenges that C poses.

But an equally important point to consider is just how fundamental C is for people who want to really understand programming in depth. Even if it doesn't have the widest range of use cases, the fact that it can give developers and engineers an insight into how code works at various levels of the software stack means it will always remain a language that demands attention.

Conclusion: Listen to multiple perspectives on programming languages before making a judgement

The obvious conclusion to draw from all this is that people should just stop being so damn opinionated. But I don't actually think that's correct: people should keep being opinionated and argumentative. There's no place for snobbery or exclusion, but anyone who has a view on something's value should certainly express it. It helps other people understand the language in a way that's not possible through documentation or more typical learning content.

What's important is that we read opinions with a critical eye: what's this person's agenda? What's their background? What are they trying to do? After all, there are things far more important than whether something is dead or alive - building great software we can be proud of is one of them.

Firefox 70 released with better security, CSS, and JavaScript improvements

Savia Lobo
23 Oct 2019
6 min read
The Mozilla team announced the much-awaited release of Firefox 70 yesterday, with new features like secure password generation with Lockwise and the new Firefox Privacy Protection Report. Firefox 70 also includes a plethora of additions for developers, such as DOM mutation breakpoints and inactive CSS rule indicators in the DevTools, several new CSS text properties, two-value display syntax, JS numeric separators, and much more.

Firefox 70 centers around enhanced privacy and security

The new Firefox 70 includes Enhanced Tracking Protection (ETP), which comes with a Firefox Privacy Protection Report that gives additional details and more visibility into how you're being tracked online so you can better combat it. Enhanced Tracking Protection was set as the browser default in September this year.

The report highlights how ETP prevents third-party trackers from building a user's profile based on their online activity, and it includes the number of cross-site and social media trackers, fingerprinters and crypto-miners Mozilla blocked. The report also helps users keep themselves updated with Firefox Monitor and Firefox Lockwise. Firefox Monitor gives users a summary of the number of unsafe passwords that may have been used in a breach, so that they can take action to update and change those passwords. Firefox Lockwise helps users manage passwords across synced devices; it includes a button users can click to view their logins and updates, as well as the ability to quickly view and manage how many devices they are syncing with and sharing passwords with. To know more about security in Firefox 70, read Mozilla's blog.

What's new in Firefox 70

Updated HTML forms and secure passwords

To generate secure passwords, the team has updated HTML input elements: any input element of type password will have an option to generate a secure password available in the context menu, which can then be stored in Lockwise. In addition, any type="password" field with autocomplete="new-password" set on it will have an autocomplete UI to generate a new password in-context.

New CSS improvements

Firefox 70 includes some CSS improvements, like new options for styling underlines and a new set of two-keyword display values. The options for styling underlines comprise three new properties related to text-decoration (underline):

text-decoration-thickness: sets the thickness of lines added via text-decoration.
text-underline-offset: sets the distance between a text decoration and the text it is set on. Bear in mind that this only works on underlines.
text-decoration-skip-ink: sets whether underlines and overlines are drawn if they cross descenders and ascenders. The default value, auto, causes them to only be drawn where they do not cross over a glyph. To allow underlines to cross glyphs, set the value to none.

Two-keyword display values

Until now, the display property has taken a single value. However, as the team explains, "the boxes on a page have an outer display type, which determines how the box is laid out in relation to other boxes on the page, and an inner display type, which determines how the box's children will behave." The two-keyword values allow you to explicitly specify the outer and inner display values.
In supporting browsers (which currently means only Firefox), the single keyword values will map to new two-keyword values, for example:

display: flex; is equivalent to display: block flex;
display: inline-flex; is equivalent to display: inline flex;

JavaScript improvements

Firefox 70 now supports numeric separators for JavaScript: underscores can now be used as separators in large numbers so that they are more readable (a short sketch of this and the Intl improvements below follows at the end of this article). Other improvements in JavaScript include:

Intl improvements

Firefox 70 includes improved JavaScript i18n (internationalization), starting with the implementation of the Intl.RelativeTimeFormat.formatToParts() method. This is a special version of Intl.RelativeTimeFormat.format() that returns an array of objects, each one representing a part of the value, rather than returning a string of the localized time value. Also, Intl.NumberFormat.format() and Intl.NumberFormat.formatToParts() now accept BigInt values.

Performance improvements

The inclusion of the new baseline interpreter has sped up JavaScript. The code for the new interpreter includes shared code from the existing Baseline JIT. You can read more about the interpreter in The Baseline Interpreter: a faster JS interpreter in Firefox 70.

New developer tools

The Developer Tools Accessibility panel now includes an audit for keyboard accessibility and a color deficiency simulator for systems with WebRender enabled.

Pause option for DOM mutation in the Debugger

DOM Mutation Breakpoints (aka DOM Change Breakpoints) let you pause scripts that add, remove, or change specific elements. Once a DOM mutation breakpoint is set, you'll see it listed under "DOM Mutation Breakpoints" in the right-hand pane of the Debugger; this is also where you'll see breaks reported.

Source: Mozilla Hacks

Color contrast information in the color picker

In the CSS Rules view, you can click foreground colors with the color picker to determine if their contrast with the background color meets accessibility guidelines.

Accessibility inspector: keyboard checks

The Accessibility inspector's "Check for issues" dropdown now includes keyboard accessibility checks. Selecting this option causes Firefox to go through each node in the accessibility tree and highlight all that have a keyboard accessibility issue. Hovering over or clicking each one will reveal information about what the issue is, along with a "Learn more" link for more details on how to fix it.

WebSocket inspector

In Firefox DevEdition, the Network monitor now has a new "Messages" panel, which appears when you are monitoring a WebSocket connection (i.e. a 101 response). This can be used to inspect WebSocket frames sent and received through the connection. This functionality was originally supposed to be in the Firefox 70 general release, but the team had a few more bugs to resolve, so expect it in Firefox 71! For now, users can explore it in the DevEdition.

Fixed issues in Firefox 70

Built-in Firefox pages now follow the system dark mode preference.
Aliased theme properties have been removed, which may affect some themes.
Passwords can now be imported from Chrome on macOS, in addition to existing support for Windows.
Readability is now greatly improved on underlined or overlined text, including links: the lines will now be interrupted instead of crossing over a glyph.
Improved privacy and security indicators:

A new crossed-out lock icon will indicate sites delivered via insecure HTTP.
The formerly green lock icon is now grey.
The Extended Validation (EV) indicator has been moved to the identity popup that appears when clicking the lock icon.

To know more about the other improvements and bug fixes in Firefox 70, read Mozilla's official blog.

Google and Mozilla to remove Extended Validation indicators in Chrome 77 and Firefox 70
Firefox 69 allows default blocking of third-party tracking cookies and cryptomining for all users
Mozilla Thunderbird 78 will include OpenPGP support, expected to be released by Summer 2020
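
To make the JavaScript additions described above a little more concrete, here is a brief sketch of numeric separators and the new Intl methods. The values and variable names are illustrative only, and the exact shape of formatToParts() output varies by locale and input.

// Numeric separators: underscores make large literals easier to read.
const fileSizeLimit = 1_000_000_000; // the same value as 1000000000
console.log(fileSizeLimit === 1000000000); // true

// Intl.RelativeTimeFormat.formatToParts() returns the localized value
// broken into an array of { type, value } parts instead of one string.
const rtf = new Intl.RelativeTimeFormat('en', { numeric: 'auto' });
console.log(rtf.format(-1, 'day')); // "yesterday"
console.log(rtf.formatToParts(-2, 'day')); // e.g. integer and literal parts

// Intl.NumberFormat.format() now accepts BigInt values.
const nf = new Intl.NumberFormat('en-US');
console.log(nf.format(1_000_000_000_000n)); // "1,000,000,000,000"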

Node.js 13 releases with an upgraded V8, full ICU support, stable Worker Threads API and more

Fatema Patrawala
23 Oct 2019
4 min read
Yesterday was an exciting day for Node.js developers, as the Node.js foundation announced that Node.js 12 transitions to Long Term Support (LTS) alongside the release of Node.js 13. As per the team, Node.js 12 becomes the newest LTS release along with versions 10 and 8. This release marks the transition of Node.js 12.x into LTS with the codename 'Erbium'. The 12.x release line now moves into "Active LTS" and will remain so until October 2020; it will then move into "Maintenance" until its end of life in April 2022.

The new Node.js 13 release delivers faster startup and better default heap limits. It includes updates to V8, TLS and llhttp, and new features like a diagnostic report, a bundled heap dump capability, and updates to Worker Threads, N-API, and more.

Key features in Node.js 13

Let us take a look at the key features included in Node.js 13.

V8 gets an upgrade to V8 7.8

This release is compatible with the new version V8 7.8. This new version of the V8 JavaScript engine brings performance tweaks and improvements to keep Node.js up to date with ongoing improvements in the language and runtime.

Full ICU enabled by default in Node.js 13

As of Node.js 13, full-icu is now the default build, which means hundreds of additional locales are supported out of the box. This will simplify the development and deployment of applications for non-English deployments.

Stable Workers API

The Worker Threads API is now a stable feature in both Node.js 12 and Node.js 13. While Node.js already performs well with the single-threaded event loop, there are some use cases where additional threads can be leveraged for better results (see the short sketch below).

New compiler and platform support

Node.js and V8 continue to embrace newer C++ features and take advantage of newer compiler optimizations and security enhancements. With the release of Node.js 13, the codebase now requires a minimum of version 10 of the OS X development tools and version 7.2 of the AIX operating system. In addition, there has been progress on supporting Python 3 for building Node.js applications. Systems that have both Python 2 and Python 3 installed will still be able to use Python 2; however, systems with only Python 3 should now be able to build using Python 3.

Developers discuss pain points in Node.js 13

On Hacker News, users discussed various pain points in Node.js 13 and some of the functionality missing from this release. One user commented, "To save you the clicks: Node.js 13 doesn't support top-level await. Node includes V8 7.8, released Sep 27. Top-level await merged into V8 on Sep 24, but didn't make it in time for the 7.8 release."

The V8 team responded, "TLA is only in modules. Once node supports modules, it will also have TLA. We're also pushing out a version with 7.9 fairly soonish."

Other users discussed how Node.js performs with TypeScript: "I've been using node with typescript and it's amazing. VERY productive. The key thing is you can do a large refactoring without breaking anything. The biggest challenge I have right now is actually the tooling. Intellij tends to break sometimes. I'm using lerna for a monorepo with sub-modules and it's buggy with regular npm. For example 'npm audit' doesn't work. I might have to migrate to yarn…"

If you are interested to know more about this release, check out the official Node.js blog post as well as the GitHub page for release notes.
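
As a minimal illustration of the now-stable Worker Threads API, here is a sketch of a script that offloads a CPU-heavy computation to a worker thread. The file name and the Fibonacci function are just stand-ins for real work.

// worker-demo.js - run with: node worker-demo.js
const { Worker, isMainThread, parentPort, workerData } = require('worker_threads');

if (isMainThread) {
  // Main thread: spawn a worker that runs this same file.
  const worker = new Worker(__filename, { workerData: { n: 35 } });
  worker.on('message', (result) => console.log('fib(35) =', result));
  worker.on('error', (err) => console.error('worker failed:', err));
} else {
  // Worker thread: do the heavy work off the main event loop.
  const fib = (n) => (n < 2 ? n : fib(n - 1) + fib(n - 2));
  parentPort.postMessage(fib(workerData.n));
}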
The OpenJS Foundation accepts NVM as its first new incubating project since the Node.js Foundation and JSF merger
12 Visual Studio Code extensions that Node.js developers will love [Sponsored by Microsoft]
5 reasons Node.js developers might actually love using Azure [Sponsored by Microsoft]
Introducing Node.js 12 with V8 JavaScript engine, improved worker threads, and much more
Google is planning to bring Node.js support to Fuchsia

What do we really mean when we say that software is ‘dead’ or ‘dying’?

Richard Gall
22 Oct 2019
12 min read
“The report of my death was an exaggeration,” Mark Twain once wrote in a letter to the journalist Frank Marshall White. Twain's quip is a fitting refrain for much of the software industry. Year after year there is a new wave of opinion from experts declaring this or that software or trend to be dead - or, if it's lucky, merely dying.

I was inclined to think this was a relatively new phenomenon, but the topic comes up on Jeff Atwood's blog Coding Horror as far back as 2009. Atwood quotes from an article by the influential software engineer Tom DeMarco in which DeMarco writes that “software engineering is an idea whose time has come and gone.” (The hyperlink that points to DeMarco's piece is, ironically, now dead.)

So, it's clearly not something new to the software industry. In fact, this rhetorical trope can tell us a lot about identity, change, and power in tech. Declaring something to be dead is an expression of insecurity, curiosity and sometimes plain obnoxiousness. Consider these questions from Quora - all of them appear to reflect an odd combination of status anxiety and technical interest:

Is DevOps dead?
Is React.js dead?
Is Linux dead? Why? (yes, really)
Is web development a dying career? (It's also something I wrote about last year.)

These questions don't come out of a vacuum. They're responses to existing opinions and discussions that are ongoing in different areas. To a certain extent they're valuable: asking questions like those above and using these sorts of metaphors are ways of assessing the value and relevance of different technologies.

That being said, they should probably be taken with a pinch of salt. Although they can be an indicator of community feeling towards a particular approach or tool (the plural of anecdote isn't exactly data, but it's not as far off as many people pretend it is), they often tell you as much about the person doing the talking as about the thing they're talking about. To be explicit: saying something is dead or dying is very often a mark of perspective or even identity. This might be frustrating, but it nevertheless provides an insight into how different technologies are valued or being used at any given time. What's important, then, is to be mindful about why someone would describe this or that technology as dead. What might they really be trying to say?

“X software is dead because companies just aren't hiring for it”

One of the reasons you might hear people proclaim a certain technology to be dead is that companies are, apparently, no longer hiring for those skills. It stops appearing on job boards; it's no longer 'in demand'. While this undoubtedly makes sense, and it's true that what's in demand will shift and change over time, all too often assertions that 'no one is hiring X developers any more' are anecdotal.

For example, although there have been many attempts to start rumors that JavaScript is 'dead' or that its time has come - like this article from a few years back, in which the writer claims that JavaScript developers have been “mind-fucked into thinking that JavaScript is a good programming language” - research done earlier this year shows that 70% of companies are searching for JavaScript developers. So, far from dead. In the same survey, Java came out as another programming language eagerly sought out by companies: 48% were on the lookout for Java developers.
And while this percentage has almost certainly decreased over the last decade (admittedly I couldn't find any research to back this up), that's still a significant enough chunk to debunk the notion that Java is 'dead' or 'dying.'

Software ecosystems don't develop chronologically

This write-up of the research by freeCodeCamp argues that it shows that variation can be found “not between tech stacks but within them.” This suggests that the tech world isn't quite as Darwinian as it's often made out to be. It's not a case of life and death, but instead of different ecosystems all evolving in different ways and at different times. So, yes, maybe there is some death (after all, there aren't as many people using Ember or Backbone as there were in 2014), but, as with many things, it's actually a little more complicated…

“X software is dead because no one's learning it”

If no one's learning something, that's presumably a good sign that technology X is dead or dying, right? Well, to a certain extent. It might give you an indication of how wider trends are evolving, and even of the types of problems that engineers and their employers are trying to solve, but again, it's important to be cautious. It sounds obvious, but just because no one seems to be learning something, it doesn't mean that people aren't.

So yes, in certain developer communities it might be weird to consider that people are learning Java. But with such significant employer demand, there are undoubtedly thousands of people trying to get to grips with it. Indeed, they might well be starting out in their careers - but you can't overlook the importance of established languages as a stepping stone into more niche and specialised roles and into more 'exclusive' communities.

That said, what people are learning can be instructive in the context of the argument made in the freeCodeCamp article mentioned above. If variation inside tech stacks is where change and fluctuation are actually happening, then the fact that people are learning a given library or framework will give a clear indication as to how that specific ecosystem is evolving.

Software only really dies when its use cases do

But even then, it's still a leap to say that something's dead. It's also somewhat misleading and unhelpful. So, although it might be the case that more people are learning React or Kotlin than other related technologies, that doesn't cancel out the fact that those other technologies may still have a part to play in a particular use case.

Another important aspect to consider when thinking about what people are learning is that it's often part of the whole economics of the hype cycle. The more something gets talked about, the more individual developers might be intrigued about how it actually works. This doesn't, of course, mean they would necessarily start using it for professional projects, or that we'd start to see adoption across large enterprises. There is a mesh of different forces at play when thinking about tech life cycles - sometimes our metaphors don't really capture what's going on.

“It's dead because there are better options out there”

There's one thing I haven't really touched on that's important when thinking about death and decline in the context of software: the fact that there are always options. You use one tool, language, framework, library - whatever - because it's the best suited to your needs. If one option comes to supersede another for whatever reason, it's only natural to regard what came before as obsolete in some way.
You can see this way of thinking on a large scale in the context of infrastructure. From virtual machines, to containers, to serverless, the technologies that enable each of those phases might be considered 'dead' as we move from one to the other. Except that just isn't the case. While containerized solutions might be more popular than virtual machines, and while serverless hints at an alternative to containers, each of these approaches is still very much in play. Indeed, you might even see all of them inside the same software architecture: virtual machines might make sense here, but for that part of an application over there, serverless functions are the best option for the job.

With this in mind, throwing around the D word is - as mentioned above - misleading. In truth, it's really just a way for the speaker to signal that X doesn't work for them anymore and that Y is a much better option for what they're trying to do.

The vanity of performed expertise

And that's fine - learning from other people's experiences is arguably the best way to learn when it comes to technology (far better than, say, a one-dimensional manual or bone-dry documentation). But when we use the word 'dead' we hide what might actually still be interesting or valuable about a given technology. In our bid to signal our own expertise and knowledge, we close down avenues of exploration. And vanity only feeds the potentially damaging circus of hype cycles and burnout even more.

So, if Kotlin really is a better option for you, then that's great. But it doesn't mean Java is dead or dying. Indeed, it's more likely the case that what we're seeing is use cases growing and proliferating, with engineering teams and organizations requiring a more diverse set of tools for an increasingly diverse and multifaceted range of problems. If software does indeed die, then it's not really a linear process. Various use cases will all evolve, and over time they will start to impact one another. Maybe eventually we'll see Kotlin replace Java as the language evolves to tackle a wider range of use cases.

“X software pays more money, so Y software must be dying”

The idea that certain technologies are worth more than others feeds into the narrative that certain technologies and tools are dying. But it's just a myth - and a dangerous one at that. Although there is some research on which technologies are the most highly paid, much of it lacks context. So, although this piece on Forbes might look insightful (wow, HBase engineers earn more than $120K!), it doesn't really give you the wider picture of why these technologies command certain salaries. And, more to the point, it ignores the fact that these technologies are just tools used by people in certain job roles. Indeed, it's more accurate to say that big data engineers and architects command high salaries than to think anything as trite as 'Kafka developers are really well-respected by their employers'!

Talent gaps and industry needs

It's probably more useful to look at variation within specific job roles. By this I mean looking at what tools the highest-earning full-stack developers or architects are using. At least that would be a little more interesting and instructive. But even then, it wouldn't necessarily tell you whether something has 'died'. It would simply hint at two things: where the talent gaps are, and what organizations are trying to do.
That might give you a flavor of how something is evolving - indeed, it might be useful if you're a developer or engineer looking for a new job. However, it doesn't mean that something is dead. Java developers might not be paid a great deal, but that doesn't mean the language is dead. If anything, the opposite is true: it's alive and well, with a massive pool of programmers from which employers can choose. The hype cycle might give us an indication of new opportunities and new solutions, but it doesn't necessarily offer the solutions we need right now.

But what about versioning? And end-of-life software?

Okay, these are important issues. In reality, yes, these are examples of when software really is dead. Well, not quite - there are still complications. For example, even as a new version of a particular technology is released, it still takes time for individual projects and wider communities to make the move. Even end-of-life software that's no longer supported by vendors or maintainers can still have an afterlife in poorly managed projects (the existence of articles like this suggests that this is more common than you'd hope). In a sense this is zombie software that keeps on living years after debates about whether it's alive or dead have ceased.

In theory, versioning should be the formal way through which we manage death in the software industry. But the fact that even then our nicely ordered systems still fail to properly bury and retire editions of software packages, languages, and tools highlights that, in reality, it's actually really hard to kill off software properly. For all the communities that want to kill software, there are always other groups that, whether through force of will or plain old laziness, want to keep it alive.

Conclusion: Software is hard to kill

Perhaps that's why we like to say that certain technologies are dead: not only do such proclamations help to signify how we identify (i.e. the type of developer we are), they're also a rhetorical trick that banishes nuance and complexity. If we are managing complexity and solving tricky and nuanced problems every day, the idea that we can simplify our own area of expertise into something that is digestible - quotable, even - is a way of establishing some sort of control in a field where it feels we have anything but.

So, if you're ever tempted to say that a piece of software is 'dead,' ask yourself what you really mean. And if you overhear someone obnoxiously proclaiming a framework, library or language to be dying, consider what they're trying to say. Are they just trying to make a point about themselves? What's the other side of the story? And even if they're almost correct, to what extent aren't they?

Microsoft launches Open Application Model (OAM) and Dapr to ease developments in Kubernetes and microservices

Vincy Davis
17 Oct 2019
5 min read
Yesterday, Microsoft announced the launch of two new open-source projects: the Open Application Model (OAM) and Dapr. OAM, developed by Microsoft and Alibaba Cloud under the Open Web Foundation, is a specification that enables developers to define a coherent model to represent an application. The Dapr project, on the other hand, allows developers to build portable microservice applications using any language and framework, for new or existing code.

Open Application Model (OAM)

In OAM, an application is made up of many components, like a MySQL database or a replicated PHP server with a corresponding load balancer. These components are used to build an application, enabling platform architects to utilize reusable components for the easy building of reliable applications. OAM also empowers application developers to separate the application description from the application deployment details, allowing them to focus on the key elements of their application instead of its operational details.

Microsoft also asserted that OAM has some unique characteristics, such as being platform agnostic. The official blog states, “While our initial open implementation of OAM, named Rudr, is built on top of Kubernetes, the Open Application Model itself is not tightly bound to Kubernetes. It is possible to develop implementations for numerous other environments including small-device form factors, like edge deployments and elsewhere, where Kubernetes may not be the right choice. Or serverless environments where users don’t want or need the complexity of Kubernetes.”

Another important feature of OAM is its design extensibility. OAM enables platform providers to expose the unique characteristics of their platform through the trait system, which will help them build cross-platform apps wherever the necessary traits are supported.

In an interview with TechCrunch, Microsoft Azure CTO Mark Russinovich said that Kubernetes is currently “infrastructure-focused” and does not provide any resource for building a relationship between the objects of an application. Russinovich believes that OAM will solve a problem that many developers and ops teams face today. Commenting on the cooperation with Alibaba Cloud on this specification, Russinovich observed that both companies encountered the same problems when they talked to their customers and internal teams. He further said that over time Alibaba Cloud will launch a managed service based on OAM, and chances are that Microsoft will do the same.

The Dapr project for building microservice applications

This is an alpha release of Dapr, an event-driven runtime to help developers build resilient, stateless and stateful microservice applications for the cloud and edge. It allows applications to be built using any programming language and developer framework.

“In addition, through the open source project, we welcome the community to add new building blocks and contribute new components into existing ones. Dapr is completely platform agnostic, meaning you can run your applications locally, on any Kubernetes cluster, and other hosting environments that Dapr integrates with. This enables developers to build microservice applications that can run on both the cloud and edge with no code changes,” stated the official blog.

Image Source: Microsoft

APIs in Dapr are exposed as a sidecar architecture (either as a container or as a process) and do not require the application code to include any Dapr runtime code.
This simplifies Dapr integration from other runtimes and keeps application logic separate for improved supportability.

Image Source: Microsoft

Building blocks of Dapr

Resilient service-to-service invocation: enables method calls, including retries, on remote services, wherever they are running in the supported hosting environment.
State management for key/value pairs: allows long-running, highly available, stateful services to be written easily, alongside stateless services in the same application (a short sketch of this building block follows below).
Publish and subscribe messaging between services: enables event-driven architectures that simplify horizontal scalability and make applications resilient to failure.
Event-driven resource bindings: helps in building event-driven architectures for scale and resiliency by receiving and sending events to and from any external resources, such as databases, queues, file systems, blob stores, webhooks, etc.
Virtual actors: a pattern for stateless and stateful objects that makes concurrency simple with method and state encapsulation. Dapr also provides state and life-cycle management for actor activation/deactivation, plus timers and reminders to wake up actors.
Distributed tracing between services: enables easy diagnosis of inter-service calls in production using the W3C Trace Context standard, and allows push events for tracing and monitoring systems.

Users have liked both of the open-source projects, especially Dapr. A user on Hacker News comments, “I'm excited by Dapr! If I understand it correctly, it will make it easier for me to build applications by separating the "plumbing" (stateful & handled by Dapr) from my business logic (stateless, speaks to Dapr over gRPC). If I build using event-driven patterns, my business logic can be called in response to state changes in the system as a whole. I think an example of stateful "plumbing" is a non-functional concern such as retrying a service call or a write to a queue if the initial attempt fails. Since Dapr runs next to my application as a sidecar, it's unlikely that communication failures will occur within the local node.”

https://twitter.com/stroker/status/1184810311263629315
https://twitter.com/ThorstenHans/status/1184513427265523712

The new WebSocket Inspector will be released in Firefox 71
Made by Google 2019: Google’s hardware event unveils Pixel 4 and announces the launch date of Google Stadia
What to expect from D programming language in the near future
An unpatched security issue in the Kubernetes API is vulnerable to a “billion laughs” attack
Kubernetes 1.16 releases with Endpoint Slices, general availability of Custom Resources, and other enhancements
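
Returning to the state-management building block listed above, here is a minimal sketch of saving and reading a key/value pair through the Dapr sidecar's HTTP API. This is an illustrative sketch only: it assumes the sidecar listens on its default port 3500 with a state store named 'statestore' configured, the exact routes and defaults may differ between Dapr versions, and the global fetch call assumes a runtime that provides one (such as a recent Node.js).

// Hypothetical sketch of Dapr's HTTP state API; verify the routes
// against the docs for your Dapr version.
const DAPR_STATE_URL = 'http://localhost:3500/v1.0/state/statestore';

async function saveAndLoad() {
  // Save a key/value pair; the sidecar persists it to the configured store.
  await fetch(DAPR_STATE_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify([{ key: 'order-1', value: { amount: 42 } }]),
  });

  // Read the value back by key.
  const res = await fetch(DAPR_STATE_URL + '/order-1');
  console.log(await res.json()); // { amount: 42 }
}

saveAndLoad().catch(console.error);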

The new WebSocket Inspector will be released in Firefox 71

Fatema Patrawala
17 Oct 2019
4 min read
On Tuesday, the Firefox DevTools team announced that the new WebSocket (WS) inspector will be available in Firefox 71. It is currently ready for developers to use in Firefox Developer Edition.

The WebSocket API is used to create a persistent connection between a client and a server. Because the API sends and receives data at any time, it is used mainly in applications requiring real-time communication. Although it is possible to work directly with the WS API, some existing libraries come in handy and help save time. These libraries can help with connection failures, proxies, authentication and authorization, scalability, and much more. The WS inspector in Firefox DevTools currently supports Socket.IO and SockJS, with more support still a work in progress.

Key features included in the Firefox WebSocket Inspector

1. The WebSocket Inspector is part of the existing Network panel UI in DevTools. It was already possible to filter the content for opened WS connections in the panel, but now you can see the actual data transferred through WS frames.
2. The WS UI now offers a fresh new Messages panel that can be used to inspect WS frames sent and received through the selected WS connection.
3. There are Data and Time columns visible by default, and you can customize the interface to see more columns by right-clicking on the header.
4. The WS inspector currently supports the following WS protocols: plain JSON, Socket.IO, and SockJS (SignalR and WAMP will be supported soon).
5. You can use the pause/resume button in the Network panel toolbar to stop intercepting WS traffic.

The Firefox team is still working on a few things for this release - for example, a binary payload viewer, indicating closed connections, more protocols like SignalR and WAMP, and exporting WS frames.

For developers, this is a major improvement, and the community is really happy with the news. One of them comments on Reddit, “Finally! Have been stuck rolling with Chrome whenever I'm debugging websocket issues until now, because it's just so damn useful to see the exact messages sent and received.”

Another user commented, “This came at the most perfect time... trying to interface with a Socket.IO server from a Flutter app is difficult without tools to really look at the internals and see what’s going on”

Some also feel that with such improvements, Firefox will begin to challenge the current Chromium dominance. The comment reads, “I hope that in improving its dev tooling with things like WS inspection, Firefox starts to turn the tide from the Chromium's current dominance. Pleasing webdevs seems to be the key to winning browser wars. The general pattern is, the devs switch to their preferred browser. When building sites, they do all their build testing against their favourite browser, and only make sure it functions on other browsers (however poorly) as an afterthought. Then everyone else switches to suit, because it's a better experience. It happened when IE was dominant (partly becuse of dodgy business practices, but also partly because ActiveX was more powerful than early JS). But then Firefox was faster and had [better] devtools and add-ons, so the devs switched to Firefox and everyone followed suit. Then Chrome came onto the scene as a faster browser with even better devtools, and now Chromium+Forks is over three quarters of the browser market share. A browser monopoly is bad for the web ecosystem, no matter what browser happens to be dominant.”

To know more about this news, check out the official announcement on the Firefox blog.
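
For context on what the inspector displays, here is a minimal browser-side WebSocket client; every frame it sends or receives would show up in the new Messages panel. The endpoint URL is a placeholder for any WS server you control.

// Minimal WebSocket client sketch; wss://example.com/socket is a placeholder.
const ws = new WebSocket('wss://example.com/socket');

ws.addEventListener('open', () => {
  // Outgoing frame: appears in the Messages panel as sent data.
  ws.send(JSON.stringify({ type: 'greeting', body: 'hello' }));
});

ws.addEventListener('message', (event) => {
  // Incoming frame: appears in the Messages panel as received data.
  console.log('received:', event.data);
});

ws.addEventListener('close', () => console.log('connection closed'));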
Firefox 69 allows default blocking of third-party tracking cookies and cryptomining for all users
Cloudflare and Google Chrome add HTTP/3 and QUIC support; Mozilla Firefox soon to follow suit
Mozilla brings back Firefox’s Test Pilot Program with the introduction of Firefox Private Network Beta