How-To Tutorials - Programming

5 reasons you should learn Node.js

Richard Gall
22 Feb 2019
7 min read
Open source software in general, and JavaScript in particular, can seem like a place where boom and bust is the rule: rapid growth before everyone moves on to the next big thing. But Node.js is different. Although it certainly couldn't be described as new, and its growth hasn't been dramatic by any measure, over the last few years it has managed to push itself forward as one of the most widely used JavaScript tools on the planet.

Do you want to learn Node.js? Popularity, however, can only tell you so much. The key question, if you're reading this, is whether you should learn Node.js. So, to help you decide if it's time to learn the JavaScript runtime, here's a list of the biggest reasons why you should start learning Node.js... Learn everything you need to know about Node.js with Packt's Node.js Complete Reference Guide, available as a Book or Learning Path.

Node.js lets you write JavaScript on both client and server

Okay, let's get the obvious one out of the way first: Node.js is worth learning because it allows you to write JavaScript on the server. This has arguably transformed the way we think about JavaScript. Whereas in the past it was a language written specifically for the client, backed by the likes of PHP and Java, it's now a language that you can use across your application.

Read next: The top 5 reasons Node.js could topple Java

This is important because it means teams can work much more efficiently together. Using different languages for backend and frontend is typically a major source of friction. Unless you have very good polyglot developers, a team is restricted to its core skills, while tooling is also more inflexible. If you're using JavaScript across the stack, it's easier to use a consistent toolchain.

From a personal perspective, learning Node.js is a great starting point for full stack development. In essence, it's like an add-on that immediately expands what you can do with JavaScript. In terms of your career, then, it could well make you an invaluable asset to a development team.

Read next: How is Node.js changing web development?

Node.js allows you to build complex and powerful applications without writing complex code

Another strong argument for Node.js is that it is built for performance. This is because of two important things: Node.js' asynchronous, event-driven architecture, and the fact that it uses the V8 JavaScript engine. The significance of this is that V8 is one of the fastest implementations of JavaScript, used to power many of Google's immensely popular in-browser products (like Gmail).

Node.js is powerful because it employs an asynchronous paradigm for handling data between client and server. To clarify what this means, it's worth comparing it to the typical application server model that uses blocking I/O - in this instance, the application has to handle each request sequentially, suspending threads until they can be processed. This adds complexity to an application and, of course, slows it down. In contrast, Node.js uses non-blocking I/O: a single thread can manage multiple requests, and if one can't be processed immediately, it's effectively 'withheld' as a promise and executed later without holding up the others. This means Node.js can help you build applications of considerable complexity without adding to the complexity of your code.
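To make the non-blocking model above concrete, here is a minimal TypeScript sketch of a Node.js HTTP server. The simulated slow I/O call, the greeting text, and the port number are illustrative assumptions, not something from the original article.

```typescript
import { createServer } from "node:http";
import { setTimeout as sleep } from "node:timers/promises";

// Simulates a slow I/O operation (e.g. a database query) without blocking the event loop.
async function fetchGreeting(): Promise<string> {
  await sleep(100); // while this request waits, the event loop keeps serving other requests
  return "hello from a non-blocking handler\n";
}

const server = createServer(async (_req, res) => {
  const body = await fetchGreeting(); // this request is parked as a promise, not a blocked thread
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end(body);
});

server.listen(3000, () => console.log("Listening on http://localhost:3000"));
```

Under load, many requests can be in flight at once on the same thread, which is the property the article describes as work being 'withheld' as a promise.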
Node.js is well suited to building microservices

Microservices have become a rapidly growing architectural style that offers increased agility and flexibility over the traditional monolith. The advantages of microservices are well documented, and whether or not they're right for you now, it's likely that they're going to dominate the software landscape as the world moves away from monolithic architecture. This fact only serves to strengthen the argument that you should learn Node.js, because the runtime is so well suited to developing in this manner. It encourages you to develop in a modular and focused way, quite literally using specific modules to build an application. This is distinct from, and almost at odds with, the monolithic approach to software architecture.

At this point, it's probably worth highlighting that it's incredibly easy to package and publish the modules you build, thanks to npm (the Node package manager). So, even if you haven't yet worked with microservices, learning Node.js is a good way to prepare yourself for a future where they are going to become even more prevalent.

Node.js can be used for more than just web development

We know by now that Node.js is flexible. But it's important to recognise that its flexibility means it can be used for a wide range of different purposes. Yes, the community is predominantly building applications for the web, but Node.js is also a useful tool for those working in ops or infrastructure. This is because Node.js is a great tool for developing other development tools. If you're someone working to support a team of developers, or, indeed, to help manage an entire distributed software infrastructure, it could be vital in empowering you to get creative and build your own support tools.

Even more surprisingly, Node.js can be used in some IoT projects. As this post from 2016 suggests, the two things might not be quite such strange bedfellows.

Node.js is a robust project that won't be going anywhere

As I've already said, in the JavaScript world frameworks and tools can appear and disappear quickly. That means deciding what to learn, and, indeed, what to integrate into your stack, can feel like a bit of a gamble. However, you can be sure that Node.js is here to stay. There are a number of reasons for this. For starters, there's no other tool that brings JavaScript to the server. But more than that, with Google betting heavily on V8 - which is, as we've seen, such an important part of the project - you can be sure it's only going to go from strength to strength.

It's also worth pointing out that Node.js went through a small crisis when io.js broke away from the main Node.js project. This feud was as much personal as it was technical, but the rift has healed, and the Node.js Foundation now manages the whole project, helping to ensure that the software continues to evolve alongside other relevant technological changes and that the needs of the developers who use it continue to be met.

Conclusion: spend some time exploring Node.js before you begin using it at work

Those are just 5 reasons why you should learn Node.js. You could find more, but broadly speaking these all underline its importance in today's development world. Before you dive in, though, there's a caveat. If Node.js isn't yet right for you, don't assume that it's going to fix any technological or cultural issues that have been causing you headaches. It probably won't. In fact, you should probably tackle those challenges before deciding to use it.
That all being said, even if you don't think it's the right time to use Node.js professionally, that doesn't mean it isn't worth learning. As you can see, it's well worth your time. Who knows where it might take you? Ready to begin learning? Purchase Node.js Complete Reference Guide or read it for free with a free subscription trial.

5 blog posts that could make you a better Python programmer

Sam Wood
11 Feb 2019
2 min read
Python is one of the most important languages to master. It's top rated, fast growing, and in demand by businesses around the globe. There's a host of excellent insight across the web about how to become a better programmer with Python. Here are five blogs we think you need to read to upgrade your skills and knowledge.

1. A Brief History of Python

Did you know Python is actually older than Java, R and JavaScript? If you want to be a better Python programmer, it pays to know your history. This quick blog post takes you through the language's journey from Christmas hobby project to its modern ascendancy with version 3.

2. Do you write Python Code or Pythonic Code?

Are you writing code in Python, or code for Python? When people talk about Pythonic code, they mean code that uses Python idioms well - code that is natural and displays fluency in the language. Are you writing code like you would write Java or C++? This 4-minute blog post gives quick tips on how to make your code Pythonic.

3. The Singleton Python Design Pattern in Depth

The singleton pattern is a powerful design pattern that ensures only one instance of a class is ever created. You'd generally use it for things like a logging class and its subclasses, managing a connection to a database, or read-only singletons that store global state. This in-depth blog post takes you through the three principal ways to implement singletons, for better Python code.

4. Why is Python so good for artificial intelligence and machine learning? 5 Experts Explain.

Python is the breakout language of data, zooming ahead of rival R to become dominant in the field of artificial intelligence and machine learning. But what is it about the programming language that makes it so well suited for this fast-growing field? In this blog post, five artificial intelligence experts weigh in on what they think makes Python perfect for AI and machine learning.

5. Top 7 Python Programming Books You Need To Read

That's right - we put a list in our list. But if you really want to become a better Python programmer, you'll want to get to grips with this stack of amazing Python books. Whether you're a complete beginner or more experienced, these seven Python titles are the perfect way to upgrade your knowledge.
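The singleton idea in item 3 is language-agnostic; as a quick illustration (the linked post covers the Python implementations, while this sketch uses TypeScript, and the class name and connection URL are invented for the example):

```typescript
// A minimal singleton sketch: a class that only ever hands out one shared instance,
// e.g. a single database connection manager.
class ConnectionManager {
  private static instance: ConnectionManager | null = null;

  // The private constructor prevents `new ConnectionManager()` from outside the class.
  private constructor(readonly url: string) {}

  static get(url = "postgres://localhost/app"): ConnectionManager {
    if (ConnectionManager.instance === null) {
      ConnectionManager.instance = new ConnectionManager(url);
    }
    return ConnectionManager.instance; // every caller shares the same object
  }
}

const a = ConnectionManager.get();
const b = ConnectionManager.get();
console.log(a === b); // true - there is only ever one instance
```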

6 signs you need containers

Richard Gall
05 Feb 2019
9 min read
I'm not about to tell you containers are a hot new trend - clearly, they aren't. Today, they are an important part of the mainstream software development industry that probably won't be disappearing any time soon. But while containers certainly can't be described as a niche or marginal way of deploying applications, they aren't necessarily ubiquitous. There are still developers and development teams yet to fully appreciate the usefulness of containers. You might know them - you might even be one of them.

Joking aside, there are often many reasons why people aren't using containers. Sometimes these are good reasons: maybe you just don't need them. Often, however, you do need them, but the mere thought of changing your systems and workflow can feel like more trouble than it's worth. If everything seems to be (just about) working, why shake things up? Well, I'm here to tell you that more often than not it is worthwhile. But to know that you're not wasting your time and energy, there are a few important signs that can tell you if you should be using containers. Download Containerize Your Apps with Docker and Kubernetes for free, courtesy of Microsoft.

Your codebase is too complex

There are few developers in the world who would tell you that their codebase couldn't do with a little pruning and simplification. But if your code has grown into a beast that everyone fears and doesn't really understand, containers could probably help you a lot.

Why do containers help simplify your codebase? Let's think about how spaghetti code actually happens. Yes, it always happens by accident, but usually it's something that evolves out of years of solving intractable problems, with knock-on effects and consequences that then need to be solved later. By using containers you can begin to think differently about your code. Instead of everything being tied up together, like a complex concrete network of road junctions, containers allow you to isolate specific parts of it. When you can better isolate your code, you can also isolate different problems and domains. This is one of the reasons that containers are so closely aligned with microservices.

Software testing is nightmarish

The efficiency benefits of containers are well documented, but the way containers can help the software testing process is often underplayed - which probably says more about a general inability to treat testing with the respect and time it deserves than anything else.

How do containers make testing easier? There are a number of reasons containers make software testing easier. On the one hand, by using containers you're reducing the gap between the development environment and production, which means you shouldn't be faced with as many surprises once your code hits production as you sometimes might. Containers also make the testing process faster - you only need to test against a container image; you don't need a fully-fledged testing environment for every application you test. What this all boils down to is that testing becomes much quicker and easier. In theory, then, the testing process fits much more neatly within the development workflow. Code quality should never be seen as a bottleneck; with containers it becomes much easier to embed the principle in your workflow.

Read next: How to build 12 factor microservices on Docker
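As a rough illustration of testing against a container image rather than a full environment, here is a minimal TypeScript sketch that shells out to the Docker CLI. The image, port, and password are placeholder assumptions for the example, not anything from the article.

```typescript
import { execSync } from "node:child_process";

// Start a throwaway Postgres container, run a test against it, then stop it.
// The image, host port and password below are illustrative placeholders.
function withPostgres(test: (port: number) => void): void {
  const port = 55432;
  const id = execSync(
    `docker run -d --rm -e POSTGRES_PASSWORD=test -p ${port}:5432 postgres:15`
  ).toString().trim();
  try {
    // In a real test you would wait for the database to accept connections first.
    test(port);
  } finally {
    execSync(`docker stop ${id}`); // --rm removes the container once it stops
  }
}

withPostgres((port) => {
  console.log(`running integration tests against localhost:${port}`);
  // ...connect and make assertions here
});
```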
Your software isn't secure - you've had breaches that could have been prevented

Spaghetti code and a lack of effective testing can lead to major security risks. If no one really knows what's going on inside your applications and inside your code, it's inevitable that you'll have vulnerabilities. And, in turn, it's highly likely these vulnerabilities will be exploited.

How can containers make software security easier? Because containers allow you to make changes to parts of your software infrastructure (rather than requiring wholesale changes), security patches become much easier to apply. Essentially, you can isolate the problem and tackle it. Without containers, it becomes harder to isolate specific pieces of your infrastructure, which means any changes could have a knock-on effect on other parts of your code that you can't predict.

That all being said, it probably is worth mentioning that containers do still pose a significant set of security challenges. While simplicity in your codebase can make testing easier, you are replacing simplicity at that level with increased architectural complexity. To really feel the benefits of container security, you need a strong sense of how your container deployments are working together and how they might interact.

Your software infrastructure is expensive (you feel the despair of vendor lock-in)

Running multiple virtual machines can quickly get expensive. In terms of both storage and memory, if you want to scale up, you're going to be running through resources at a rapid rate. While you might end up spending big on more traditional compute resources, the tools around container management and automation are getting cheaper. One of the costs of many organizations' software infrastructure is lock-in. This isn't just about price, it's about the restrictions that come with sticking with a certain software vendor - you're spending money on software systems that are almost literally restricting your capacity for growth and change.

How do containers solve the software infrastructure problem and reduce vendor lock-in? Traditional software infrastructure - whether that's on-premise servers or virtual ones - is a fixed cost: you invest in the resources you need, and then either use them or you don't. With containers running in, say, the cloud, it becomes a lot easier to manage your software spend alongside strategic decisions about scalability. Fundamentally, it means you can avoid vendor lock-in. Yes, you might still be paying a lot of money for AWS or Azure, but because containers are much more portable, moving your applications between providers is much less hassle and risk.

Read next: CNCF releases 9 security best practices for Kubernetes, to protect a customer's infrastructure

DevOps is a war, not a way of working

Like containers, DevOps could hardly be considered a hot new trend any more. But this doesn't mean it's now part of the norm. There are plenty of organizations that simply don't get DevOps, or, at the very least, seem to be stumbling their way through sprint meetings with little real alignment between development and operations. There could be multiple causes for this conflict (maybe people just don't get on), but DevOps often fails where the code that's being written and deployed is too complicated for anyone to properly take accountability for it. This takes us back to the issue of the complex codebase. Think of it this way - if code is a gigantic behemoth that can't be easily broken up, the unintended effects and consequences of every new release and update can cause some big problems - both personally and technically.

How do containers solve DevOps challenges?
Containers can help solve the problems that DevOps aims to tackle by breaking up software into different pieces. This means that developers and operations teams have much more clarity on what code is being written and why, as well as what it should do. Indeed, containers arguably facilitate DevOps practices more effectively than DevOps proponents managed in pre-container years.

Adding new product features is a pain

The issue of adding features or improving applications is a complaint that reaches far beyond the development team. Product management, marketing - these departments will all bemoan the inability to make necessary changes or add new features that they argue are business critical. Often, developers will take the heat. But traditional monolithic applications make life difficult for developers - you simply can't make changes or updates easily. It's like wanting to replace a radiator and having to redo your house's plumbing. This actually returns us to the earlier point about DevOps - containers make DevOps easier because they enable faster delivery cycles. You can make changes to an application at the level of a container or set of containers. Indeed, you might even simply kill one container and replace it with a new one. In turn, this means you can change and build things much more quickly.

How do containers make it easier to update or build new features? To continue with the radiator analogy: containers would allow you to replace or change an individual radiator without having to gut your home. Essentially, if you want to add a new feature or change an element, you don't need to go into your application and make wholesale changes that may have unintended consequences; instead, you can simply make the change by running the resources you need inside a new container (or set of containers).

Watch for the warning signs

As with any technology decision, it's well worth paying careful attention to your own needs and demands. So, before fully committing to containers, or containerizing an application, keep a close eye on the signs that they could be a valuable option. Containers may well force you to come face to face with the reality of technical debt - and if they do, so be it. There's no time like the present, after all. Of course, all of the problems listed above are ultimately symptoms of broader issues or challenges you face as a development team or wider organization. Containers shouldn't be seen as a sure-fire corrective, but they can be an important element in changing your culture and processes. Learn how to containerize your apps with a new eBook, free courtesy of Microsoft. Download it here.

Why don't you have a monorepo?

Viktor Charypar
01 Feb 2019
27 min read
You've probably heard that Facebook, Twitter, Google, Microsoft, and other tech industry behemoths keep their entire codebase - all services, applications and tools - in a single huge repository: a monorepo. If you're used to the standard way most teams manage their codebase - one application, service or tool per repository - this sounds very strange. Many people conclude it must only solve problems the likes of Google and Facebook have.

This is a guest post by Viktor Charypar, Technical Director at Red Badger.

But monorepos are not only useful if you can build a custom version control system to cope. They actually have many advantages even at a smaller scale that standard tools like Git handle just fine. Using a monorepo can result in fewer barriers in the software development lifecycle. It can allow faster feedback loops, less time spent looking for code, and less time reporting bugs and waiting for them to be fixed. It also makes it much easier to analyze a huge treasure trove of interesting data about how your software is actually built and where problem areas are.

We've used a monorepo at one of our clients for almost three years and it's been great. I really don't see why you wouldn't. But roughly every two months I tend to have a conversation with someone who's not used to working in this way, and the entire idea just seems totally crazy to them. And the conversation tends to always follow the same path, starting with the sheer size and quickly moving on to dependency management, testing and versioning strategies. It gets complicated. It's time I finally wrote down a coherent reasoning behind why I believe monorepos should be the default way we manage a codebase - especially if you're building something even vaguely microservices based, you have multiple teams and you want to share common code.

What do you mean "just one repo"?

Just so we're all thinking about the same thing: when I say monorepo, I'm talking about a strategy of storing all the code you as an organization are responsible for - whether that's a project, a programme of work, or the entirety of your company's product and infrastructure code - in a single repository, under one revision history. Individual components (libraries, services, custom tools, infrastructure automation, ...) are stored alongside each other in folders. It's analogous to the UNIX file tree, which has a single root, as opposed to the multiple, device-based roots in Windows operating systems.

People not familiar with the concept typically have a fairly strong reaction to the idea. One giant repo? Why would anyone do that? That cannot possibly scale! Many different objections come out, most of them often only tangentially related to storing all the code together. Occasionally, people get almost religious about it (I am talking about engineers after all). Despite being used by some of the largest tech companies, it is a relatively foreign concept and on the surface goes against everything you've been taught about not making huge monolithic things. It also seems like we're fixing things that are not broken: everyone in the world is doing multiple repos, building and sharing artifacts (npm modules, JARs, ruby gems…), using SemVer to version and manage dependencies and long-running branches to patch bugs in older versions of code, right? Surely if it's industry standard it must be the right thing to do.

Well, I don't believe so.
I personally think almost every single one of those things is harder, more laborious, more brittle, harder to test and generally just more wasteful than the equivalent approach you get as a consequence of a monorepo. And a few of the capabilities a monorepo enables can't be replicated in a multi-repo situation even if you build a lot of infrastructure around it, basically because you introduce distributed computing problems and get on the bad side of the CAP theorem (we'll look at this more closely below). Apart from letting you make dependency management easier and testing more reliable than it can get with multiple repos, a monorepo will also give you a few simple, important, but easy-to-underestimate advantages.

The biggest advantages of using a monorepo

It's easier to find and discover your code in a monorepo

With a monorepo, there is no question about where all the code is, and when you have access to some of it, you can see all of it. It may come as a surprise, but making code visible to the rest of the organization isn't always the default behavior. Human insecurities get in the way and people create private repositories and squirrel code away to experiment with things "until they are ready". Typically, when the thing does get "ready", it now has a Continuous Integration (CI) service attached, many hyperlinks lead to it from emails, chat rooms and internal wikis, several people have cloned the repo, and it's now quite a hassle to move the code to a more visible, obvious place - and so it stays where it started. As a consequence, it is often quite hard work to find all the code for a project and gain access to it, which is hard and expensive for new joiners and hurts collaboration in general. You could say this is a matter of discipline, and I will agree with you, but why leave to individual discipline what you can simply prevent by telling everyone that all the code belongs in the one repo, and that it's completely OK to put even little experiments and pet projects there? You never know what they will grow into, and putting them in the repo has basically no cost attached.

Visibility aids understanding of how to use internal APIs (internal in the sense of being designed and built by your organisation). The ability to search the entire codebase from within your editor and find usages of the call you're considering using is very powerful. Code editors and languages can also be set up for cross-references to work, which means you can follow references into shared libraries and find usages of shared code across the codebase. And I mean the entire codebase.

This also enables all kinds of analyses to be done on the codebase and its history. Knowing the totality of the codebase and having a history of all the code lets you see dependencies, find parts of the codebase only committed to by a very limited group of people, hotspots changing suspiciously frequently or by a large number of people… Your codebase is the source of truth about what your engineering organization is producing; it contains an incredible amount of interesting information we typically just ignore.

Monorepos give you more flexibility when moving code

Conway's Law famously states that "organizations which design systems (...) are constrained to produce designs which are copies of the communication structures of these organisations". This is due to the level of communication necessary to produce a coherent piece of software.
The further away in the organisation an owner of a piece of software is, the harder it is to directly influence it, so you design strict interfaces to insulate yourself from the effect of "their" changes. This typically affects the repository structure as well. There are two problems with this: the structure is generally chosen upfront, before we know what the right shape of the software is, and changing the structure has a cost attached. With each service and library being in a separate repository, the module boundaries are quite a lot stronger than if they are all in one repository. Extracting common pieces of code into a shared library becomes more difficult and involves setting up a whole new repository - complete with CI integration, pull request templates and labels, access control setup… hard work. In a monorepo, these boundaries are much more fluid and flexible: moving code between services and libraries, extracting new ones or inlining libraries back into their consumers all become as easy as general refactoring. There is no reason to use a completely different set of tools to change the small-scale and the large-scale structure of your codebase.

The only real downside is tooling support for access control and declaring ownership. However, as monorepos get more popular, this support is getting better. GitHub now supports codeowners, for example. We will get there.

A monorepo gives you a single history timeline

While visibility and flexibility are quite convenient, the one feature of a monorepo which is very hard (if not impossible) to replicate is the single history timeline. We'll go into why it's so hard further below, but for now let's look at the advantages it brings. A single history timeline gives us a reliable total order of changes to the codebase over time. This means that for any two contributions to the codebase, we can definitively and reliably decide which came first and which came second. It should never be ambiguous. It also means that each commit in a monorepo is a snapshot of the system as it was at that given moment. This enables a really interesting capability: it means cross-cutting changes can be made atomically, safely, in one go.

Atomic cross-cutting commits

Atomic cross-cutting commits make two specific scenarios much easier to achieve. First, externally forced global migrations are much easier and quicker. Let's say multiple services use a common database and need the password, and we need to rotate it. The password itself is (hopefully!) stored in a secure credential store, but at least the reference to it will be in several different places within the codebase. If the reference changes (let's say the reference is generated every time), we can update every mention of it at once, in one commit, with a search & replace. This will get everything working again.

Second, and more important, we can change APIs and update both the producer and all consumers at the same time, atomically. For example, we can add an endpoint to an API service and migrate consumers to use the new endpoint. In the next commit, we can remove the old API endpoint as it's no longer needed. If you're trying to do this across multiple repositories with their own histories, the change will have to be split into several parallel commits. This leaves the potential for the two changes to overlap and happen in the wrong order: some consumers get migrated, then the endpoint gets removed, then the rest of the consumers get migrated.
The mid-stage is an inconsistent state, and an attempt to use the not-yet-migrated consumers will fail when they call an API endpoint that no longer exists.

Monorepos remove inconsistencies in your dependencies

Inconsistencies between dependent modules are the entire reason why dependency management and versioning exist. In a monorepo, the above scenario simply can't happen. And that's how the conversation about storing code in one place ends up being about versioning and dependency management. Monorepos essentially make the problem go away (which is my favourite kind of problem solving).

Okay, this isn't entirely true. There are still consequences to making breaking changes to APIs. For one, you need to update all the consumers, which is work, but you also need to build all of them, test everything works and deploy it all. This is quite hard for (micro)services that get individually deployed: making coordinated deployment of multiple services atomic is possible but not trivial. You can use a blue-green strategy, for example, to make sure there is no moment in time where only some services changed but not others. It gets harder for shared libraries. Building and publishing artifacts of new versions and updating all consumers to use the new version are still at least two commits; otherwise you'd be referring to versions that won't exist until the builds finish. Now things are getting inconsistent again, and the view of what is supposed to work together is getting blurred in time again. And what if someone sneaks some changes in between the two commits? We are, once again, in a race. Unless...

Building from the latest source

Yes. What if, instead of building and publishing shared code as prebuilt artifacts (binaries, jars, gems, npm modules), we build each deployable service completely from source? Every time a service changes, it is entirely rebuilt, including all dependencies. This is a fair bit of work for some compiled languages. However, it can be optimised with incremental build tools which skip work that's already been done and cached. Some, like Go, solve it by simply being designed around a fast compiler. For dynamic languages, it's just a matter of setting up include paths correctly and bundling all the relevant code. The added benefit here is you don't need to do anything special if you're working on a set of interdependent projects locally. No more `npm link`.

The more interesting consequence is how this affects changing a shared dependency. When building from source, you have to make sure every time that happens, all the consumers get rebuilt using it. This is great, everyone gets the latest and greatest things immediately. ...right? Don't worry, I can hear the alarm bells ringing in your head all the way from here. Your service depends on tens, if not hundreds, of libraries. Any time anyone makes a mistake and breaks any of them, it breaks your code? Hell no. But hear me out. This is a problem of understanding dependencies and testing consumers. The important consequence of building from source is you now have a single handle on what is supposed to work together. There are no separate versions; you know what to test, and it's just one thing at any one time.

Push dependency management

In manual dependency update management - I will call it "pull" dependency management - you as a consumer are responsible for updating your dependencies as you see fit and making sure everything still works. If you find a bug, you simply don't upgrade.
Instead, you report the bug to the maintainer and expect them to fix it. This can be months after the bug was introduced, and the bug may have already been fixed in a newer version you haven't yet upgraded to, because things have moved on quite a bit while you were busy hitting a deadline and it would now be a sizable investment to upgrade. Now you're a little stuck, and all the ways out are a significant amount of work for someone, all because the feedback loop is too long.

Normally, as a library maintainer, you're never quite certain how to make sure you're not breaking anything. Even if you could run your consumers' test suites, which consumers at what versions do you test against? And as a DevOps team doing 24/7 support for a system, how do you know which version or versions of a library are used across your services? What do you need to update to roll out that important bug fix to your customers?

In push dependency management, quite a few things are the other way round. As a consumer, you're not responsible for updating - it is done for you; effectively, you depend on the "latest" version of everything. Every time a maintainer of the library makes a change, you are responsible for testing for regressions. No, not manually! You do have unit tests, right? Right?? Please have a solid regression test suite you trust, it's 2019. So with your unit test suite, all you need to do is run it. Actually no, let's let the maintainer run it. If they introduce a problem, they get immediate feedback from you, before their change ever hits the master branch. And this is the contract agreement in push dependency management: if you make a change and break anyone, you are responsible for fixing them. They are responsible for supplying a good enough, by their own standard, automated mechanism for you to verify things still work. The definition of "works" is that the tests pass. Seriously though, you need to have a decent regression test suite!

Continuous integration for push dependencies: the final piece of the monorepo puzzle

The main missing piece of tooling around monorepos is support for push dependencies in CI systems. It's quite straightforward to implement the strategy yourself, but it's still hard enough to be worth some shared tooling. Unfortunately, the existing build tools geared towards monorepos, like Bazel and Buck, take over the entire build process from more familiar tools (like Maven or Babel) and you need to switch to them. Although to be fair, in exchange, you get very performant incremental builds. A lighter tool - one which lets you express dependencies between components in a monorepo in a language-agnostic way, and is only responsible for deciding which build jobs need to be triggered given a set of changed components in a commit - seems to be missing. So I built one. It's far from perfect, but it should do the trick. Hopefully, someone with more time on their hands will eventually come up with something similarly cheap to introduce into your build system and the community will adopt it more widely.

The main takeaway is that if we build from source in a monorepo, we can set up a central Continuous Integration system responsible for triggering builds for all projects potentially affected by a change, to make sure you didn't break anything with the work you did, whether it belongs to you or someone else. This is next to impossible in a multi-repo codebase because of the blurriness of history mentioned above.
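To illustrate the kind of "lighter tooling" described above, here is a minimal TypeScript sketch (not the author's actual tool) of the core decision: given the components changed in a commit and a declared dependency graph, compute everything that needs rebuilding. The component names are invented for the example.

```typescript
type DepGraph = Map<string, string[]>; // component -> components it depends on

function affectedBy(changed: Set<string>, graph: DepGraph): Set<string> {
  // Invert the graph: component -> components that depend on it.
  const dependents = new Map<string, string[]>();
  for (const [component, deps] of graph) {
    for (const dep of deps) {
      const list = dependents.get(dep) ?? [];
      list.push(component);
      dependents.set(dep, list);
    }
  }

  // Walk the reverse edges from every changed component.
  const affected = new Set<string>(changed);
  const queue = [...changed];
  while (queue.length > 0) {
    const current = queue.pop()!;
    for (const consumer of dependents.get(current) ?? []) {
      if (!affected.has(consumer)) {
        affected.add(consumer);
        queue.push(consumer);
      }
    }
  }
  return affected;
}

// Example: a change to the shared "auth-lib" should rebuild both services that use it.
const graph: DepGraph = new Map([
  ["auth-lib", []],
  ["orders-service", ["auth-lib"]],
  ["payments-service", ["auth-lib", "orders-service"]],
]);
console.log(affectedBy(new Set(["auth-lib"]), graph));
// Set { "auth-lib", "orders-service", "payments-service" }
```

A CI system would run something like this against the list of changed folders in each commit and trigger one build job per affected component.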
It's interesting to me that we have this problem today in the larger ecosystem. Everywhere. And we stumble forward and somewhat successfully live with upstream changes occasionally breaking us, because we don't really have a better choice. We don't have the ability to test all the consumers in the world and fix things when we break them. But if we can, at least for our own codebase, why wouldn't we do that? Along with a "you broke it, you fix it" policy. Building from source in a monorepo allows that. It also makes breaking changes significantly harder to make by accident. That said...

About breaking changes

There are two kinds of changes that break the consumer - the ones you introduce by accident, intending to keep backwards compatibility, and then the intentional ones. The first kind should not be too laborious to fix: once you find out what's wrong, fix it in one place, make sure you didn't break anything else, done. The second kind is harder. If you absolutely have to make an intentional breaking change, you will need to update all the consumers. Yes. That's the deal. And that's also fair. I'm not sure why we're okay with breaking changes being intentionally introduced upstream on a whim. In any other area of human endeavour, a breach of contract makes people angry and they will expect you to make good by them. Yet we accept breaking changes in software as a fact of life. "It's fine, I bumped the major version!"

Semantic versioning: a bad idea

It's not fine. In fact, semantic versioning is just a bad idea. I know that's a bold claim and this is a whole separate article (which I promise to write soon), but I'll try to do it at least some justice here. Semantic versioning is meant to convey some meaning with the version number, but the meanings it chooses to express are completely arbitrary. First of all, semver only talks about the API contract, not behaviour. Adding side effects or changing the performance characteristics of an API call for the worse while keeping the data interface the same is a completely legal patch-level change. And I bet you consider that a breaking change, because it will break your app. Second, does anyone really care about minor vs. patch? The promise is that the API doesn't break. So really we only care about major or other. Major is a dealbreaker; otherwise we're ok. From a consumer perspective, a major version bump spells trouble and potentially a lot of work.

Making breaking changes is a mean thing to do to your consumers and you can and should avoid them. Just keep the old API around and working and add the new one next to it. As for version numbers, the most important meaning to convey seems to be "how old?", because code tends to rot, and so versioning by date might be a good choice. But, you say, I'll have more and more code to maintain! Well yes, of course. And that's the other problem with semver: the expectation that even old versions still get patches and fixes. It's not very explicitly stated, but it's there. And because we typically maintain old versions on long-running branches in version control, it's not even very visible in the codebase. What if you kept older APIs around, but deprecated them, and the answer to bugs in them was to migrate to the newer version of a particular call, which doesn't have the bug? Would you care about having that old code? It just sits there in the codebase, until nobody uses it. It would also be much less work for the consumer: it's just one particular call.
Also, the bug is typically deeper inside your code, so it's actually more likely you can fix it in one go for all the API surfaces, old or new. Doing the same thing in the branching model is N times the work (for N maintenance branches). There are technologies that follow this model out of necessity. One example is GraphQL, which was built to solve (among other things) the problem of many old API consumers in people's hands and the need to support all of them for at least some time. In GraphQL, you deprecate data fields in your API and they become invisible in documentation and introspection calls, but they still work as they used to. Possibly forever. Or at least until barely anyone uses them.

The other option you have, if you want to keep an older version of a library around and maintain it in a monorepo, is to make a copy of the folder and work on the two separately. It's the same thing as cutting a long-running branch; you're just making a copy in "file space", not "branch space". And it's more visible and representative of reality - both versions exist as first-class components being maintained.

There are many different versioning and maintenance strategies you could adopt, but in my opinion the preference should be to invest effort into the latest version, making breaking changes only when absolutely inevitable (and at that point, isn't the new version just a new thing? Like Luxon, the next version of Moment.js) and making updates trivial for your consumers. And if it's trivial, you can do it for them. Ultimately it was your decision to break the API, so you should also do the work; it's only fair, and it makes you evaluate the cost-benefit trade-off of the change.

In a monorepo with building from source, this versioning strategy happens naturally. You can, however, adopt others. You just lose some of the guarantees and make feedback loops longer. Versioning strategy is really an orthogonal concept to storing code in a single repository, but the relative costs do change if you use one. Versioning with a single version that cuts across the system becomes a lot cheaper, which means breaking changes become more expensive. This tends to lead to more discussions about versioning. This is actually true for most of the things we covered above: you can, but don't have to, adopt these strategies with a monorepo.

Pay as you go monorepos

It's totally possible to just store everything in a single repo and not do anything else. You'll get the visibility of what exists and the flexibility of boundaries and ownership. You can still publish individual build artifacts and pull-manage dependencies (but you'd be missing out). Add building from source and you get the single snapshot benefit - you now know what code runs in a particular version of your system and, to an extent, you can think about it as a monolith, despite it being formed of many different independent modules and services. Add dependency-aware continuous integration and the feedback loop around issues introduced while working on the codebase gets much, much shorter, allowing you to go faster and waste less time on carefully managing versions, reporting bugs, making big forklift upgrades, etc. Things tend to get out of control much less. It's simpler.

Best of all, you can mix and match strategies. If you have a hugely popular library in your monorepo and each change in it triggers a build of hundreds of consumers, it only takes a couple of those builds being flaky to make it very hard to get changes to the library to pass.
This is really a CI problem to fix (and there are so many interesting strategies out there), but sometimes you can't do that easily. You could also say the feedback loop is now too tight for the scale and start versioning the library's intentional releases. This still doesn't mean you have to publish versioned artifacts. You can have a stable copy of the library in the repo, which consumers depend on, and a development copy next to it which the maintainers work on. Releasing then means moving changes from the development folder to the release one and getting its builds to pass. Or, if you wish, you can publish artifacts and let consumers pull them on their own time and report bugs to you. And you still don't need to promise fixes for older versions without upgrading to the latest (libraries should really have a published "code of maintenance" outlining the promises and setting maintenance expectations). And if you have to, I would again recommend making a copy, not branching.

In fact, in a monorepo, branching might just not be a very good idea. Temporary branches are still useful to work on proposed changes, but long-running branches just hide the full truth about the system. And so does relying on old commits. The copies of code being used exist either way; they are still relevant and you still need to consider them for testing and security patching. They are just hidden in less apparent dimensions of the codebase "space" - the branch dimension or the time dimension. These are hard to think about and visualise, so maybe it's not a good idea to use them to keep relevant, current versions of the code; better to stick to using them as change-proposal and "time travel" mechanisms. Hopefully you can see that there's an entire spectrum of strategies you can follow but don't have to adopt wholesale.

I'm sold, but... can't we do all this with a multi-repo?

Most of the things discussed above are not really strictly dependent on monorepos; they are more a natural consequence of adopting one. You can follow versioning strategies other than semver outside of a monorepo. You can probably implement an automated version bumping system which will upgrade all the dependents of a library and test them, logging issues if they don't pass. What you can't do outside of a monorepo, as far as I can tell, is the atomic snapshotting of history to have a clear view of the system. And have the same view of the system a year ago. And be able to reproduce it. As soon as multiple parallel version histories are established, this ability goes away and you introduce distributed state. It's impossible to update all the "heads" in this multi-history at the same time, consistently.

In version control systems like git, the histories are ordered by the "follows" relationship: a later version follows - points to - its predecessor. To get a consistent, canonical view of time, there needs to be a single entry point. Without this central entry point, it's impossible to define a consistent order across the entire set; it depends on where we start looking. Essentially, you have already chosen Partitioning from the three CAP properties. Now you can pick either Consistency or Availability. Typically, availability is important, and so you lose consistency. You could choose consistency instead, but that would mean you can't have availability - in order to get a consistent snapshot of the state of all of the repos, write access would need to be stopped while the snapshot is taken. In a monorepo, you don't have partitioning, and can therefore have both consistency and availability.
From a physics perspective, multiple repositories with their own histories effectively create a kind of spacetime, where each repository is a place and references across repos represent information propagating across space. The speed of that propagation isn't infinite - it's not instant. If changes happen in two places close enough in time, then from the perspective of those two places they happen in a globally inconsistent order: first the local change, then the remote change. Neither of the views is better or more true, and it's impossible to decide which of the changes came first. Unless, that is, we introduce an agreed-upon central point which references all the repositories that exist, and every time one of them updates, the reference in this master gets updated and a revision is created. Congratulations, we created a monorepo. Well done us.

The benefits of going all-in when it comes to monorepos

As I said at the beginning, adopting the monorepo approach fully will result in fewer barriers in the software development lifecycle. You get faster feedback loops - the ability to test consumers of libraries before checking in a change, and immediate feedback. You will spend less time looking for code and working out how it gets assembled together. You won't need to set up repositories or ask for permissions to contribute. You can spend more time solving problems to help your customers instead of problems you created for yourself. It takes some time to get the tooling set up right, but you only do it once; all the later projects get the setup for free. Some of the tooling is a little lacking, but in our experience there are no show stoppers. A stable, reliable CI is an absolute must, but that's true regardless of monorepos. Monorepos should also help make builds repeatable.

The repo does eventually get big, but it takes years and years and hundreds of people to get to a size where it actually becomes a real problem. The Linux kernel is a monorepo and it's probably still at least an order of magnitude bigger than your project (it is bigger than ours anyway, despite us having hundreds of engineers involved at this point). Basically, you're not Google or Microsoft. And when you are, you'll be able to afford to optimise your version control system. The UX of your code review and source hosting tooling is probably the first thing that will break, not the underlying infrastructure. For smoother scaling, the one recommendation I have is to set a file size limit - accidentally committed large files are quite hard to remove, at least in git.

After using a monorepo for over two years we're yet to have any big technical issues with it (plenty of political ones, but that's a story for another day), and we see the same benefits as Google reported in their recent paper. I honestly don't really know why you would start with any other strategy.

Viktor Charypar is Technical Director at Red Badger, a digital consultancy based in London. You can follow him on Twitter @charypar or read more of his writing on the Red Badger blog here.

7 things Java programmers need to watch for in 2019

Prasad Ramesh
24 Jan 2019
7 min read
Java is one of the most popular and widely used programming languages in the world. Its dominance of the TIOBE index ranking is unmatched for the most part, holding the number 1 position for almost 20 years. Although Java's dominance is unlikely to waver over the next 12 months, there are many important issues and announcements that will demand the attention of Java developers. So, get ready for 2019 with this list of key things in the Java world to watch out for.

#1 Commercial Java SE users will now need a license

Perhaps the most important change for Java in 2019 is that commercial users will have to pay a license fee to use Java SE from February. This move comes as Oracle decided to change the support model for the Java language. The change currently affects Java SE 8, which is an LTS release with premier and extended support up to March 2022 and March 2025 respectively. For individual users, however, the support and updates will continue till December 2020. The recently released Java SE 11 will also have long-term support, with five years of premier support and eight years of extended support from the release date.

#2 The Java 12 release in March 2019

Since Oracle changed their support model, non-LTS versions will be released every six months and probably won't contain many major changes. JDK 12 is non-LTS, but that is not to say the changes in it are trivial; it comes with its own set of new features. It will be generally available in March this year and supported until September, which is when Java 13 will be released. Java 12 will have a couple of new features; some of them are approved to ship in the March release and some are still under discussion.

#3 Java 13 release slated for September 2019, with early access out now

So far, there is very little information about Java 13. All we really know at the moment is that it's due to be released in September 2019. Like Java 12, Java 13 will be a non-LTS release. However, if you want an early insight, there is an early access build available to test right now. Some of the JEPs (JDK Enhancement Proposals) in the next section may be set to be featured in Java 13, but that's just speculation.

https://twitter.com/OpenJDK/status/1082200155854639104

#4 A bunch of new features in Java in 2019

Even though the major long-term support version of Java, Java 11, was released last year, the releases this year also have some noteworthy new features in store. Let's take a look at what the two releases this year might have.

Confirmed candidates for Java 12

- A new low-pause-time garbage collector called Shenandoah, added to cause minimal interruption when a program is running and to match modern computing resources. Pause times will be largely independent of heap size.
- The Microbenchmark Suite, which will make it easier for developers to run existing microbenchmarks or create new ones.
- Revamped switch statements, which should help simplify the process of writing code. It essentially means the switch statement can also be used as an expression.
- The JVM Constants API, which will, the OpenJDK website explains, "introduce a new API to model nominal descriptions of key class-file and run-time artifacts".
- A single AArch64 port integrated with Java 12, instead of two.
- Default CDS archives.
- Abortable mixed collections for G1.

Other features that may not be out with Java 12

- Raw string literals.
- A Packaging Tool, designed to make it easier to install and run a self-contained Java application on a native platform.
- Limit Speculative Execution, to help both developers and operations engineers more effectively secure applications against speculative-execution vulnerabilities.

#5 More contributions and features with OpenJDK

OpenJDK is an open source implementation of Java Standard Edition (Java SE) which has contributions from both Oracle and the open-source community. As of now, the binaries of OpenJDK are available for the newest LTS release, Java 11. Even the life cycles of OpenJDK 7 and 8 have been extended, to June 2020 and June 2023 respectively. This suggests that Oracle does seem to be interested in the idea of open source and community participation. And why would it not be? Many valuable contributions come from the open source community; Microsoft, for example, seems to have benefited from open sourcing and the contributions that come with it. Although Oracle will not support these versions after six months from initial release, Red Hat will be extending support. As the chief architect of the Java platform, Mark Reinhold, has said, stewards are the true leaders who can shape what Java should be as a language. These stewards can propose new JEPs, bring new OpenJDK problems to notice (leading to more JEPs), and contribute to the language overall.

#6 Mobile and machine learning job opportunities

In the mobile ecosystem, especially Android, Java is still the most widely used language. Yes, there's Kotlin, but it is still relatively new and many developers are yet to adopt it. According to an estimate by Indeed, the average salary of a Java developer is about $100K in the U.S. With the Android ecosystem growing rapidly over the last decade, it's not hard to see what's driving Java's value. But Java - and the broader Java ecosystem - are about much more than mobile. Although Java's importance in enterprise application development is well known, it's also used in machine learning and artificial intelligence. Even if Python is arguably the most used language in this area, Java does have its own set of libraries and is used a lot in enterprise environments. Deeplearning4j, Neuroph, Weka, OpenNLP, RapidMiner, and RL4J are some of the popular Java libraries for artificial intelligence.

#7 Java conferences in 2019

Now that we've talked about the language, possible releases, and new features, let's take a look at the conferences that are going to take place in 2019. Conferences are a good place to hear top professionals present and speak, and for programmers to socialize. Even if you can't attend, they are important fixtures in the calendar for anyone interested in following releases and debates in Java. Here are some of the major Java conferences in 2019 worth checking out:

JAX is a Java architecture and software innovation conference. It will be held in Mainz, Germany, May 6–10 this year, with the Expo running from May 7 to 9. Other than Java, topics like agile, cloud, Kubernetes, DevOps, microservices and machine learning are also part of this event. They're offering discounts on passes till February 14.

JBCNConf is happening in Barcelona, Spain from May 27. It will be a three-day conference with talks from notable Java champions. The focus of the conference is on Java, the JVM, and open-source technologies.

Jfokus is a developer-centric conference taking place in Stockholm, Sweden. It will be a three-day event from February 4-6. Speakers include the Java language architect Brian Goetz from Oracle, and many other notable experts. The conference will cover Java, of course, as well as frontend and web, cloud and DevOps, IoT and AI, and future trends.
One of the biggest conferences is JavaZone, which attracts thousands of visitors and hundreds of speakers and turns 18 this year. It is usually held in Oslo, Norway, in September. Its 2019 website was not active at the time of writing, but you can check out last year's site.

Javaland will feature lectures, training, and community activities. Held in Bruehl, Germany, from March 19 to 21, it also gives attendees the chance to exhibit.

If you're working in or around Java this year, there's clearly a lot to look forward to - as well as a few unanswered questions about the evolution of the language in the future. While these changes might not impact the way you work in the immediate term, keeping on top of what's happening and what key figures are saying will set you up nicely for the future.

4 key findings from The State of JavaScript 2018 developer survey
Netflix adopts Spring Boot as its core Java framework
Java 11 is here with TLS 1.3, Unicode 11, and more updates
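To make the revamped switch mentioned in the Java 12 feature list above a little more concrete, here is a minimal sketch of a switch used as an expression. In JDK 12 this shipped as a preview feature (JEP 325), so it has to be compiled and run with --enable-preview and its details were still subject to change at the time; the class and enum names in the sketch are purely illustrative.

```java
// A minimal sketch of the switch expression previewed in JDK 12 (JEP 325).
// Build and run with --enable-preview on a JDK 12 build.
public class SwitchExpressionSketch {
    enum Day { MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY, SATURDAY, SUNDAY }

    static int letters(Day day) {
        // The switch is used as an expression: each arm yields a value,
        // there is no fall-through, and covering every enum constant
        // means no default branch is needed.
        return switch (day) {
            case MONDAY, FRIDAY, SUNDAY -> 6;
            case TUESDAY                -> 7;
            case THURSDAY, SATURDAY     -> 8;
            case WEDNESDAY              -> 9;
        };
    }

    public static void main(String[] args) {
        System.out.println(letters(Day.WEDNESDAY)); // prints 9
    }
}
```

Compared with the old statement form, there is no break-per-case boilerplate and no risk of accidental fall-through, which is exactly the simplification the proposal is after.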

Github wants to improve Open Source sustainability; invites maintainers to talk about their OSS challenges

Sugandha Lahoti
18 Jan 2019
4 min read
Open source sustainability is an essential and special part of free and open software development. Open source contributors and maintainers build tools and technologies for everyone, but they don't get enough resources, tools, or support in return. If anything goes wrong with a project, it is generally the maintainers who are held responsible for it; in reality, contributors and maintainers share that responsibility equally. Yesterday, Devon Zuegel, the open source product manager at GitHub, penned a blog post talking about open source sustainability and the issues open source maintainers currently face.

The major thing holding back OSS is the work overload that maintainers face. The open source community largely consists of maintainers who work at other organizations and look after their open source projects in their free time. This leaves little room for software creators to gain economically from their projects or to cover the costs and people required to maintain them. This calls for companies and individuals to donate to these maintainers on GitHub. As a Hacker News user points out, "I think this would be a huge incentive for people to continue their work long-term and not just 'hand over' repositories to people with ulterior motives." Another said, "Integrating bug bounties and donations into GitHub could be one of the best things to happen to Open Source. Funding new features and bug fixes could become seamless, and it would sway more devs to adopt this model for their projects."

Another major challenge is the abuse and frustration that maintainers have to deal with on a daily basis. As Devon writes in her blog, "No one deserves abuse. OSS contributors are often on the receiving end of harassment, demands, and general disrespect, even as they volunteer their time to the community." What is required is to educate people and to build some kind of moderation for trolls, such as a small barrier to entry. Apart from that, maintainers should also be given expanded visibility into how their software is used; currently, they only get access to download statistics. There should also be a proper governance model, regularly updated to reflect the decisions the team makes, delegates, and communicates.

As Adam Jacob, founder of SFOSC (Sustainable Free and Open Source Communities), points out, "I believe we need to start talking about Open Source, not in terms of licensing models, or business models (though those things matter): instead, we should be talking about whether or not we are building sustainable communities. What brings us together, as people, in this common effort around the software? What rights do we hold true for each other? What rights are we willing to trade in order to see more of the software in the world, through the investment of capital?" SFOSC was established to discuss the principles that lead to sustainable communities, to develop clear social contracts communities can use, and to educate open source companies on which business models can create true communities.

Like SFOSC, GitHub wants to better understand the woes of maintainers from their own experiences - hence the blog post. Devon wants to support the people behind OSS at GitHub by inviting them into an open dialogue with the GitHub community to solve the nuanced and unique challenges the OSS community currently faces.
She has created a contact form inviting open source contributors and maintainers to join the conversation and share their problems.

Open Source Software: Are maintainers the only ones responsible for software sustainability?
We need to encourage the meta-conversation around open source, says Nadia Eghbal [Interview]
EU to sponsor bug bounty programs for 14 open source projects from January 2019

18 people in tech every programmer and software engineer needs to follow in 2019

Richard Gall
02 Jan 2019
9 min read
After a tumultuous 2018 in tech, it's vital that you surround yourself with a variety of opinions and experiences in 2019 if you're to understand what the hell is going on. While there are thousands of incredible people working in tech, I've decided to make life a little easier for you by bringing together 18 of the best people from across the industry to follow on Twitter. From engineers at Microsoft and AWS, to researchers and journalists, this list is by no means comprehensive but it does give you a wide range of people that have been influential, interesting, and important in 2018. (A few of) the best people in tech on Twitter April Wensel (@aprilwensel) April Wensel is the founder of Compassionate Coding, an organization that aims to bring emotional intelligence and ethics into the tech industry. In April 2018 Wensel wrote an essay arguing that "it's time to retire RTFM" (read the fucking manual). The essay was well received by many in the tech community tired of a culture of ostensible caustic machismo and played a part in making conversations around community accessibility an important part of 2018. Watch her keynote at NodeJS Interactive: https://www.youtube.com/watch?v=HPFuHS6aPhw Liz Fong-Jones (@lizthegrey) Liz Fong-Jones is an SRE and Dev Advocate at Google Cloud Platform, but over the last couple of years she has become an important figure within tech activism. First helping to create the NeverAgain pledge in response to the election of Donald Trump in 2016, then helping to bring to light Google's fraught internal struggle over diversity, Fong-Jones has effectively laid the foundations for the mainstream appearance of tech activism in 2018. In an interview with Fast Company, Fong-Jones says she has accepted her role as a spokesperson for the movement that has emerged, but she's committed to helping to "equipping other employees to fight for change in their workplaces–whether at Google or not –so that I’m not a single point of failure." Ana Medina (@Ana_M_Medina) Ana Medina is a chaos engineer at Gremlin. Since moving to the chaos engineering platform from Uber (where she was part of the class action lawsuit against the company), Medina has played an important part in explaining what chaos engineering looks like in practice all around the world. But she is also an important voice in discussions around diversity and mental health in the tech industry - if you get a chance to her her talk, make sure you take the chance, and if you don't, you've still got Twitter... Sarah Drasner (@sarah_edo) Sarah Drasner does everything. She's a Developer Advocate at Microsoft, part of the VueJS core development team, organizer behind Concatenate (a free conference for Nigerian developers), as well as an author too. https://twitter.com/sarah_edo/status/1079400115196944384 Although Drasner specializes in front end development and JavaScript, she's a great person to follow on Twitter for her broad insights on how we learn and evolve as software developers. Do yourself a favour and follow her. Mark Imbriaco (@markimbriaco) Mark Imbriaco is the technical director at Epic Games. Given the company's truly, er, epic year thanks to Fortnite, Imbriaco can offer an insight on how one of the most important and influential technology companies on the planet are thinking. Corey Quinn (@QuinnyPig) Corey Quinn is an AWS expert. 
As the brain behind the Last Week in AWS newsletter and the voice behind the Screaming in the Cloud podcast (possibly the best cloud computing podcast on the planet), he is without a doubt the go-to person if you want to know what really matters in cloud. The range of guests that Quinn gets on the podcast is really impressive, and sums up his online persona: open, engaged, and always interesting. Yasmine Evjen (@YasmineEvjen) Yasmine Evjen is a Design Advocate at Google. That means that she is not only one of the minds behind Material Design, she is also someone that is helping to demonstrate the importance of human centered design around the world. As the presenter of Centered, a web series by the Google Design team about the ways human centered design is used for a range of applications. If you haven't seen it, it's well worth a watch. https://www.youtube.com/watch?v=cPBXjtpGuSA&list=PLJ21zHI2TNh-pgTlTpaW9kbnqAAVJgB0R&index=5&t=0s Suz Hinton (@noopkat) Suz Hinton works on IoT programs at Microsoft. That's interesting in itself, but when she's not developing fully connected smart homes (possibly), Hinton also streams code tutorials on Twitch (also as noopkat). Chris Short (@ChrisShort) If you want to get the lowdown on all things DevOps, you could do a lot worse than Chris Short. He boasts outstanding credentials - he's a CNCF ambassador, has experience with Red Hat and Ansible - but more importantly is the quality of his insights. A great place to begin is with DevOpsish, a newsletter Short produces, which features some really valuable discussions on the biggest issues and talking points in the field. Dan Abramov (@dan_abramov) Dan Abramov is one of the key figures behind ReactJS. Along with @sophiebits,@jordwalke, and @sebmarkbage, Abramov is quite literally helping to define front end development as we know it. If you're a JavaScript developer, or simply have any kind of passing interest in how we'll be building front ends over the next decade, he is an essential voice to have on your timeline. As you'd expect from someone that has helped put together one of the most popular JavaScript libraries in the world, Dan is very good at articulating some of the biggest challenges we face as developers and can provide useful insights on how to approach problems you might face, whether day to day or career changing. Emma Wedekind (@EmmaWedekind) As well as working at GoTo Meeting, Emma Wedekind is the founder of Coding Coach, a platform that connects developers to mentors to help them develop new skills. This experience makes Wedekind an important authority on developer learning. And at a time when deciding what to learn and how to do it can feel like such a challenging and complex process, surrounding yourself with people taking those issues seriously can be immensely valuable. Jason Lengstorf (@jlengstorf) Jason Lengstorf is a Developer Advocate at GatsbyJS (a cool project that makes it easier to build projects with React). His writing - on Twitter and elsewhere - is incredibly good at helping you discover new ways of working and approaching problems. Bridget Kromhout (@bridgetkromhout) Bridget Kromhout is another essential voice in cloud and DevOps. Currently working at Microsoft as Principal Cloud Advocate, Bridget also organizes DevOps Days and presents the Arrested DevOps podcast with Matty Stratton and Trevor Hess. Follow Bridget for her perspective on DevOps, as well as her experience in DevRel. 
Ryan Burgess (@burgessdryan) Netflix hasn't faced the scrutiny of many of its fellow tech giants this year, which means it's easy to forget the extent to which the company is at the cutting edge of technological innovation. This is why it's well worth following Ryan Burgess - as an engineering manager he's well placed to provide an insight on how the company is evolving from a tech perspective. His talk at Real World React on A/B testing user experiences is well worth watching: https://youtu.be/TmhJN6rdm28 Anil Dash (@anildash) Okay, so chances are you probably already follow Anil Dash - he does have half a million followers already, after all - but if you don't follow him, you most definitely should. Dash is a key figure in new media and digital culture, but he's not just another thought leader, he's someone that actually understands what it takes to actually build this stuff. As CEO of Glitch, a platform for building (and 'remixing') cool apps, he's having an impact on the way developers work and collaborate. 6 years ago, Dash wrote an essay called 'The Web We Lost'. In it, he laments how the web was becoming colonized by a handful of companies who built the key platforms on which we communicate and engage with one another online. Today, after a year of protest and controversy, Dash's argument is as salient as ever - it's one of the reasons it's vital that we listen to him. Jessie Frazelle (@jessfraz) Jessie Frazelle is a bit of a superstar. Which shouldn't really be that surprising - she's someone that seems to have a natural ability to pull things apart and put them back together again and have the most fun imaginable while doing it. Formerly part of the core Docker team, Frazelle now works at GitHub, where her knowledge and expertise is helping to develop the next Microsoft-tinged chapter in GitHub's history. I was lucky enough to see Jessie speak at ChaosConf in September - check out her talk: https://youtu.be/1hhVS4pdrrk Rachel Coldicutt (@rachelcoldicutt) Rachel Coldicutt is the CEO of Doteveryone, a think tank based in the U.K. that champions responsible tech. If you're interested in how technology interacts with other aspects of society and culture, as well as how it is impacting and being impacted by policymakers, Coldicutt is a vital person to follow. Kelsey Hightower (@kelseyhightower) Kelsey Hightower is another superstar in the tech world - when he talks, you need to listen. Hightower currently works at Google Cloud, but he spends a lot of time at conferences evangelizing for more effective cloud native development. https://twitter.com/mattrickard/status/1073285888191258624 If you're interested in anything infrastructure or cloud related, you need to follow Kelsey Hightower. Who did I miss? That's just a list of a few people in tech I think you should follow in 2019 - but who did I miss? Which accounts are essential? What podcasts and newsletters should we subscribe to?

Why moving from a monolithic architecture to microservices is so hard, Gitlab’s Jason Plum breaks it down [KubeCon+CNC Talk]

Amrata Joshi
19 Dec 2018
12 min read
Last week, at the KubeCon+CloudNativeCon North America 2018, Jason Plum, Sr. software engineer, distribution at GitLab spoke about GitLab, Omnibus, and the concept of monolith and its downsides. He spent the last year working on the cloud native helm charts and breaking out a complicated pile of code. This article highlights few insights from Jason Plum’s talk on Monolith to Microservice: Pitchforks Not Included at the KubeCon + CloudNativeCon. Key takeaways “You could not have seen the future that you live in today, learn from what you've got in the past, learn what's available now and work your way to it.” - Jason Plum GitLab’s beginnings as the monolithic project provided the means for focused acceleration and innovation. The need to scale better and faster than the traditional models caused to reflect on our choices, as we needed to grow beyond the current architecture to keep up. New ways of doing things require new ways of looking at them. Be open minded, and remember your correct choices in the past could not see the future you live in. “So the real question people don't realize is what is GitLab?”- Jason Plum Gitlab is the first single application to have the entire DevOps lifecycle in a single Interface. Omnibus - The journey from a package to a monolith “We had a group of people working on a single product to binding that and then we took that, we bundled that. And we shipped it and we shipped it and we shipped it and we shipped it and all the twenties every month for the entire lifespan of this company we have done that, that's not been easy. Being a monolith made that something that was simple to do at scale.”- Jason Plum In the beginning it was simple as Ruby on Rails was on a single codebase and users had to deploy it from source. Just one gigantic code was used but that's not the case these days. Ruby on Rails is still used for the primary application but now a shim proxy called workhorse is used that takes the heavy lifting away from Ruby. It ensures the users and their API’s are are responsive. The team at GitLab started packaging this because doing everything from source was difficult. They created the Omnibus package which eventually became the gigantic monolith. Monoliths make sense because... Adding features is simple It’s easy as everything is one bundle Clear focus for Minimum Viable Product (MVP) Advantages of Omnibus Full-stack bundle provides all components necessary to use every feature of GitLab. Simple to install. Components can be individually enabled/disabled. East to distribute. Highly controlled, version locked components. Guaranteed configuration stability. The downsides of monoliths “The problem is this thing is massive” - Jason Plum The Omnibus package can work on any platform, any cloud and under any distribution. But the question is how many of us would want to manage fleets of VMs? This package has grown so much that it is 1.5 gigabytes and unpacked. It has all the features and is still usable. If a user downloads 500 megabytes as an installation package then it unpacks almost a gigabyte and a half. This package contains everything that is required to run the SaaS but the problem is that this package is massive. “The trick is Git itself is the reason that moving to cloud native was hard.” - Jason Plum While using Git, the users run a couple of commands, they push them and deploy the app. But at the core of that command is how everything is handled and how everything is put together. Git works with snapshots of the entire file. 
The number of files include, every file the user has and every version the user had. It also involves all the indexes and references and some optimizations. But the problem is the more the files, the harder it gets. “Has anybody ever checked out the Linux tree? You check out that tree, get your coffee, come back check out, any branch I don't care what it is and then dip that against current master. How many files just got read on the file system?” - Jason Plum When you come back you realize that all the files that are marked as different and between the two of them when you do diff, that information is not stored, it's not greeting and it is not even cutting it out. It is running differently on all of those files. Imagine how bad that gets when you have 10 million lines of code in a repository that's 15 years old ?  That’s expensive in terms of performance.  - Jason Plum Traditional methods - A big problem “Now let's actually go and make a branch make some changes and commit them right. Now you push them up to your fork and now you go into add if you on an M R. Now it's my job to do the thing that was already hard on your laptop, right? Okay cool, that's one of you, how about 10,000 people a second right do you see where this is going? Suddenly it's harder but why is this the problem?” - Jason Plum The answer is traditional methods, as they are quite slow. If we have hundreds of things in the fleet, accessing tens of machines that are massive and it still won’t work because the traditional methods are a problem. Is NFS a solution to this problem? NFS (Network File System) works well when there are just 10 or 100 people. But if a user is asked to manage an NFS server for 5,000 people, one might rather choose pitchfork. NFS is capable but it can’t work at such a scale. The Git team now has a mount that has to be on every single node, as the API code and web code and other processes which needs to be functional enough to read the files. The team has previously used Garrett, Lib Git to read the files on the file system. Every time, one reads the file, the whole file used to get pulled. This gave rise to another problem, disk i/o problems. Since, everybody tries to read the disparate set of files, the traffic increases. “Okay so we have definitely found a scaling limit now we can only push the traditional methods of up and out so far before we realize that that's just not going to work because we don't have big enough pipes, end of line. So now we've got all of this and we've just got more of them and more of them and more of them. And all of a sudden we need to add 15 nodes to the fleet and another 15 nodes to the fleet and another 15 nodes to the fleet to keep up with sudden user demand. With every single time we have to double something the choke points do not grow - they get tighter and tighter” - Jason Plum The team decided to take a second look at the problem and started working on a project called Gitaly. They took the API calls that the users would make to live Git. So the Git mechanics was sent over a GRPC and then Gitaly was put on the actual file servers. Further the users were asked to call for a diff on whatever they want and then Gitaly was asked for the response. There is no need of NFS now. “I can send a 1k packet get a 4k response instead of NFS and reading 10,000 files. 
We centralized everything across and this gives us the ability to actually meet throughput because that pipe that's not getting any bigger suddenly has 1/10 of the traffic going through it.” - Jason Plum This leaves more space for users to easily get to the file servers and further removes the need of NFS mounts for everything. Incase one node is lost then half of the fleet is not lost in an instant. How is Gitaly useful? With Gitaly the throughput requirement significantly reduced. The service nodes no more need disk access. It provides optimization for specific problems. How to solve Git’s performance related issue? For better optimization and performance it is important to treat it like a service or like a database. The file system is still in use and all of the accesses to the files are on the node where we have the best performance and best caching and there is no issue with regards to the network. “To take the monolith and rip a chunk out make it something else and literally prop the thing up, but how long are we going to be able to do this?” - Jason Plum If a user plans to upload something then he/she has to use a file system and which means that NFS hasn't gone away. Do we really need to have NFS because somebody uploaded a cat picture? Come on guys we can do better than that right?- Jason Plum The next solution was to take everything as a traditional file that does not get and move into object store as an option. This matters because there is no need to have a file system locally. The files can be handed over to a service that works well. And it could run on Prem in a cloud and can be handled by any number of men and service providers. Pets cattle is a popular term by CERN which means anything that can be replaced easily is cattle and anything that you have to care and feed for on a regular basis is a pet. The pet could be the stateful information, for example, database. The problem can be better explained with configuring the Omnibus at scale. If there are  hundreds of the VM’s and they are getting installed, further which the entire package is getting installed. So now there are 20 gigabytes per VM. The package needs to be downloaded for all the VM’s which means almost 500 megabytes. All the individual components can be configured out of the Omnibus. But even the load gets spreaded, it will still remain this big. And each of the nodes will at least take two minutes to come up from. So to speed up this process, the massive stack needs to be broken down into chunks and containers so they can be treated as individualized services. Also, there is no need of NFS as the components are no longer bound to the NFS disk. And this process would now take just five seconds instead of two minutes. A problem called legacy debt, a shared file system expectation which was a bugger. If there are separate containers and there is no shared disk then it could again give rise to a problem. “I can't do a shared disk because if we do shared disk through rewrite many. What's the major provider that will do that for us on every platform, anybody remember another three-letter problem.” - Jason Plum Then there came an interesting problem called workhorse, a smart proxy that talks to the UNIX sockets and not TCP. Though this problem got fixed. 
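To illustrate the object-storage move described above: instead of writing an upload to a shared NFS mount that every node has to see, the application hands the file to an object store and keeps only a reference to it. The sketch below is a hedged illustration using the AWS SDK for Java (v1) against S3; the bucket and key names are made up, GitLab itself implements this pattern in Ruby and Go rather than Java, and any S3-compatible store would work the same way.

```java
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import java.io.File;

public class UploadToObjectStore {
    public static void main(String[] args) {
        // Credentials and region come from the usual AWS configuration chain.
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        // The upload goes straight to the object store; no node needs a shared
        // filesystem mount, and losing a node no longer risks losing the file.
        File upload = new File("cat.png");
        s3.putObject("gitlab-uploads-example", "avatars/user-42/cat.png", upload);

        // The application only stores the bucket/key reference in its database.
        System.out.println("stored s3://gitlab-uploads-example/avatars/user-42/cat.png");
    }
}
```

The design win is that state moves to a service built for replication and durability, so the application nodes themselves get closer to being disposable "cattle".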
Time constraints - another problem “We can't break existing users and we can't have hiccups we have to think about everything ahead of time plan well and execute.” - Jason Plum Time constraints is a serious problem for a project’s developers, the development resources milestones, roadmaps deliverables. The new features would keep on coming into the project. The project would keep on functioning in the background but the existing users can’t be kept waiting. Is it possible to define individual component requirements? “Do you know how much CPU you need when idle versus when there's 10 people versus literally some guy clicking around and if files because he's one to look at what the kernel would like in 2 6 2 ?”- Jason Plum Monitoring helps to understand the component requirements. Metrics and performance data are few of the key elements for getting the exact component requirements. Other parameters like network, throughput, load balance, services etc also play an important role. But the problem is how to deal with throughput? How to balance the services? How to ensure that those services are always up? Then the other question comes up regarding the providers and load balancers as everyone doesn’t want to use the same load balancers or the same services. The system must support all the load balancers from all the major cloud providers and which is difficult. Issues with scaling “Maybe 50 percent for the thing that needs a lot of memory is a bad idea. I thought 50 percent was okay because when I ran a QA test against it, it didn't ever use more than 50 percent of one CPU. Apparently when I ran three more it now used 115 percent and I had 16 pounds and it fell over again.” - Jason Plum It's important to know what things needs to be scaled horizontally and which ones needs to be scaled vertically. To go automated or manual is also a crucial question. Also, it is equally important to understand which things should be configurable and how to tweak them as the use cases may vary from project to project. So, one should know how to go about a test and how to document a test. Issues with resilience “What happens to the application when a node, a whole node disappears off the cluster? Do you know how that behaves?” - Jason Plum It is important to understand which things shouldn't be on the same nodes. But the problem is how to recover it. These things are not known and by the time one understands the problem and the solution, it is too late. We need new ways of examining these issues and for planning the solution. Jason’s insightful talk on Monolith to Microservice gives a perfect end to the KubeCon + CloudNativeCon and is a must watch for everyone. Kelsey Hightower on Serverless and Security on Kubernetes at KubeCon + CloudNative RedHat contributes etcd, a distributed key-value store project, to the Cloud Native Computing Foundation at KubeCon + CloudNativeCon Oracle introduces Oracle Cloud Native Framework at KubeCon+CloudNativeCon 2018
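As a footnote on the Gitaly idea described earlier - sending a small request and getting back just the answer, rather than mounting the repository over NFS and diffing thousands of files locally - here is a hedged sketch of what that request/response pattern looks like from a client's point of view. Gitaly itself is a gRPC service written in Go with its own generated client stubs; to keep the sketch self-contained it uses Java 11's built-in HTTP client against a made-up endpoint, so the URL and response shape are purely illustrative.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class DiffServiceClient {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // A tiny request: "give me the diff between these two refs".
        // The service sits next to the repository storage, does the expensive
        // file reads locally, and returns only the result.
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://git-service.internal/repos/42/diff?from=master&to=my-branch"))
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // A few kilobytes of diff come back over the wire, instead of the
        // client reading tens of thousands of files over a network mount.
        System.out.println(response.body());
    }
}
```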

8 programming languages to learn in 2019

Richard Gall
19 Dec 2018
9 min read
Learning new skills takes time - that's why, before learning something, you need to know that what you're learning is going to be worthwhile. This is particularly true when deciding which programming language to learn. As we approach the new year, it's a good time to reflect on our top learning priorities for 2019. But which programming should you learn in 2019? We’ve put together a list of the top programming languages to learn in the new year - as well as reasons you should learn them, and some suggestions for how you can get started. This will help you take steps to expand your skill set in 2019 in the way that’s right for you. Want to learn a new programming language? We have thousands of eBooks and videos in our $5 sale to help you get to grips with everything from Python to Rust. Python Python has been a growing programming language for some time and it shows no signs of disappearing. There are a number of reasons for this, but the biggest is the irresistible allure of artificial intelligence. Once you know Python, performing some relatively sophisticated deep learning tasks becomes relatively easy, not least because of the impressive ecosystem of tools that surround it, like TensorFlow. But Python’s importance isn’t just about machine learning. It’s flexibility means it has a diverse range of applications. If you’re a full-stack developer, for example, you might find Python useful for developing backend services and APIs; equally, if you’re in security or SRE, Python can be useful for automating aspects of your infrastructure to keep things secure and reliable. Put simply, Python is a useful addition to your skill set. Learn Python in 2019 if... You’re new to software development You want to try your hand at machine learning You want to write automation scripts Want to get started? Check out these titles: Clean Code in Python Learning Python Learn Programming in Python with Cody Jackson Python for Everyday Life [Video]       Go Go isn’t quite as popular as Python, but it is growing rapidly. And its fans are incredibly vocal about why they love it: it’s incredibly simple, yet also amazingly powerful. The reason for this is its creation: it was initially developed by Google that wanted a programming language that could handle the complexity of the systems they were developing, without adding to complexity in terms of knowledge and workflows. Combining the best aspects of functional and object oriented programming, as well as featuring a valuable set of in-built development tools, the language is likely to only go from strength to strength over the next 12 months. Learn Go in 2019 if… You’re a backend or full-stack developer looking to increase your language knowledge You’re working in ops or SRE Looking for an alternative to Python Learn Go with these eBooks and videos: Mastering Go Cloud Native programming with Golang Hands-On Go Programming Hands-On Full-Stack Development with Go       Rust In Stack Overflow’s 2018 developer survey Rust was revealed to be the best loved language among the developers using it. 80% of respondents said they loved using it or wanted to use it. Now, while Rust lacks the simplicity of Go and Python, it does do what it sets out to do very well - systems programming that’s fast, efficient, and secure. In fact, developers like to debate the merits of Rust and Go - it seems they occupy the minds of very similar developers. However, while they do have some similarities, there are key differences that should make it easier to decide which one you learn. 
At a basic level, Rust is better for lower level programming, while Go will allow you to get things done quickly. Rust does have a lot of rules, all of which will help you develop incredibly performant applications, but this does mean it has a steeper learning curve than something like Go. Ultimately it will depend on what you want to use the language for and how much time you have to learn something new. Learn Rust in 2019 if… You want to know why Rust developers love it so much You do systems programming You have a bit of time to tackle its learning curve Learn Rust with these titles: Rust Quick Start Guide Building Reusable Code with Rust [Video] Learning Rust [Video] Hands-On Concurrency with Rust       TypeScript TypeScript has quietly been gaining popularity over recent years. But it feels like 2018 has been the year that it has really broke through to capture the imagination of the wider developer community. Perhaps it’s just Satya Nadella’s magic... More likely, however, it’s because we’re now trying to do too much with plain old JavaScript. We simply can’t build applications of the complexity we want without drowning in lines of code. Essentially, TypeScript bulks up JavaScript, and makes it suitable for building applications of the future. It’s no surprise that TypeScript is now fundamental to core JavaScript frameworks - even Google decided to use it in Angular. But it’s not just for front end JavaScript developers - there are examples of Java and C# developers looking closely at TypeScript, as it shares many features with established statically typed languages. Learn TypeScript in 2019 if… You’re a JavaScript developer You’re a Java or C# developer looking to expand their horizons Learn TypeScript in 2019: TypeScript 3.0 Quick Start Guide TypeScript High Performance Introduction to TypeScript [Video]         Scala Scala has been around for some time, but its performance gains over Java have seen it growing in popularity in recent years. It isn’t the easiest language to learn - in comparison with other Java-related languages, like Kotlin, which haven’t strayed far from its originator, Scala is almost an attempt to rewrite the rule book. It’s a good multi-purpose programming language that brings together functional programming principles and the object oriented principles you find in Java. It’s also designed for concurrency, giving you a scale of power that just isn’t possible. The one drawback of Scala is that it doesn’t have the consistency in its ecosystem in the way that, say, Java does. This does mean, however, that Scala expertise can be really valuable if you have the time to dedicate time to really getting to know the language. Learn Scala in 2019 if… You’re looking for an alternative to Java that’s more scalable and handles concurrency much better You're working with big data Learn Scala: Learn Scala Programming Professional Scala [Video] Scala Machine Learning Projects Scala Design Patterns - Second Edition       Swift Swift started life as a replacement for Objective-C for iOS developers. While it’s still primarily used by those in the Apple development community, there are some signs that Swift could expand beyond its beginnings to become a language of choice for server and systems programming. The core development team have consistently demonstrated they have a sense of purpose is building a language fit for the future, with versions 3 and 4 both showing significant signs of evolution. 
Fast, relatively easy to learn, and secure, not only has Swift managed to deliver on its brief to offer a better alternative to Objective-C, it also looks well-suited to many of the challenges programmers will be facing in the years to come. Learn Swift in 2019 if… You want to build apps for Apple products You’re interested in a new way to write server code Learn Swift: Learn Swift by Building Applications Hands-On Full-Stack Development with Swift Hands-On Server-Side Web Development with Swift         Kotlin It makes sense for Kotlin to follow Swift. The parallels between the two are notable; it might be crude, but you could say that Kotlin is to Java what Swift is to Objective-C. There are, of course, some who feel that the comparison isn’t favorable, with accusations that one language is merely copying the other, but perhaps the similarities shouldn’t really be that surprising - they’re both trying to do the same things: provide a better alternative to what already exists. Regardless of the debates, Kotlin is a particularly compelling language if you’re a Java developer. It works extremely well, for example, with Spring Boot to develop web services. Certainly as monolithic Java applications shift into microservices, Kotlin is only going to become more popular. Learn Kotlin in 2019 if… You’re a Java developer that wants to build better apps, faster You want to see what all the fuss is about from the Android community Learn Kotlin: Kotlin Quick Start Guide Learning Kotlin by building Android Applications Learn Kotlin Programming [Video]         C Most of the languages on this list are pretty new, but I’m going to finish with a classic that refuses to go away. C has a reputation for being complicated and hard to learn, but it remains relevant because you can find it in so much of the software we take for granted. It’s the backbone of our operating systems, and used in everyday objects that have software embedded in them. Together, this means C is a language worth learning because it gives you an insight into how software actually works on machines. In a world where abstraction and accessibility rules the software landscape, getting beneath everything can be extremely valuable. Learn C in 2019 if… You’re looking for a new challenge You want to gain a deeper understanding of how software works on your machine You’re interested in developing embedded systems and virtual reality projects Learn C: Learn and Master C Programming For Absolute Beginners! [Video]

Python governance vote results are here: The steering council model is the winner

Prasad Ramesh
18 Dec 2018
3 min read
The election to select a governance model for Python, following Guido van Rossum's decision to step down as BDFL earlier this year, has ended, and PEP 8016 was selected as the winner. PEP 8016 is the steering council model, which focuses on providing a minimal and solid foundation for governance decisions. The vote has chosen a governance PEP that will now be implemented for the Python project.

The winner: PEP 8016, the steering council model
Authored by Nathaniel J. Smith and Donald Stufft, this proposal sets out a model for Python governance based on a steering council. The council has vast authority, which it intends to use as rarely as possible; instead, it plans to use this power to establish standard processes. The steering council consists of five people. A general philosophy is followed - it's better to split up large changes into a series of small changes that can be reviewed independently. As opposed to trying to do everything in one PEP, the focus is on providing a minimal and solid foundation for future governance decisions. This PEP was accepted on December 17, 2018.

Goals of the steering council model
The main goals of this proposal are:
Sticking to the basics, aka 'be boring'. The authors don't think Python is a good place to experiment with new and untested governance models. Hence, this proposal sticks to mature, well-known processes that have been tested previously. It takes a high-level approach in which the council stays out of low-level detail, as is common in large, successful open source projects; the low-level details are directly derived from Django's governance.
Being simple enough for minimum viable governance. The proposal attempts to slim things down to the minimum required - just enough to make it workable. The trimming covers the council, the core team, and the process for changing the governance document itself.
Being comprehensive. The things that need to be defined are covered well for future use, and having a clear set of rules will also help minimize confusion.
Being flexible and light-weight. The authors are aware that finding the best processes for working together will take time and experimentation. Hence, they keep the document as minimal as possible, for maximal flexibility to adjust things later. The need for heavy-weight processes like whole-project votes is also minimized.

The council will work towards maintaining the quality and stability of the Python language and the CPython interpreter. It will make the contribution process easier, maintain relations with the core team, establish a decision-making process for PEPs, and so on. It has the power to make decisions on PEPs, enforce the project code of conduct, and more. To know more about the election to the council, visit the Python website.

NumPy drops Python 2 support. Now you need Python 3.5 or later.
NYU and AWS introduce Deep Graph Library (DGL), a python package to build neural network graphs
Python 3.7.2rc1 and 3.6.8rc1 released

11 predictions for the future of programming

Guest Contributor
19 Nov 2018
12 min read
It’s been over five decades since programming pushed the boundaries of digital craftsmanship, and it is still doing so with no signs of stopping or slowing down. There is a new tool, framework, add-on, functionality, technology, or a programming language breaking the Internet every now and then. Any adept programmer not only needs to be good at coding but also has to stay abreast with the ongoing and upcoming happenings in the programming world. Just learning to code does not a give you a big edge over others. By having a good idea of what’s coming ahead,  present steps can be planned effectively. Obviously, no one can perfectly forecast the future of computer programming, but that won’t stop us from speculating, right! Here are 11  predictions for the future of programming that we think programmers should keep an eye on. #1 Cloud native as the new default Do you know that in order to cater to a single search query, Google Search uses more than 1000s of servers? All this is done in order to serve the right results. Cloud has been popular for past one decade but it’s destined to grow immensely in the future as more and more developers intend to use cloud for faster go to market. Tinkering in the cloud to build an app is so much easier as compared to managing your own servers as you don’t have to buy new servers, maintain them, upgrade them, or add new servers as and when the demand fluctuates. Web users are an impatient lot these days; so making web pages faster is the main goal for developers. 40% of people abandon a website that takes more than 3 seconds to load. More efficient algorithms save a few microseconds whereas additional impetus is provided by the rapidly developing enhanced servers. #2 IoT security concerns will escalate IoT is a growing technological concept these days. The promising piece of tech has already made it to the market, although in a limited form. Any smart device is just like a computer or machine that can be hacked by means of feeding some simple malicious lines of code. So, security of IoT devices is as important as their deployment. Or else, we will have to face dire consequences, as experienced recently in the form of a North Korean hacker charged for WannaCry ransomware and a 16 year old hacking into Apple’s servers to access customer data. Programmers need to develop suspicious-activity-proof algorithms for IoT devices. Failing to do so will not only make the devices vulnerable to unintended use but also put the entire system at risk. Hence, with the growth in the IoT market, concern about its safety will also mushroom. #3 Video Content will continue to dominate the Web In order to solve the dire glitches caused by plugins, the HTML standards committee started embedding video tags into HTML. Videos tags are programmable by virtue of the fact that basic video tags respond to JavaScript commands. Earlier video content was fixed. If you watch a video about dogs fighting cats, then you will be recommended just that. Nothing more, nothing less. However, this is not the case anymore. It is the time of seamless canvas design, in which web designers figure out clever ways to deploy different video content. Doing so allows the user to steer the way in which a narrative is unfolded and it opens up new ways of interacting with the video content. Now machine learning can deliver higher-quality streaming experiences that do not buffer as much as many existing systems. 
More efficient codecs and better video compression are also playing a role in making video a better digital consumption medium. Again, programming makes it feasible, as video tags and iframe are part of the programming code. #4 Consoles, consoles everywhere Thanks to the groundbreaking progress in video game console technology, PCs are continuously being rejected in favor of gaming consoles. Living room consoles are just the start. With the concept of intelligent devices, makers of other household items are also looking to make their offerings smarter. Our hairdryers and toasters are already boasting digital memory, allowing for remembering our preferences. However, the time when these, and other household units as well, will start communicating with each other i.e. exchanging information on their own is yet to come. All of these scenarios are only made possible by programming. As several programmers have already embarked on the journey for achieving results in the same direction, we might not be that far away from a time when the aforementioned scenario would be a day-to-day reality. #5 Data is important, data will be important Data is the backbone of the network of networks i.e. the Internet. What we see, read, and hear over the gigantic web is data, loads and loads of it. However, data collection is not something new for humanity. Since antiquity, humans have collected and stored large chunks of data for churning out important information at some later time. With the passage of time, enriching and protecting data have become important. While the former is achieved by presenting data in the form of videos, pictures, pie charts, etc., the latter is accomplished by adding SSL to the website and using better encryption techniques. Data processing has become equally important just like the digital ecosphere itself. In the enterprise community, data gathering will branch out more elaborately into storing, curating, and parsing. Simply said, data is and data will be the undisputed champion in the Digital World. #6 Machine Learning dominance Machine Learning is already flourishing and seeping into everyday enterprise and life. For example, machine learning algorithms are already finding a place in important automation code for big businesses. They are used for heaping big data projects. Languages like the R programming language and Python have enabled this proliferation of machine learning, so far. What’s amazing about machine learning is that it is slowly being integrated into modern life. It will soon become a common entity in a person’s life, just like smartphones and IoT. Again, machine learning also requires services of programming and code, of course. No code, no machine learning. At least for now. There is the rise of machine learning as a service trend which aims to remove or minimize programming. However, if we ever learning anything from the history of web development, even as drag and drop web design tools grow, professional web developers also grow in demand. We can expect to see a similar trend with machine learning as it continues down the path of democratization. #7 User Interface design will continue gaining popularity The time when an Internet user was expected to use a keyboard and mouse is long gone. With each passing day, using a PC is preferred less and less. Apart from offices and college laboratories, PCs are gradually being replaced by other smart devices. As smartphones, tablets, living room consoles, etc. take on the world, the emphasis on UI has heightened. 
A touch and a click on the screen is different. With the advancement in technology, the former is given preference. This is because it’s quick and convenient at the same time. Furthermore, face and fingerprint recognition are the new cool. Research on voice control is also advancing. Many brands have already incepted their very own virtual assistants, such as Amazon Alexa, Siri and Google Assistant, which can recognize the demands of their users with mere voice commands and interaction. For example, Android 9 Pie comes with a number of UI alterations to stay relevant with the present UI scenario, including a new position for the volume controls and Material Theming. The latter is a built-in Android toolset meant for customizing the Material Design supported by the Android. Again, designing a powerful user interface is dependent on great programming. A user interface needs not only to be robust only but also show signs of intuitiveness and interactivity. The stress on UI designing will continue growing in the future. Some of the upcoming UI trends forecasted for 2019 are the overlapping effect, functional animations, and contrast of fonts. #8 Open Source vs. Closed Development Nearly all laptops run on proprietary software but Smartphones with Android leading the race are mostly open source. iOS is still closed but it has a robust set of APIs on which developers can build their own empires. While open source software is something that anyone can tinker with, closed development environment restricts 3rd-party from accessing and toying with such a system. Among other differences between the two, a significant difference is in the quality of support. This is, obviously, better offered by closed source software. Open source is rocking the world with new developers entering into programming by tinkering with open source whereas closed environment is also growing tremendously because of personalization and security features. This is one hell of a competition. #9 Autonomous Transportation Another industry that requires services of programming is the autonomous vehicles. Just yesterday, Waymo announced that their first driverless cars will be on the road commercially next month. So far, we have only seen some of the many accomplishments that a driverless mode of transportation can achieve. Though we have only cars, for now, that is making use of autonomous transportation algorithms, soon other transportation means will also join the parade. There are already crowdfunding projects for autonomous skateboards. Known as XTND Board, it is a lightweight electric vehicle meant to redefine commuting. Autonomous aircrafts are  already being used in the military. However, pilotless airplane transportation may just be around the corner. All it requires is an excellent programming code to allow a vehicle to know that what route should it chose. So, maybe flights might become autonomous after rides. #10 The Law will redefine new limits Writing code is like fixing something, setting up protocols. What the program will do and what it won’t, depends entirely on the coding. However, there are several ways to manipulate harmful programming code. There’s a subtle analogy between programming code and law and both have their own jurisdictions. Though there is a bright, sunny side to the technological advancement, there’s also a darker side of the same that needs to be reviewed and regulated. 
As years will pass from this point in time, programmers will face real-world challenges to assist the Law & Order to sustain the malicious content of the society, both on the digital front and the real-world front. We have already seen how adding technology to law works. However, the other side is that it can also act as a tool to break the law(s). Cyberattacks, identity theft, and data laundering are some of the notable examples made possible by technology. This is a question which is also its own solution. In order to prevent such insincere acts, security personnel need to think like bypassers. This is where ethical hacking comes in. It is simply thinking and operating like a malicious hacker but doing so for the right cause. #11 Containers will continue to rule Theoretically, there isn’t a need for the so-called containers, which are heavily deployed in the modern-day programming. In theory, the executable files can run anywhere and various requisite permissions, such as using hardware, are given by the OS. Hence, there is, theoretically, no requirements for a container. However, because of being theoretical, all executables are considered the same. Obviously, this is not the general case. What happens is that executables are different and each one of them requires specific libraries to run. For instance, the WORA (Write Once, Run Anywhere) chant of Java fails owing to the virtue that there are several different versions of virtual machines (VMs). Though using a comprehensive VM might solve the issue, the solution lacks practicality. On the other hand, the sleek and lightweight containers win the preference. Containers are the solution to the issue of reliability caused by a software when it is to be migrated from one computing environment to another. A container is simply a complete package that contains an entire runtime environment with the application, its dependencies and libraries, other required binaries, configuration files, etc. So, when a container of a specific application has everything in it that it requires to operate, the container becomes independent of the platform. The containers will continue to rule in the future up ahead. If you are new to programming, you can check out programming terms for beginners to kickstart your coding journey. These were the future predictions that we can think of. Do want to add anything else? Please feel free to do so in the comments below. Author Bio Saurabh has worked globally for telecom and finance giants in various capacities. After working for a decade in Infosys and Sapient, he started his first startup, Leno, to solve a hyperlocal book-sharing problem. He is interested in product marketing, and analytics. His latest venture Hackr.io recommends the best online programming and design courses for every programming language. All the tutorials are submitted and voted by the programming community. What we learned from IBM Research’s ‘5 in 5’ predictions presented at Think 2018. “Deep learning is not an optimum solution for every problem faced”: An interview with Valentino Zocca. Why does the C programming language refuse to die?  

Observability as code, secrets as a service, and chaos katas: ThoughtWorks outlines key engineering techniques to trial and assess

Richard Gall
14 Nov 2018
5 min read
ThoughtWorks has just published vol. 19 of its essential radar report. As always, it's a vital insight into what's beginning to emerge in the technology field. In the techniques quadrant of its radar, there were some really interesting new entries. Let's take a look at some of them now, so you can better plan and evaluate your roadmap and skill set for 2019. 8 of the best new techniques you should be trialling (according to ThoughtWorks) 1% canary: a way to build better feedback loops This sounds like a weird one, but the concept is simple. It's essentially about building a quick feedback loop to a tiny segment of customers - say, 1%. This can allow engineering teams to learn things quickly and make changes on other aspects of the project as it evolves. Bounded buy: a smarter way to buy out-of-the-box software solutions Bounded buy mitigates the scope creep that can cause headaches for businesses dealing with out-of-the-box software. It means those responsible for purchasing software focus only on solutions that are modular, with each 'piece' directly connecting into a particular department's needs or workflow. Crypto shredding: securing sensitive data Crypto shredding is a method of securing data that might otherwise be easily replicated or copied. Essentially, it overwrites sensitive data with encryption keys which can easily be removed or deleted. It adds an extra layer of control over a large data set - a technique that could be particularly useful in a field like healthcare. Four key metrics - focus on what's most important to build a high performance team Building a high performance team, can be challenging. Accelerate, the team behind the State of DevOps report, highlighted key drivers that engineers and team leaders should focus on: lead time, deployment frequency, mean time to restore (MTTR), and change fail percentage. According to ThoughtWorks "each metric creates a virtuous cycle and focuses the teams on continuous improvement." Observability as code - breaking through the limits of traditional monitoring tools Observability has emerged as a bit of a buzzword over the last 12 months. But in the context of microservices, and increased complexity in software architecture, it is nevertheless important. However, the means through which you 'do' observability - a range of monitoring tools and dashboards - can be limiting in terms of making adjustments and replicating dashboards. This is why treating observability as code is going to become increasingly more important. It makes sense - if infrastructure as code is the dominant way we think about building software, why shouldn't it be the way we monitor it too? Run cost as architecture fitness function There's a wide assumption that serverless can save you money. This is true when you're starting out, or want to do something quickly, but it's less true as you scale up. If you're using serverless functions repeatedly, you're likely to be paying a lot - more than if you has a slightly less fashionable cloud or on premise server. To combat this complacency, you should instead watch how much services cost against the benefit delivered by them. Seems obvious, but easy to miss if you've just got excited about going serverless. Secrets as a service Without wishing to dampen what sounds incredibly cool, secrets as a service are ultimately just elaborate password managers. They can help organizations more easily decouple credentials, API keys from their source code, a move which should ensure improved security - and simplicity. 
By rotating credentials regularly, organizations are much better prepared to tackle and mitigate any security issues. AWS offers 'Secrets Manager', while HashiCorp's Vault provides similar functionality.

Security chaos engineering

In the last edition of the Radar, security chaos engineering was in the assess phase - which means ThoughtWorks thought it was worth looking at, but perhaps too early to deploy. With volume 19, security chaos engineering has moved into trial. While chaos engineering more broadly has seen slower adoption, it would seem that over the last 12 months the security field has taken chaos engineering to heart.

2 new software engineering techniques to assess

Chaos katas

If chaos engineering is finding it hard to gain mainstream adoption, perhaps chaos katas are the way forward. This is a technique that helps engineers deploy chaos practices in their respective domains using the training approach known as kata - a Japanese word that refers to a set of choreographed movements. In this context, the 'katas' are a set of code patterns that implement failures in a structured way, which engineers can then identify and explore. It is essentially a bottom-up way of doing chaos engineering that also gives engineers a deeper insight into their software infrastructure.

Infrastructure configuration scanner

The question of who should manage your infrastructure is still a tricky one, with plenty of conflicting perspectives. However, from a productivity and agility perspective, putting the infrastructure in the hands of engineers makes a lot of sense. Of course, this could feel like an extra burden - but with an infrastructure configuration scanner, like Scout2 or Watchmen, engineers can check that everything is configured correctly.

Software engineering techniques need to maintain simplicity as complexity increases

There's clearly a diverse range of techniques on the ThoughtWorks Radar. Ultimately, however, the picture that emerges is one where efficiency and observability are key. A crucial part of software engineering will be managing increased complexity and developing new tools and processes to instil some degree of simplicity and clarity. Was there anything ThoughtWorks missed?


Why does the C programming language refuse to die?

Kunal Chaudhari
23 Oct 2018
8 min read
As a technology research analyst, I try to keep pace with the changing world of technology. It seems like every single day a new programming language, framework, or tool emerges out of nowhere. In order to keep up, I regularly have a peek at the listicles on TIOBE, PyPL, and Stack Overflow, along with some Twitter handles and popular blogs, which keeps my FOMO (fear of missing out) in check.

So here I was, strolling through the TIOBE index, to see if a new programming language was making the rounds or if any old-timer language was facing its doomsday in the lower half of the table. The first thing that caught my attention was Python, which interestingly broke into the top 3 for the first time since it was ranked by TIOBE. I never cared to look at Java, since it has been claiming the throne ever since it became popular. But then, with my pupils dilated, I saw something I would never have expected, especially with the likes of Python, C#, Swift, and JavaScript around. There it was, the language everyone seemed to have forgotten about: C, sitting in second position, like an old tower among the modern skyscrapers of New York.

A quick scroll down shocked me even more: C was only recently named the language of 2017 by TIOBE. It won because of its impressive yearly growth of 1.69% and its consistency - C has featured in the top 3 for almost four decades now. This result was in stark contrast to many news sources (including Packt's own research) that regularly place languages like Python and JavaScript at the top of their polls. But surely this was an indicator of something. Why would a language that is almost 50 years old still hold its ground against the ranks of newer programming languages?

C has a design philosophy for the ages

A solution to the challenges of UNIX and Assembly

The 70s was a historic decade for computing. Many notable inventions and developments took place, particularly in the areas of networking, programming, and file systems. UNIX was one such revolutionary milestone, but the biggest problem with UNIX was that it was programmed in Assembly language. Assembly was fine for machines, but difficult for humans.

Watch now: Learn and Master C Programming For Absolute Beginners

So the team working on UNIX - namely Dennis Ritchie, Ken Thompson, and Brian Kernighan - decided to develop a language which could understand data types and supported data structures. They wanted C to be as fast as Assembly but with the features of a high-level language. And that's how C came into existence, almost out of necessity. The principles on which the C programming language was built were not coincidental, either. It compelled programmers to write better code and strive for efficiency, rather than be productive through layers of abstraction. Let's discuss some of the features that make C a language to behold.

Portability leads to true ubiquity

When you search for the biggest feature of C, almost instantly you are bombarded with articles on portability, which makes you wonder what it is about portability that keeps C relevant in the modern world of computing. Well, portability can be defined as the measure of how easily software can be transferred from one computer environment or architecture to another. One can also argue that portability is directly proportional to how flexible your software is.
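To make that idea a little more concrete, here is a small sketch: a plain, standards-conforming C program that relies only on the standard library and the fixed-width types from stdint.h. It is illustrative rather than anything from the article, but it should compile unchanged with any reasonably modern (C99 or later) compiler - GCC, Clang, MSVC, or an embedded vendor toolchain.

```c
#include <stdint.h>
#include <stdio.h>

/* Fixed-width types behave identically whether this is built for an
   8-bit microcontroller or a 64-bit desktop, which is a large part of
   what "portable C" means in practice. */
static uint32_t checksum(const uint8_t *data, size_t len)
{
    uint32_t sum = 0;
    for (size_t i = 0; i < len; i++) {
        sum += data[i];
    }
    return sum;
}

int main(void)
{
    const uint8_t payload[] = { 0x01, 0x02, 0x03, 0xFF };
    printf("checksum = %u\n", (unsigned)checksum(payload, sizeof payload));
    return 0;
}
```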
Applications or software developed using C are considered to be extremely flexible because you can find a C compiler for almost every possible platform available today. So if you develop your application while exercising some discipline to write portable code, you have an application which runs on virtually every major platform.

Programmer-driven memory management

It is universally accepted that C is a high-performance language. The primary reason for this is that it works very close to the machine, almost like an Assembly language. But very few people realize that versatile features like explicit memory management make C one of the better-performing languages out there. Memory management allows programmers to scale down a program to run with a small amount of memory. This feature was important in the early days because computers - or terminals, as they used to call them - were not as powerful as they are today. But the advent of mobile devices and embedded systems has renewed programmers' interest in C, because these devices demand that memory requirements be kept to a minimum.

Many programming languages today provide functionality like garbage collection that takes care of memory allocation. But C calls programmers' bluff by asking them to be very specific. This makes their programs, and their use of memory, efficient and inherently fast. Manual memory management also makes C one of the most suitable languages for developing other programming languages: even in a garbage-collected language, someone has to take care of memory allocation, and that infrastructure is provided by C.
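As a minimal sketch of what that explicit control looks like (the sizes and growth step here are purely illustrative), every allocation is requested, resized, and released by the programmer, with no collector running behind the scenes:

```c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t count = 4;

    /* Ask the allocator for exactly as much memory as we need. */
    int *values = malloc(count * sizeof *values);
    if (values == NULL) {
        fprintf(stderr, "out of memory\n");
        return EXIT_FAILURE;
    }

    for (size_t i = 0; i < count; i++) {
        values[i] = (int)(i * i);
    }

    /* Grow the buffer explicitly when requirements change. */
    int *bigger = realloc(values, 2 * count * sizeof *bigger);
    if (bigger == NULL) {
        free(values);
        return EXIT_FAILURE;
    }
    values = bigger;

    printf("last initial value: %d\n", values[count - 1]);

    /* Give the memory back when done - the cost of every allocation
       stays visible to the programmer. */
    free(values);
    return EXIT_SUCCESS;
}
```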
Structure is all I got

As discussed before, Assembly was difficult to work with, particularly when dealing with large chunks of code. C takes a structured approach in its design, which allows programmers to break a program down into multiple blocks of code for execution, often called procedures or functions. There are, of course, multiple ways in which software development can be approached. Structured programming is one such approach, and it is effective when you need to break a problem down into its component pieces and then convert them into application code. Although it might not be quite as in vogue as object-oriented programming is today, this approach is well suited to tasks like database scripting or developing small programs with logical sequences that carry out specific sets of tasks. As one of the best languages for structured programming, it's easy to see how C has remained popular, especially in the context of embedded systems and kernel development.

Applications that stand the test of time

If Beyoncé had been a programmer, she might well have sung "Who runs the world? C developers". And she would have been right. If you're using a digital alarm clock, a microwave, or a car with anti-lock brakes, chances are they were programmed in C. Though it was never developed specifically for embedded systems, C has become the de facto programming language for embedded developers, systems programmers, and kernel development.

C: the backbone of our operating systems

We already know that the world-famous UNIX system was developed in C, but is it the only popular application developed using C? You'll be astonished by the list that follows. The desktop operating system market is dominated by three major operating systems: Windows, macOS, and Linux - and the kernel of each has been developed using the C programming language. Similarly, Android, iOS, and Windows are among the popular mobile operating systems whose kernels were developed in C. Just like UNIX, the development of Oracle Database began in Assembly and then switched to C; it's still widely regarded as one of the best database systems in the world. Not only Oracle but MySQL and PostgreSQL have also been developed using C - the list goes on and on.

What does the future hold for C?

So far we have discussed the high points of C programming: its design principles and the applications developed using it. But the bigger question is what its future might hold. The answer is tricky, but there are several indicators showing positive signs. IoT is one domain where the C programming language shines. Whether or not beginner programmers should learn C has been a topic of debate everywhere. The general consensus is that learning C is always a good thing, as it builds up your fundamental knowledge of programming and it looks good on a resume. But IoT provides another reason to learn C, given the rapid growth of the IoT industry. We have already seen the massive number of applications built on C, and their codebases are still maintained in it. Switching to a different language means increased cost for a company, and since C is used by numerous enterprises across the globe, the demand for C programmers is unlikely to vanish anytime soon.

Read Next

Rust as a Game Programming Language: Is it any good?

Google releases Oboe, a C++ library to build high-performance Android audio apps

Will Rust Replace C++?

Mozilla shares plans to bring desktop applications, games to WebAssembly and make deeper inroads for the future web

Prasad Ramesh
23 Oct 2018
10 min read
WebAssembly defines an Abstract Syntax Tree (AST) in a binary format and a corresponding assembly-like text format for executable code in Web pages. It can be considered a new language or a web standard, and you can create and debug code in the plain text format. It appeared in browsers last year, but that was just a barebones version. Many new features are on the way that could transform what you can do with WebAssembly.

The minimum viable product (MVP)

WebAssembly started with Emscripten, a toolchain that made C++ code run on the web by transpiling it to JavaScript. But the automatically generated JS was still significantly slower than native code. Mozilla engineers found a type system hidden in the generated JS and figured out how to make this JS run really fast - that subset is now called asm.js. Going further was not possible in JavaScript itself, so a new language was needed, designed specifically to be compiled to. Thus was born WebAssembly. Here is what was needed to get the MVP of WebAssembly running:

Compile target: a language-agnostic compile target, to support more languages than just C and C++.

Fast execution: the compile target had to be designed for speed in order to keep up with user expectations of smooth interactions.

Compact: a compact compile target that fits and loads quickly, even for pages carrying the large code bases of web apps or desktop apps ported to the web.

Linear memory: a linear model is used to give access to specific parts of memory and nothing else. This is implemented using TypedArrays, which are similar to JavaScript arrays except that they only contain bytes of memory.

This was the MVP vision of WebAssembly. It allowed many different kinds of desktop applications to work in your browser without compromising on speed.
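To make the compile-target and linear-memory ideas a little more tangible, here is a sketch of the C side of a tiny module. The function name is invented for the example, and the Emscripten command in the comment is an assumption (exact flags vary between versions); the point is simply that ordinary C code operating on plain memory is what ends up compiled into the .wasm binary, with JavaScript seeing that same memory as a TypedArray view.

```c
/* add.c - a minimal function meant to be compiled to WebAssembly,
   e.g. with Emscripten (something like `emcc add.c -O2 -o add.js`;
   the exact command line is an assumption and varies by version). */
#include <stdint.h>
#include <stddef.h>

/* Sums a buffer that the JavaScript side would write into the module's
   linear memory, passing the offset of the data as the pointer. */
int32_t sum_int32(const int32_t *values, size_t count)
{
    int32_t total = 0;
    for (size_t i = 0; i < count; i++) {
        total += values[i];
    }
    return total;
}
```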
Heavy desktop applications

The next goal is to run heavyweight desktop applications in the browser - something like Photoshop or Visual Studio. There are already some implementations of this, such as Autodesk AutoCAD and Adobe Lightroom. Several more pieces are needed:

Threading: to support the use of the multiple cores in modern CPUs. A proposal for threading is almost done. SharedArrayBuffers, an important part of threading, had to be turned off this year due to the Spectre vulnerability; they will be turned on again.

SIMD: single instruction, multiple data (SIMD) makes it possible to take a chunk of memory and split it up across different execution units/cores. It is under active development.

64-bit addressing: 32-bit memory addresses only allow 4GB of linear memory to be addressed; 64-bit addresses give 16 exabytes. The approach to incorporating this will be similar to how x86 or ARM added support for 64-bit addressing.

Streaming compilation: compiling a WebAssembly file while it is still being downloaded. This allows very fast compilation, resulting in faster web applications.

Implicit HTTP caching: the compiled code of a web page is stored in the HTTP cache and reused, so compilation is skipped for any page you have already visited.

Other improvements: there are upcoming discussions on how to improve load times even further.

Once these features are implemented, even heavier apps will be able to run in the browser.

Small modules interoperating with JavaScript

In addition to heavy applications and games, WebAssembly is also for regular web development. Sometimes, small modules in an app do a lot of the work, and the intent is to make it easier to port these modules. This is already happening with heavy applications, but for widespread use a few more things need to be in place:

Fast calls between JS and WebAssembly: integrating a small module requires a lot of calls between JS and WebAssembly, so the goal is to make these calls faster. In the MVP the calls weren't fast. They are now fast in Firefox, and other browsers are working on it.

Fast and easy data exchange: with JS and WebAssembly calling each other frequently, data also needs to be passed between them. The challenge is that WebAssembly only understands numbers, so passing complex values is currently difficult: an object has to be converted into numbers, put into linear memory, and WebAssembly is passed its location in linear memory. There are many proposals underway, the most notable being the tools the Rust ecosystem has created to automate this.

ESM integration: the WebAssembly module isn't actually part of the JS module graph. Currently, developers instantiate a WebAssembly module using an imperative API; ECMAScript module integration is necessary to use import and export with JS. The proposals have made progress, and work with other browser vendors has been initiated.

Toolchain integration: there needs to be a place to distribute and download the modules, and tools to bundle them. While there is no need for a separate ecosystem, the tools do need to be integrated. There are already tools like wasm-pack that automate this.

Backwards compatibility: to support older browsers, even versions that predate WebAssembly. This helps developers avoid writing a second implementation just for old browsers. There's a wasm2js tool that takes a wasm file and outputs JS; it won't be as fast, but it will work in older versions.

The proposal for small modules in WebAssembly is close to completion, and once it lands it will open up the path for work on the following areas.

JS frameworks and compile-to-JS languages

There are two use cases here: rewriting large parts of JavaScript frameworks in WebAssembly, and compiling statically-typed compile-to-JS languages to WebAssembly instead of JS. For this to happen, WebAssembly needs to support high-level language features:

Garbage collector: integration with the browser's garbage collector, to speed things up by working with components managed by the JS VM. Two proposals are underway and should be incorporated sometime next year.

Exception handling: better support for exception handling, to handle the exceptions and actions coming from different languages. C#, C++, and JS use exceptions extensively. This is in the R&D phase.

Debugging: the same level of debugging support as JS and compile-to-JS languages. There is support in browser devtools, but it is not ideal. A subgroup of the WebAssembly CG is working on it.

Tail calls: functional languages rely on these. They allow calling a new function without adding a new frame to the stack. There is a proposal underway.

Once these are in place, JS frameworks and many compile-to-JS languages will be unlocked.

Outside the browser

This refers to everything that happens in systems and places other than your local machine. A really important part of this is the link - a very special kind of link. What makes it special is that people can link to pages without having to put them in any central registry and without needing to ask who the other person is. It is this ease of linking that formed global communities. However, there are two unaddressed problems.

Problem #1: How does a website know what code to deliver to your machine, given the OS and device you are using?
It is not practical to have a different version of the code for every possible device. The website has only one code base - the source code - which is translated to run on the user's machine. With portability, you can load code from people you don't know without knowing what kind of device they are using. This brings us to the second problem.

Problem #2: If you don't know the people whose web pages you load, there is a question of trust. The code from a web page can contain malicious code. This is where security comes into the picture. Security is implemented at the browser level, which filters out malicious content if it is detected. This makes WebAssembly just another tool in the browser toolbox - which it is.

Node.js

WebAssembly can bring full portability to Node.js. Node gives you most of the portability of JavaScript on the web, but there are cases where performance needs to be improved, which can be done via Node's native modules. These modules are written in languages such as C. If these native modules were written in WebAssembly, they wouldn't need to be compiled specifically for the target architecture. Full portability in Node would mean the exact same Node app running across different kinds of devices without needing to be recompiled. But this is not possible currently, as WebAssembly does not have direct access to the system's resources.

Portable interface

The Node core team would have to figure out the set of functions to expose and the API to use. It would be nice if this were something standard, not just specific to Node; if done right, the same API could be implemented for the web. There is also a proposal called package name maps, providing a mechanism to map a module name to a path to load the module from. This looks likely to happen and will unlock other use cases.

Other use cases outside the browser

Now let's look at the other use cases outside the browser.

CDNs, serverless, and edge computing

The code for your website resides on a server maintained by a service provider, who keeps the server running and makes sure the code is close to all the users of your website. Why use WebAssembly in these cases? Code in a process doesn't have boundaries: functions have access to all memory in that process and can call any other function. When running different services from different people in the same process, this is a problem. To make it work, a runtime needs to be created, which takes time and effort. A common runtime that could be used across different use cases would speed up development. There is no standard runtime for this yet; however, some runtime projects are underway.

Portable CLI tools

There are efforts to get WebAssembly used in more traditional operating systems. When this happens, you will be able to use things like portable CLI tools across different kinds of operating systems.

Internet of Things

Small IoT devices such as wearables have tight resource constraints: small processors and little memory. What would help here is a compiler like Cranelift and a runtime like wasmtime. Many of these devices are also very different from one another, and portability would address that too.

Clearly, the initial implementation of WebAssembly was indeed just an MVP, and there are many more improvements underway to make it faster and better. Will WebAssembly succeed in dominating all forms of software development? For in-depth information with diagrams, visit the Mozilla website.
Ebiten 1.8, a 2D game library in Go, is here with experimental WebAssembly support and newly added APIs

Testing WebAssembly modules with Jest [Tutorial]

Mozilla optimizes calls between JavaScript and WebAssembly in Firefox, making it almost as fast as JS to JS calls


Is Mozilla the most progressive tech organization on the planet right now?

Richard Gall
16 Oct 2018
7 min read
2018, according to The Economist, has been the year of the techlash. Scandals, protests, resignations, congressional testimonies - many of the largest companies in the world have been in the proverbial dock for a distinct lack of accountability. Together, these stories have created a narrative in which many are starting to question the benefits of unbridled innovation.

But Mozilla is one company that seems to have bucked that trend. In recent weeks there have been a series of news stories that suggest Mozilla is a company thinking differently about its place in the world, as well as the wider challenges technology poses to society. All of these come together to present Mozilla in a new light. Cynics might suggest that much of this is little more than some smart PR work, but it's a little unfair to dismiss what is some genuinely impressive work. So much has been happening across the industry that deserves scepticism at best and opprobrium at worst; to see a tech company stand out from the tiresome pattern of stories this year can only be a good thing.

Mozilla on education: technology, ethical code, and the humanities

Code ethics has become a big topic of conversation in 2018. And rightly so - with innovation happening at an alarming pace, it has become easy to make the mistake of viewing technology as a replacement for human agency, rather than something that emerges from it. When we talk about code ethics, it reminds us that technology is something built from the decisions and actions of thousands of different people.

It's for this reason that last week's news that Mozilla has teamed up with a number of organizations, including the Omidyar Network, to announce a brand new prize for computer science students feels so important. At a time when the likes of Mark Zuckerberg dance around any notion of accountability, peddling a narrative where everything is just a little bit beyond Facebook's orbit of control, the 'Responsible Computer Science Challenge' stands out. With $3.5 million up for grabs for smart computer science students, it's evidence that Mozilla is putting its money where its mouth is and making ethical decision making something which, for once, actually pays.

Mitchell Baker on the humanities and technology

Mitchell Baker's comments to the Guardian that accompanied the news also demonstrate a refreshingly honest perspective from a tech leader. "One thing that's happened in 2018," Baker said, "is that we've looked at the platforms, and the thinking behind the platforms, and the lack of focus on impact or result. It crystallised for me that if we have STEM education without the humanities, or without ethics, or without understanding human behaviour, then we are intentionally building the next generation of technologists who have not even the framework or the education or vocabulary to think about the relationship of STEM to society or humans or life."

Baker isn't, however, a crypto-luddite or an elitist who wants full stack developer classicists. Instead she's looking forward at the ways in which different disciplines can interact with and inform one another. It's arguably an intellectual form of DevOps: a way of bridging the gap between STEM skills and practices, and those rooted in the tradition of the humanities. The significance of this intervention shouldn't be understated. It opens up a dialogue within society and the tech industry that might get us to a place where ethics is simply part and parcel of what it means to build and design software, not an optional extra.
Mozilla's approach to internal diversity: dropping meritocracy

The respective cultures of organizations and communities across tech have been in the spotlight over the last few months. Witness the bitter furore over Linux's change to its community guidelines to see just how important definitions and guidelines are to the people within them. That's why Mozilla's move to drop 'meritocracy' from its guidelines on governance and leadership structures was a small yet significant move. It's simply another statement of intent from a company eager to help develop a culture more open and inclusive than the tech world has managed over the last three decades.

In a post published on the Mozilla blog at the start of October, Emma Irwin (D&I Strategy, Mozilla Contributors and Communities) and Larissa Shapiro (Head of Global Diversity & Inclusion at Mozilla) wrote that "Meritocracy does not consider the reality that tech does not operate on a level playing field."

The new governance proposal actually reflects Mozilla's apparent progressiveness pretty well. In it, the organization states that "the project also seeks to debias this system of distributing authority through active interventions that engage and encourage participation from diverse communities." While there has been some criticism of the change, it's important to note that the words used by organizations of this size do have an impact on how we frame and focus problems. From this perspective, Mozilla's decision could well be a vital small step in making tech more accessible and diverse.

The tech world needs to engage with political decision makers

Mozilla isn't just a 'progressive' tech company because of the content of its political beliefs. Instead, what's particularly important is how it appears to recognise that the problems technology faces and engages with are, in fact, much bigger than technology itself.

Just consider the actions of other tech leaders this year. Sundar Pichai didn't attend his congressional hearing, Jack Dorsey assured us that Twitter has safety at its heart while verifying neo-Nazis, and Mark Zuckerberg suggested that AI can fix the problems of election interference and fake news. The hubris has been staggering. Mozilla's leadership appears to be trying hard to avoid the same pitfalls.

We shouldn't be surprised that Mozilla actually embraced the idea of 2018's 'techlash'. The organization used the term in the title of a post directed at G20 leaders in August. Written alongside The Internet Society and the Web Foundation, it urged global leaders to "reinject hope back into technological innovation." Implicit in the post is an acknowledgement that the aims and goals of much of the tech industry - improving people's lives, making infrastructure more efficient - can't be achieved by the industry alone. It is a subtle stab at what might be considered hubris.

Taking on government and regulation

But this isn't to say Mozilla is completely in thrall to government and regulation. Most recently (16 October), Mozilla voiced its concerns about the decryption laws currently being debated in the Australian Parliament. The organization was clear, saying "this is at odds with the core principles of open source, user expectations, and potentially contractual license obligations." At the beginning of September, Mozilla also spoke out against EU copyright reform.
The organization argued that "article 13 will be used to restrict the freedom of expression and creative potential of independent artists who depend upon online services to directly reach their audience and bypass the rigidities and limitations of the commercial content industry." While opposition to EU copyright reform came from a range of voices - including those huge corporations that have come under scrutiny during the 'techlash' - Mozilla is, at least, consistent.

The key takeaway from Mozilla: let's learn the lessons of 2018's techlash

The techlash has undoubtedly caused a lot of pain for many this year. But the worst thing that could happen is for the tech industry to fail to learn the lessons that are emerging. Mozilla deserves credit for trying hard to properly understand the implications of what's been happening and for developing a deliberate vision of how to move forward.