
How-To Tutorials - Application Development

357 Articles

Understanding the Foundation of Protocol-oriented Design

Expert Network
30 Jun 2021
7 min read
When Apple announced Swift 2 at the Worldwide Developers Conference (WWDC) in 2015, it also declared that Swift was the world’s first protocol-oriented programming (POP) language. From the name, we might assume that POP is all about protocols; however, that would be a wrong assumption. POP is about much more than protocols; it is a new way not only of writing applications but also of thinking about programming. This article is an excerpt from the book Mastering Swift, 6th Edition by Jon Hoffman. In this article, we will discuss protocol-oriented design and how we can use protocols and protocol extensions to replace superclasses. We will look at how to define animal types for a video game in a protocol-oriented way.

Requirements

When we develop applications, we usually have a set of requirements to develop against. With that in mind, let’s define the requirements for the animal types that we will be creating in this article:

We will have three categories of animals: land, sea, and air.
Animals may be members of multiple categories. For example, an alligator can be a member of both the land and sea categories.
Animals may attack and/or move when they are on a tile that matches the categories they are in.
Animals will start off with a certain number of hit points, and if those hit points reach 0 or less, they will be considered dead.

POP Design

We will start off by looking at how we would design the animal types needed and the relationships between them. Figure 1 shows our protocol-oriented design.

Figure 1: Protocol-oriented design

In this design, we use three techniques: protocol inheritance, protocol composition, and protocol extensions.

Protocol inheritance

Protocol inheritance is where one protocol can inherit the requirements from one or more additional protocols. A protocol can inherit requirements from multiple protocols, whereas a class in Swift can have only one superclass. Protocol inheritance is extremely powerful because we can define several smaller protocols and mix and match them to create larger ones. Be careful not to create protocols that are too granular, though, because they will become hard to maintain and manage.

Protocol composition

Protocol composition allows types to conform to more than one protocol. With protocol-oriented design, we are encouraged to create multiple smaller protocols with very specific requirements. Protocol inheritance and composition are really powerful features, but they can also cause problems if used incorrectly. On their own they may not seem that powerful; however, when we combine them with protocol extensions, we have a very powerful programming paradigm. Let’s look at how powerful this paradigm is.

Protocol-oriented design: putting it all together

We will begin by writing the Animal superclass as a protocol:

protocol Animal {
    var hitPoints: Int { get set }
}

In the Animal protocol, the only item we are defining is the hitPoints property. If we were putting in all the requirements for an animal in a video game, this protocol would contain all the requirements common to every animal; for our example, we only need the hitPoints property. Next, we need to add an Animal protocol extension, which will contain the functionality that is common to all types that conform to the protocol.
Our Animal protocol extension contains the following code:

extension Animal {
    mutating func takeHit(amount: Int) {
        hitPoints -= amount
    }
    func hitPointsRemaining() -> Int {
        return hitPoints
    }
    func isAlive() -> Bool {
        return hitPoints > 0 ? true : false
    }
}

The Animal protocol extension contains the takeHit(), hitPointsRemaining(), and isAlive() methods. Any type that conforms to the Animal protocol will automatically inherit these three methods. Now let’s define our LandAnimal, SeaAnimal, and AirAnimal protocols. These protocols will define the requirements for the land, sea, and air animals respectively:

protocol LandAnimal: Animal {
    var landAttack: Bool { get }
    var landMovement: Bool { get }
    func doLandAttack()
    func doLandMovement()
}

protocol SeaAnimal: Animal {
    var seaAttack: Bool { get }
    var seaMovement: Bool { get }
    func doSeaAttack()
    func doSeaMovement()
}

protocol AirAnimal: Animal {
    var airAttack: Bool { get }
    var airMovement: Bool { get }
    func doAirAttack()
    func doAirMovement()
}

These three protocols only contain the functionality needed for their particular type of animal. Each of them contains only four requirements, which makes our protocol design much easier to read and manage. The design is also safer because the functionality for the various animal types is isolated in its own protocol rather than being embedded in a giant superclass. We are also able to avoid the use of flags to define the animal category and, instead, define the category of an animal by the protocols it conforms to. In a full design, we would probably add some protocol extensions for each of the animal types, but we do not need them for our example here.

Now, let’s look at how we would create our Lion and Alligator types using protocol-oriented design:

struct Lion: LandAnimal {
    var hitPoints = 20
    let landAttack = true
    let landMovement = true

    func doLandAttack() {
        print("Lion Attack")
    }
    func doLandMovement() {
        print("Lion Move")
    }
}

struct Alligator: LandAnimal, SeaAnimal {
    var hitPoints = 35
    let landAttack = true
    let landMovement = true
    let seaAttack = true
    let seaMovement = true

    func doLandAttack() {
        print("Alligator Land Attack")
    }
    func doLandMovement() {
        print("Alligator Land Move")
    }
    func doSeaAttack() {
        print("Alligator Sea Attack")
    }
    func doSeaMovement() {
        print("Alligator Sea Move")
    }
}

Notice that we specify that the Lion type conforms to the LandAnimal protocol, while the Alligator type conforms to both the LandAnimal and SeaAnimal protocols. As we saw previously, having a single type conform to multiple protocols is called protocol composition and is what allows us to use smaller protocols rather than one giant monolithic superclass. Both the Lion and Alligator types originate from the Animal protocol; therefore, they will inherit the functionality added with the Animal protocol extension. If our animal type protocols also had extensions, they would inherit the functionality added by those extensions as well. With protocol inheritance, composition, and extensions, our concrete types contain only the functionality needed by the particular animal types that they conform to.

Since the Lion and Alligator types originate from the Animal protocol, we can use polymorphism. Let’s look at how this works:

var animals = [Animal]()
animals.append(Alligator())
animals.append(Alligator())
animals.append(Lion())

for (index, animal) in animals.enumerated() {
    if let _ = animal as? AirAnimal {
        print("Animal at \(index) is Air")
    }
    if let _ = animal as? LandAnimal {
        print("Animal at \(index) is Land")
    }
    if let _ = animal as? SeaAnimal {
        print("Animal at \(index) is Sea")
    }
}

In this example, we create an array named animals that will hold Animal types. We then create two instances of the Alligator type and one instance of the Lion type and add them to the animals array. Finally, we use a for-in loop to iterate over the array and print out each animal's type based on the protocols that the instance conforms to.

Upgrade your knowledge and become an expert in the latest version of the Swift programming language with Mastering Swift 5.3, 6th Edition by Jon Hoffman.

About the author

Jon Hoffman has over 25 years of experience in the field of information technology. He has worked in the areas of system administration, network administration, network security, application development, and architecture. Currently, Jon works as an Enterprise Software Manager for Syn-Tech Systems.


Eric Evans at Domain-Driven Design Europe 2019 explains the different bounded context types and their relation with microservices

Bhagyashree R
17 Dec 2019
9 min read
The fourth edition of the Domain-Driven Design Europe conference was held earlier this year, from January 31 to February 1, in Amsterdam. Eric Evans, who is known for his book Domain-Driven Design: Tackling Complexity in the Heart of Software, kick-started the conference with a great talk titled "Language in Context". In his keynote, Evans explained some key domain-driven design concepts, including subdomains, context maps, and bounded context. He also introduced some new concepts, including bubble context, quaint context, patch on patch context, and more. He further talked about the relationship between bounded contexts and microservices.

Want to learn domain-driven design concepts in a practical way? Check out our book, Hands-On Domain-Driven Design with .NET Core by Alexey Zimarev. This book will guide you in involving business stakeholders when choosing the software you are planning to build for them. By figuring out the temporal nature of behavior-driven domain models, you will be able to build leaner, more agile, and modular systems.

What is a bounded context?

Domain-driven design is a software development approach that focuses on the business domain or subject area. To solve problems related to that domain, we create domain models, which are abstractions describing selected aspects of a domain. The terminology and concepts related to these models only make sense within a context. In domain-driven design, this is called a bounded context.

Bounded context is one of the most important concepts in domain-driven design. Evans explained that a bounded context is basically a boundary within which we eliminate any kind of ambiguity. It is a part of the software where particular terms, definitions, and rules apply in a consistent way. Another important property of the bounded context is that a developer and other people in the team should be able to easily see that “boundary”: they should know whether they are inside or outside of it.

Within this bounded context, we have a canonical context in which we explore different domain models, refine our language and develop a ubiquitous language, and try to focus on the core domain. Evans says that though this is a very “tidy” way of creating software, it is not what we see in reality. “Nothing is that tidy! Certainly, none of the large software systems that I have ever been involved with,” he says. He further added that though the concept of bounded context has grabbed the interest of many within the community, it is often “misinterpreted.” Evans has noticed that teams often confuse bounded contexts with subdomains. The reason behind this confusion is that in an “ideal” scenario they should coincide. Also, large corporations are known for reorganizations leading to changes in processes and responsibilities. This can result in two teams having to work in the same bounded context, with an increased risk of ending up with a “big ball of mud.”

The different ways of describing bounded contexts

In their paper, Big Ball of Mud, Brian Foote and Joseph Yoder describe the big ball of mud as “a haphazardly structured, sprawling, sloppy, duct-tape and baling wire, spaghetti code jungle.” Some of the properties Evans uses to describe it are incomprehensible interdependencies, inconsistent definitions, incomplete coverage, and riskiness to change. Needless to say, you would want to avoid the big ball of mud by all accounts. However, if you find yourself in such a situation, Evans says that rebuilding the system from the ground up is not an ideal solution.
Instead, he suggests going for something called the bubble context, in which you create a new model that works well next to the already existing models. While the business is run by the big ball of mud, you can do an elegant design within that bubble.

Another context that Evans explained was the mature productive context. It is the part of the software that is producing value but is probably built on concepts in the core domain that are outdated. He explained this particular context with the example of a garden: a “tidy young garden” that has been recently planted looks great, but you do not get much value from it. It is only a few months later, when the plants start bearing fruit, that you get the harvest. Along similar lines, developers should plant seeds with the goal of creating order, but also embrace the chaotic abundance that comes with a mature system.

Evans coined another term, quaint context, for a context that one would consider "legacy". He describes it as an old context that still does useful work but is implemented using old-fashioned technology or is not aligned with the current domain vision. Another name he suggests is patch on patch context, which also does something useful as it is, but whose numerous interdependencies “make change risky and expensive.” Apart from these, there are many other types of context that we do not explicitly label.

When you are establishing a boundary, it is good practice to analyze the different subdomains and check which ones are generic and which are specific to the business. Here he introduced the generic subdomain context. “Generic here means something that everybody does or a great range of businesses and so forth do. There’s nothing special about our business and we want to approach this in a conventional way. And to do that the best way I believe is to have a context, a boundary in which we address that problem,” he explains. Another generic context Evans mentioned was generic off the shelf (OTS), which can make setting the boundary easier as you are getting something off the shelf.

Bounded context types in the microservice architecture

Evans sees microservices as the biggest opportunity and risk the software engineering community has had in a long time. Looking at the hype around microservices, it is tempting to jump on the bandwagon, but Evans suggests that it is important to look at the capabilities microservices give us to meet the needs of the business. A common misconception is that microservices are bounded contexts, which Evans calls an oversimplification. He further shared four kinds of context that involve microservices:

Service internal

The first one is service internal, which describes how a service actually works. Evans believes this is the type of context people think of when they say a microservice is a bounded context. In this context, a service is isolated from other services and handled by an autonomous team. Though this definitely fits the definition of a bounded context, it is not the only aspect of microservices, Evans notes. If we only use this type, we end up with a bunch of services that don't know how to interact with each other.

API of service

The API of service context describes how a service talks to other services. In this context as well, an API is built by an autonomous team, and anyone consuming that API is required to conform to it. This implies that development decisions are pretty much dictated by the direction of data flow; however, Evans thinks there are other alternatives.
Highly influential groups may create an API that other teams must conform to irrespective of the direction the data is flowing.

Cluster of codesigned services

The cluster of codesigned services context refers to a cluster of services designed in close collaboration. Here, the bounded context consists of a cluster of services designed to work with each other to accomplish some task. Evans remarks that the internals of the individual services could be very different from the models used in the API.

Interchange context

The final type is the interchange context. According to Evans, the interaction between services must also be modeled. The model describes the messages and definitions to use when services interact with other services. He further notes that there are no services in this context, as it is all about messages, schemas, and protocols (a small sketch at the end of this article illustrates the idea).

How legacy systems can participate in a microservices architecture

Coming back to legacy systems and how they can participate in a microservices environment, Evans introduced a new concept called the Exposed Legacy Asset. He suggests creating an interface that looks like a microservice and interacts with other microservices, but internally interacts with a legacy system. This helps us avoid corrupting the new microservices we build and also keeps us from having to change the legacy system.

In the end, looking back at 15 years of his book, Domain-Driven Design, he said that we may now need a new definition of domain-driven design. A challenge he sees is how tight this definition should be. He believes that a definition should share a common vision and language, but also be flexible enough to encourage innovation and improvement. He doesn’t want domain-driven design to become a club of happy members. He instead hopes for an intellectually honest community of practitioners who are “open to the possibility of being wrong about things.” If you tried to take the domain-driven design route and you failed at some point, it is important to question and reexamine. Finally, he summarized by defining domain-driven design as a set of guiding principles and heuristics. The key principles are focusing on the core domain, exploring models in a creative collaboration of domain experts and software experts, and speaking a ubiquitous language within a bounded context. “Let's practice DDD together, shake it up and renew,” he concludes.

If you want to put these and other domain-driven design principles into practice, grab a copy of our book, Hands-On Domain-Driven Design with .NET Core by Alexey Zimarev. This book will help you discover and resolve domain complexity together with business stakeholders and avoid common pitfalls when creating the domain model. You will further study the concepts of bounded context and aggregate, and much more.

Gabriel Baptista on how to build high-performance software architecture systems with C# and .Net Core
You can now use WebAssembly from .NET with Wasmtime!
Exploring .Net Core 3.0 components with Mark J. Price, a Microsoft specialist
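As a rough illustration of the ideas above, here is a small, hypothetical TypeScript sketch (the names and fields are made up for the example, not taken from Evans' talk): the Sales and Shipping contexts each keep their own model of an order, and an interchange message type defines the only schema the two share.

// Sales bounded context: here "order" means a quote-to-cash document.
namespace Sales {
  export interface Order {
    orderId: string;
    customerId: string;
    totalAmount: number;   // in cents
    discountCode?: string;
  }
}

// Shipping bounded context: here "order" means something to pack and deliver.
namespace Shipping {
  export interface Order {
    orderId: string;
    parcels: { weightKg: number; destination: string }[];
  }
}

// Interchange context: no services here, only the message schema the two
// contexts agree on when they talk to each other.
interface OrderPlacedMessage {
  schemaVersion: 1;
  orderId: string;
  destination: string;
  lineItems: { sku: string; quantity: number }[];
}

// A translator at the boundary maps the Sales model onto the shared message,
// so neither context leaks its internal model into the other.
function toOrderPlacedMessage(
  order: Sales.Order,
  destination: string,
  lineItems: { sku: string; quantity: number }[]
): OrderPlacedMessage {
  return { schemaVersion: 1, orderId: order.orderId, destination, lineItems };
}

In a real system each context would live in its own module or service; the point of the sketch is simply that only OrderPlacedMessage crosses the boundary.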


Microsoft technology evangelist Matthew Weston on how Microsoft PowerApps is democratizing app development [Interview]

Bhagyashree R
04 Dec 2019
10 min read
In recent years, we have seen a wave of app-building tools that enable users to be creators. One such powerful tool is Microsoft PowerApps, a full-featured low-code/no-code platform. The platform aims to empower “all developers” to quickly create business web and mobile apps that fit best with their use cases; Microsoft defines “all developers” as citizen developers, IT developers, and pro developers. To understand what exactly PowerApps is and what its use cases are, we sat down with Matthew Weston, the author of Learn Microsoft PowerApps. Weston was kind enough to share a few tips for creating user-friendly and attractive UI/UX. He also shared his opinion on the latest updates in the Power Platform, including AI Builder, Portals, and more.

Further learning: Weston’s book, Learn Microsoft PowerApps, will guide you in creating powerful and productive apps that will help you transform old and inefficient processes and workflows in your organization. In the book, you’ll explore a variety of built-in templates and understand the key differences between types of PowerApps, such as canvas and model-driven apps. You’ll learn how to generate and integrate apps directly with SharePoint, gain an understanding of PowerApps key components such as connectors and formulas, and much more.

Microsoft PowerApps: what is it and why is it important

Microsoft PowerApps emerged as a successor to InfoPath, a tool for creating user-designed form templates without coding. Introduced in 2016, PowerApps aims to democratize app development by allowing users to build cross-device custom business apps without writing code, while providing an extensible platform to professional developers. Weston explains, “For me, PowerApps is a solution to a business problem. It is designed to be picked up by business users and developers alike to produce apps that can add a lot of business value without incurring large development costs associated with completely custom developed solutions.”

When we asked how his journey with PowerApps started, he said, “I personally ended up developing PowerApps as, for me, it was an alternative to using InfoPath when I was developing for SharePoint. From there I started to explore more and more of its capabilities and began to identify more and more in terms of use cases for the application. It meant that I didn’t have to worry about controls, authentication, or integration with other areas of Office 365.”

Microsoft PowerApps is a building block of the Power Platform, a collection of products for business intelligence, app development, and app connectivity. Along with PowerApps, this platform includes Power BI and Power Automate (earlier known as Flow). These sit on top of the Common Data Service (CDS), which stores all of your structured business data. On top of the CDS, you have over 280 data connectors for connecting to popular services and on-premises data sources such as SharePoint, SQL Server, Office 365, and Salesforce, among others. All these tools are designed to work together seamlessly: you can analyze your data with Power BI, act on the data through web and mobile experiences with PowerApps, or automate tasks in the background using Power Automate.

Explaining its importance, Weston said, “Like all things within Office 365, using a single application allows you to find a good solution. Combining multiple applications allows you to find a great solution, and PowerApps fits into that thought process.
Within the Power Platform, you can really use PowerApps and Power Automate together in a way that provides a high-quality user interface with a powerful automation platform doing the heavy processing.”

“Businesses should invest in PowerApps in order to maximize on the subscription costs which are already being paid as a result of holding Office 365 licenses. It can be used to build forms and apps which will automatically integrate with the rest of Office 365, along with having the mobile app to promote flexible working,” he adds.

What skills do you need to build with PowerApps

PowerApps is said to be at the forefront of the “Low Code, More Power” revolution. This essentially means that anyone can build a business-centric application; it doesn’t matter whether you are in finance, HR, or IT. You can use your existing knowledge of PowerPoint or Excel concepts and formulas to build business apps. Weston shared his perspective: “I believe that PowerApps empowers every user to be able to create an app, even if it is only quite basic. Most users possess the ability to write formulas within Microsoft Excel, and those same skills can be transferred across to PowerApps to write formulas and create logic.”

He adds, “Whilst the above is true, I do find that you need to have a logical mind in order to really create rich apps using the Power Platform, and this often comes more naturally to developers. Saying that, I have met people who are NOT developers, and have actually taken to the platform in a much better way.”

Since PowerApps is a low-code platform, you might be wondering what value it holds for developers. We asked Weston what he thinks about developers investing time in learning PowerApps when there are cross-platform development frameworks like React Native or Ionic. Weston said, “For developers, and I am from a development background, PowerApps provides a quick way to be able to create apps for your users without having to write code. You can use formulae and controls which have already been created for you, reducing the time that you need to invest in development, and instead you can spend the time creating solutions to business problems.”

Along with enabling developers to rapidly publish apps at scale, PowerApps does a lot of the heavy lifting around app architecture and gives you real-time feedback as you make changes to your app. This year, Microsoft also released the PowerApps component framework, which enables developers to code custom controls and use them in PowerApps. You also have tooling to use alongside the PowerApps component framework for a better end-to-end development experience: the new PowerApps CLI and Visual Studio plugins.

On the new PowerApps capabilities: AI Builder, Portals, and more

At this year’s Microsoft Ignite, the Power Platform team announced almost 400 improvements and new features to the Power Platform. The new AI Builder allows you to build and train your own AI model or choose from pre-built models. Another feature is PowerApps Portals, using which organizations can create portals and share them with both internal and external users. We also have UI flows, currently a preview feature that brings Robotic Process Automation (RPA) capabilities to Power Automate.

Weston said, “I love to use the Power Platform for integration tasks as much as creating front end interfaces, especially around Power Automate. I have created automations which have taken two systems, which won’t natively talk to each other, and process data back and forth without writing a single piece of code.”

Weston is already excited to try UI flows: “I’m really looking forward to getting my hands on UI flows and seeing what I can create with those, especially for interacting with solutions which don’t have the usual web service interfaces available to them.” However, he does have some concerns regarding RPA: “My only reservation about this is that Robotic Process Automation, which this is effectively doing, is usually a deep specialism, so I’m keen to understand how this fits into the whole citizen developer concept,” he shared.

Some tips for building secure, user-friendly, and optimized PowerApps

When it comes to application development, security and responsiveness are always at the top of the priority list. Weston suggests, “Security is always my number one concern, as that drives my choice for the data source as well as how I’m going to structure the data once my selection has been made. It is always harder to consider security at the end, when you try to shoehorn a security model onto an app and data structure which isn’t necessarily going to accept it.”

“The key thing is to never take your data for granted; secure everything at the data layer first and foremost, and then carry those considerations through into the front end. Office 365 employs the concept of security trimming, whereby if you don’t have access to something then you don’t see it. This will automatically apply to the data, but the same should also be done to the screens. If a user shouldn’t be seeing the admin screen, don’t even show them a link to get there in the first place,” he adds.

For responsiveness, he suggests, “...if an app is slow to respond then the immediate perception about the app is negative. There are various things that can be done if working directly with the data source, such as pulling data into a collection so that data interactions take place locally and then sync in the background.”

After security, another important aspect of building an app is a user-friendly and attractive UI and UX. Sharing a few tips for enhancing an app's UI and UX, Weston said, “When creating your user interface, try to find the balance between images and textual information. Quite often I see PowerApps created which are just line after line of text, and that immediately switches me off.” He adds, “It could be the best app in the world that does everything I’d ever want, but if it doesn’t grip me visually then I probably won’t be using it. Likewise, I’ve seen apps go the other way and have far too many images and not enough text. It’s a fine balance, and one that’s quite hard.”

Sharing a few ways to optimize app performance, Weston said, “Some basic things which I normally do is to load data on the OnVisible event for my first screen rather than loading everything on the App OnStart. This means that the app can load and can then be pulling in data once something is on the screen. This is generally the way that most websites work now, they just get something on the screen to keep the user happy and then pull data in.”

“Also, give consideration to the number of calls back and forth to the data source as this will have an impact on the app performance. Only read and write when I really need to, and if I can, I pull as much of the data into collections as possible so that I have that temporary cache to work with,” he added.
About the author

Matthew Weston is an Office 365 and SharePoint consultant based in the Midlands in the United Kingdom. Matthew is a passionate evangelist of Microsoft technology, blogging and presenting about Office 365 and SharePoint at every opportunity. He has been creating and developing with PowerApps since it became generally available at the end of 2016. Usually, Matthew can be seen presenting within the community about PowerApps and Flow at SharePoint Saturdays, PowerApps and Flow User Groups, and Office 365 and SharePoint User Groups. Matthew is a Microsoft Certified Trainer (MCT) and a Microsoft Certified Solutions Expert (MCSE) in Productivity.

Check out Weston’s latest book, Learn Microsoft PowerApps, on PacktPub. This book is a step-by-step guide that will help you create lightweight business mobile apps with PowerApps. Along with exploring the different built-in templates and types of PowerApps, you will generate and integrate apps directly with SharePoint. The book will further help you understand how PowerApps can use several Microsoft Power Automate and Azure functionalities to improve your applications. Follow Matthew Weston on Twitter: @MattWeston365.

Denys Vuika on building secure and performant Electron apps, and more
Founder & CEO of Odoo, Fabien Pinckaers discusses the new Odoo 13 framework
Why become an advanced Salesforce administrator: Enrico Murru, Salesforce MVP, Solution and Technical Architect [Interview]


GitHub Universe 2019: GitHub for mobile, GitHub Archive Program and more announced amid protests against GitHub’s ICE contract

Vincy Davis
14 Nov 2019
4 min read
Yesterday, GitHub commenced its popular product conference, GitHub Universe 2019, in San Francisco. The two-day annual conference celebrates GitHub’s 40+ million developers and their contributions to the open source community. Day 1 of the conference brought many interesting announcements, such as GitHub for mobile, the GitHub Archive Program, and more. Let’s look at some of the major announcements from the GitHub Universe 2019 conference.

GitHub for mobile (iOS beta)

GitHub for mobile is a beta app that aims to give users the flexibility to work and interact with their team wherever they are. It lets users share feedback on a design discussion or review code in a lightweight environment. The native app adapts to any screen size and also works in dark mode, based on the device preference. Currently available only on iOS, the GitHub team says an Android version will follow soon.

https://twitter.com/italolelis/status/1194929030518255616
https://twitter.com/YashSharma___/status/1194899905552105472

GitHub Archive Program

“Our world is powered by open source software. It’s a hidden cornerstone of our civilization and the shared heritage of all humanity. The mission of the GitHub Archive Program is to preserve it for generations to come,” states the official GitHub blog. GitHub has partnered with the Stanford Libraries, the Long Now Foundation, the Internet Archive, the Software Heritage Foundation, Piql, Microsoft Research, and the Bodleian Library to preserve all the available open source code in the world. It will safeguard the data by storing multiple copies across various data formats and locations, including a “very-long-term archive” called the GitHub Arctic Code Vault, which is designed to last at least 1,000 years.

https://twitter.com/vithalreddy/status/1194846571835183104
https://twitter.com/sonicbw/status/1194680722856042499

Read More: GitHub Satellite 2019 focuses on community, security, and enterprise

Automating workflows from code to cloud: general availability of GitHub Actions

Last year, at the GitHub Universe conference, GitHub Actions was announced in beta. This year, GitHub has made it generally available to all users. In the past year, GitHub Actions has received contributions from developers at AWS, Google, and others. Actions has now developed into a new standard for building and sharing automation for software development, including a CI/CD solution and native package management. GitHub has also announced the free use of self-hosted runners and artifact caching.

https://twitter.com/qmixi/status/1194379789483704320
https://twitter.com/inversemetric/status/1194668430290345984

General availability of GitHub Packages

In May this year, GitHub announced the beta version of the GitHub Package Registry, its new package management service. Later, in September, after gathering community feedback, GitHub announced that the service had gained proxy support for the primary npm registry. Since its launch, GitHub Packages has hosted over 30,000 unique packages serving the needs of over 10,000 organizations. Now, at GitHub Universe 2019, the GitHub team has announced the general availability of GitHub Packages and added support for using the GitHub Actions token.

https://twitter.com/Chris_L_Ayers/status/1194693253532020736

These were some of the major announcements from day 1 of the GitHub Universe 2019 conference; head over to GitHub’s blog for more details of the event.
Tech workers protest against GitHub’s ICE contract

Major product announcements aside, one thing that garnered a lot of attention at the GitHub Universe conference was the protest conducted by GitHub workers, along with the Tech Workers Coalition, to oppose GitHub’s $200,000 contract with Immigration and Customs Enforcement (ICE). Many high-profile speakers dropped out of the GitHub Universe 2019 conference, and at least five GitHub employees have resigned from GitHub over its support for ICE.

https://twitter.com/lily_dart/status/1194216293668401152

Read More: Largest ‘women in tech’ conference, Grace Hopper Celebration, renounces Palantir as a sponsor due to concerns over its work with the ICE

Yesterday at the event, the protesting tech workers brought a giant cage to symbolize how ICE uses cages to detain migrant children.

https://twitter.com/githubbers/status/1194662876587233280

Tech workers around the world have extended their support to the protest against GitHub.

https://twitter.com/ConMijente/status/1194665524191318016
https://twitter.com/CoralineAda/status/1194695061717450752
https://twitter.com/maybekatz/status/1194683980877975552

GitHub along with Weights & Biases introduced CodeSearchNet challenge evaluation and CodeSearchNet Corpus
GitHub acquires Semmle to secure open-source supply chain; attains CVE Numbering Authority status
GitHub Package Registry gets proxy support for the npm registry
GitHub updates to Rails 6.0 with an incremental approach
GitHub now supports two-factor authentication with security keys using the WebAuthn API


Node.js 13 releases with an upgraded V8, full ICU support, stable Worker Threads API and more

Fatema Patrawala
23 Oct 2019
4 min read
Yesterday was an exciting day for Node.js developers, as the Node.js foundation announced the transition of Node.js 12 to Long Term Support (LTS) alongside the release of Node.js 13. Node.js 12 becomes the newest LTS release, joining versions 10 and 8. This release marks the transition of the Node.js 12.x line into LTS with the codename 'Erbium'. The 12.x release line now moves into "Active LTS" and will remain so until October 2020, after which it will move into "Maintenance" until its end of life in April 2022.

The new Node.js 13 release delivers faster startup and better default heap limits. It includes updates to V8, TLS, and llhttp, and new features such as a diagnostic report, a bundled heap dump capability, and updates to Worker Threads, N-API, and more.

Key features in Node.js 13

Let us take a look at the key features included in Node.js 13.

V8 gets an upgrade to V8 7.8

This release ships with V8 7.8. The new version of the V8 JavaScript engine brings performance tweaks and improvements that keep Node.js up to date with ongoing improvements in the language and runtime.

Full ICU enabled by default in Node.js 13

As of Node.js 13, full-icu is the default build, which means hundreds of other locales are now supported out of the box. This simplifies the development and deployment of applications for non-English deployments.

Stable workers API

The Worker Threads API is now a stable feature in both Node.js 12 and Node.js 13. While Node.js already performs well with its single-threaded event loop, there are use cases where additional threads can be leveraged for better results (a small example follows at the end of this article).

New compiler and platform support

Node.js and V8 continue to embrace newer C++ features and take advantage of newer compiler optimizations and security enhancements. With the release of Node.js 13, the codebase now requires a minimum of version 10 of the macOS (OS X) development tools and version 7.2 of the AIX operating system. In addition, there has been progress on supporting Python 3 for building Node.js. Systems that have both Python 2 and Python 3 installed will still use Python 2; however, systems with only Python 3 should now be able to build using Python 3.

Developers discuss pain points in Node.js 13

On Hacker News, users discussed various pain points in Node.js 13 and some of the functionality missing from this release. One user commented, “To save you the clicks: Node.js 13 doesn't support top-level await. Node includes V8 7.8, released Sep 27. Top-level await merged into V8 on Sep 24, but didn't make it in time for the 7.8 release.” A response to this comment came from the V8 team: “TLA is only in modules. Once node supports modules, it will also have TLA. We're also pushing out a version with 7.9 fairly soonish.”

Other users discussed how Node.js performs with TypeScript: “I've been using node with typescript and it's amazing. VERY productive. The key thing is you can do a large refactoring without breaking anything. The biggest challenge I have right now is actually the tooling. Intellij tends to break sometimes. I'm using lerna for a monorepo with sub-modules and it's buggy with regular npm. For example 'npm audit' doesn't work. I might have to migrate to yarn…”

If you are interested to know more about this release, check out the official Node.js blog post as well as the GitHub page for the release notes.
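As a rough illustration of the now-stable Worker Threads API mentioned above, here is a minimal TypeScript sketch (the file name and the CPU-bound loop are made up for the example) that offloads a computation to a worker thread:

// worker-demo.ts: the same file runs as both the main thread and the worker.
import { Worker, isMainThread, parentPort, workerData } from "worker_threads";

if (isMainThread) {
  // Main thread: spawn a worker and hand it input via workerData.
  const worker = new Worker(__filename, { workerData: { upTo: 50_000_000 } });
  worker.on("message", (sum: number) => {
    console.log(`Sum computed in worker: ${sum}`);
  });
  worker.on("error", (err) => console.error(err));
} else {
  // Worker thread: do the CPU-bound work without blocking the main event loop.
  const { upTo } = workerData as { upTo: number };
  let sum = 0;
  for (let i = 1; i <= upTo; i++) {
    sum += i;
  }
  parentPort?.postMessage(sum);
}

Compile the file to JavaScript (or run it through a TypeScript-aware loader) before executing it with node; the point is simply that the heavy loop runs off the main event loop.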
The OpenJS Foundation accepts NVM as its first new incubating project since the Node.js Foundation and JSF merger
12 Visual Studio Code extensions that Node.js developers will love [Sponsored by Microsoft]
5 reasons Node.js developers might actually love using Azure [Sponsored by Microsoft]
Introducing Node.js 12 with V8 JavaScript engine, improved worker threads, and much more
Google is planning to bring Node.js support to Fuchsia


The new WebSocket Inspector will be released in Firefox 71

Fatema Patrawala
17 Oct 2019
4 min read
On Tuesday, the Firefox DevTools team announced that the new WebSocket (WS) inspector will be available in Firefox 71. It is already available to developers in Firefox Developer Edition.

The WebSocket API is used to create a persistent connection between a client and a server. Because data can be sent and received at any time, the API is used mainly in applications requiring real-time communication. Although it is possible to work directly with the WS API, some existing libraries come in handy and help save time: they can help with connection failures, proxies, authentication and authorization, scalability, and much more. The WS inspector in Firefox DevTools currently supports Socket.IO and SockJS, with support for more libraries still a work in progress.

Key features included in the Firefox WebSocket Inspector

The WebSocket Inspector is part of the existing Network panel UI in DevTools. It was already possible to filter the panel's content for opened WS connections, but now you can also see the actual data transferred through WS frames. The WS UI offers a fresh new Messages panel that can be used to inspect WS frames sent and received over the selected WS connection. Data and Time columns are visible by default, and you can customize the interface to see more columns by right-clicking on the header. The WS inspector currently supports the following WS protocols: plain JSON, Socket.IO, and SockJS, with SignalR and WAMP to be supported soon. You can also use the pause/resume button in the Network panel toolbar to stop intercepting WS traffic. The Firefox team is still working on a few things for this release, for example a binary payload viewer, an indication of closed connections, more protocols such as SignalR and WAMP, and exporting WS frames. A short example of the kind of plain-JSON traffic the inspector displays appears at the end of this article.

For developers, this is a major improvement, and the community is really happy with the news. One user comments on Reddit, “Finally! Have been stuck rolling with Chrome whenever I'm debugging websocket issues until now, because it's just so damn useful to see the exact messages sent and received.” Another user commented, “This came at the most perfect time... trying to interface with a Socket.IO server from a Flutter app is difficult without tools to really look at the internals and see what’s going on.”

Some also feel that with such improvements Firefox may start to chip away at Chromium's current dominance. One comment reads, “I hope that in improving its dev tooling with things like WS inspection, Firefox starts to turn the tide from the Chromium's current dominance. Pleasing webdevs seems to be the key to winning browser wars. The general pattern is, the devs switch to their preferred browser. When building sites, they do all their build testing against their favourite browser, and only make sure it functions on other browsers (however poorly) as an afterthought. Then everyone else switches to suit, because it's a better experience. It happened when IE was dominant (partly because of dodgy business practices, but also partly because ActiveX was more powerful than early JS). But then Firefox was faster and had [better] devtools and add-ons, so the devs switched to Firefox and everyone followed suit. Then Chrome came onto the scene as a faster browser with even better devtools, and now Chromium+forks is over three quarters of the browser market share. A browser monopoly is bad for the web ecosystem, no matter what browser happens to be dominant.”

To know more about this news, check out the official announcement on the Firefox blog.
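As a rough illustration of the plain-JSON style of traffic the inspector can display, here is a minimal browser-side TypeScript sketch (the URL and message shape are made up for the example):

// Open a WS connection; each JSON frame sent or received here would show up
// as a row in the inspector's Messages panel.
const socket = new WebSocket("wss://example.com/updates");

socket.addEventListener("open", () => {
  // Sent frames appear in the panel with their payload and timestamp.
  socket.send(JSON.stringify({ type: "subscribe", channel: "prices" }));
});

socket.addEventListener("message", (event: MessageEvent<string>) => {
  // Received frames can be expanded in the panel to inspect the JSON payload.
  const frame = JSON.parse(event.data) as { type: string; payload?: unknown };
  console.log("frame received:", frame.type, frame.payload);
});

socket.addEventListener("close", () => {
  console.log("connection closed");
});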
Firefox 69 allows default blocking of third-party tracking cookies and cryptomining for all users
Cloudflare and Google Chrome add HTTP/3 and QUIC support; Mozilla Firefox soon to follow suit
Mozilla brings back Firefox’s Test Pilot Program with the introduction of Firefox Private Network Beta

ReactOS 0.4.12 releases with kernel improvements, Intel e1000 NIC driver support, and more

Bhagyashree R
25 Sep 2019
2 min read
Earlier this week, the ReactOS team announced the release of ReactOS 0.4.12. The release comes with a number of kernel improvements, Intel e1000 NIC driver support, font improvements, and more.

Key updates in ReactOS 0.4.12

Kernel updates

The filesystem infrastructure of ReactOS 0.4.12 has received quite a few improvements to enable the use of Microsoft filesystem drivers. The team has also worked on the common cache module, which has deep ties to the memory manager, improved device power management, fixed support for PXE booting, and overhauled the write-protection functionality.

Window snapping

ReactOS 0.4.12 comes with support for window snapping, so users can now align windows to the sides of the screen, or maximize and minimize them, by dragging in specific directions. This release also adds the keyboard shortcuts that accompany the feature.

Intel e1000 NIC driver

ReactOS 0.4.12 ships a new driver for Intel e1000 Network Interface Cards (NICs) out of the box, so end users no longer need to manually find and install one.

Improvements related to fonts

In ReactOS 0.4.12, font rendering is made more robust and correct. This release fixes a series of problems that badly affected text rendering for buttons in a range of applications, from iTunes to various .NET applications.

User-mode DLLs

In this release, the team has made a range of improvements to user-mode components. The common controls (comctl) library, which most Windows applications use to draw generic user interface elements, received a long list of fixes, even if the changelog for them admittedly “reads extremely dryly.” Other updates include drivers for MIDI instruments and an animated rotation bar in the startup/shutdown dialog.

Check out the official announcement to know more about ReactOS 0.4.12 in detail.

ReactOS 0.4.11 is now out with kernel improvements, manifests support, and more!
Btrfs now boots ReactOS, a free and open-source alternative for Windows NT
ReactOS version 0.4.9 released with Self-hosting and FastFAT crash fixes
Understanding network port numbers, TCP, UDP, and ICMP on an operating system
Google’s secret Operating System ‘Fuchsia’ will run Android Applications: 9to5Google Report


Red Hat announces CentOS Stream, a “developer-forward distribution” jointly with the CentOS Project

Savia Lobo
25 Sep 2019
3 min read
On September 24, just after the much-awaited CentOS 8 was released, the Red Hat community, in agreement with the CentOS Project, announced a new addition to the CentOS Linux family called CentOS Stream.

CentOS Stream is an upstream development platform for ecosystem developers. It is a single, continuous stream of content, updated several times daily, encompassing the latest and greatest from the RHEL codebase. It is like having a view into what the next version of RHEL will look like, available to a much broader community than just a beta or "preview" release. Chris Wright, Red Hat's CTO, says CentOS Stream is "a developer-forward distribution that aims to help community members, Red Hat partners, and others take full advantage of open source innovation within a more stable and predictable Linux ecosystem. It is a parallel distribution to existing CentOS."

With previous CentOS releases, developers had no advance view of what was coming in RHEL. Because the CentOS Stream project sits between the Fedora Project and RHEL in the RHEL development process, it will provide a "rolling preview" of future RHEL kernels and features. This enables developers to stay one or two steps ahead of what’s coming in RHEL. “CentOS Stream is parallel to existing CentOS builds; this means that nothing changes for current users of CentOS Linux and services, even those that begin to explore the newly-released CentOS 8. We encourage interested users that want to be more tightly involved in driving the future of enterprise Linux, however, to transition to CentOS Stream as the new "pace-setting" distribution,” the Red Hat blog states. CentOS Stream is part of Red Hat’s broader focus on engaging with communities and developers in a way that better aligns with the modern IT world.

A user on Hacker News commented, “I like it, at least in theory. I develop some industrial software that runs on RHEL so being able to run somewhat similar distribution on my machine would be convenient. I tried running CentOS but it was too frustrating and limiting to deal with all the outdated packages on a dev machine. I suppose it will also be good for devs who just like the RHEL environment but don't need a super stable, outdated packages.”

Another user commented, “I wonder what future Fedora will have if this new CentOS Stream will be stable enough for developer daily driver. 6 month release cycle of Fedora always felt awkwardly in-between, not having the stability of lts nor the continuity of rolling. I guess lot depends on details on how the packages flow to CentOS Stream, do they come from released Fedora versions or rawhide etc.”

To know more about CentOS Stream in detail, read Red Hat’s official blog post.

After RHEL 8 release, users awaiting the release of CentOS 8
After Red Hat, Homebrew removes MongoDB from core formulas due to its Server Side license adoption
Red Hat announces the general availability of Red Hat OpenShift Service Mesh
Introducing ESPRESSO, an open-source, PyTorch based, end-to-end neural automatic speech recognition (ASR) toolkit for distributed training across GPUs
.NET Core 3.0 is now available with C# 8, F# 4.7, ASP.NET Core 3.0 and general availability of EF Core 3.0 and EF 6.3


Introducing Weld, a runtime written in Rust and LLVM for cross-library optimizations

Bhagyashree R
24 Sep 2019
5 min read
Weld is an open-source Rust project for improving the performance of data-intensive applications. It is an interface and runtime that can be integrated into existing frameworks, including Spark, TensorFlow, Pandas, and NumPy, without changing their user-facing APIs.

The motivation behind Weld

Data analytics applications today often require developers to combine functions from different libraries and frameworks to accomplish a particular task. For instance, a typical Python application might select some data using Spark SQL, transform it using NumPy and Pandas, and train a model with TensorFlow. This improves developers’ productivity, as they are taking advantage of functions from high-quality libraries. However, these functions are usually optimized in isolation, which is not enough to achieve the best application performance. Weld aims to solve this problem by providing an interface and runtime that can optimize across data-intensive libraries and frameworks while preserving their user-facing APIs.

In an interview with Federico Carrone, a Tech Lead at LambdaClass, Weld’s main contributor, Shoumik Palkar, shared, “The motivation behind Weld is to provide bare-metal performance for applications that rely on existing high-level APIs such as NumPy and Pandas. The main problem it solves is enabling cross-function and cross-library optimizations that other libraries today don’t provide.”

How Weld works

Weld serves as a common runtime that allows libraries from different domains, such as SQL and machine learning, to represent their computations in a common functional intermediate representation (IR). This IR is then optimized by a compiler optimizer and JIT’d to efficient machine code for diverse parallel hardware. Weld performs a wide range of optimizations on the IR, including loop fusion, loop tiling, and vectorization. “Weld’s IR is natively parallel, so programs expressed in it can always be trivially parallelized,” said Palkar.

When Weld was first introduced, it was mainly used for cross-library optimizations. Over time, however, people have started to use it for other applications as well: building JITs or new physical execution engines for databases and analytics frameworks, speeding up individual libraries, targeting new kinds of parallel hardware using the IR, and more. To evaluate Weld’s performance, the team integrated it with popular data analytics frameworks including Spark, NumPy, and TensorFlow. This prototype showed up to 30x improvements over the native framework implementations, while cross-library optimizations between Pandas and NumPy improved performance by up to two orders of magnitude.

Figure: benchmark results (source: Weld)

Why Rust and LLVM were chosen for its implementation

The first iteration of Weld was implemented in Scala because of its algebraic data types, powerful pattern matching, and large ecosystem. However, it did have some shortcomings. Palkar shared in the interview, “We moved away from Scala because it was too difficult to embed a JVM-based language into other runtimes and languages.” Scala had a managed runtime, a clunky build system, and its JIT compilations were quite slow for larger programs. Because of these shortcomings, the team wanted to redesign the JIT compiler, core API, and runtime from the ground up. They were searching for a language that was fast and safe, didn’t have a managed runtime, and provided a rich standard library, functional paradigms, a good package manager, and a strong community. So they zeroed in on Rust, which happens to meet all of these requirements.

Rust provides a very minimal, no-setup-required runtime. It can be easily embedded into other languages such as Java and Python. To make development easier, it has high-quality packages, known as crates, and functional paradigms such as pattern matching. Lastly, it is backed by a great community.

Read also: “Rust is the future of systems programming, C is the new Assembly”: Intel principal engineer, Josh Triplett

Explaining why they chose LLVM, Palkar said in the interview, “We chose LLVM because its an open-source compiler framework that has wide use and support; we generate LLVM directly instead of C/C++ so we don’t need to rely on the existence of a C compiler, and because it improves compilation times (we don’t need to parse C/C++ code).”

In a discussion on Hacker News, many users listed other Weld-like projects that developers may find useful. One user commented, “Also worth checking out OmniSci (formerly MapD), which features an LLVM query compiler to gain large speedups executing SQL on both CPU and GPU.” Users also talked about Numba, an open-source JIT compiler that translates Python functions to optimized machine code at runtime with the help of the LLVM compiler library. “Very bizarre there is no discussion of numba here, which has been around and used widely for many years, achieves faster speedups than this, and also emits an LLVM IR that is likely a much better starting point for developing a “universal” scientific computing IR than doing yet another thing that further complicates it with fairly needless involvement of Rust,” a user added.

To know more about Weld, check out the full interview on Medium. Also, watch this RustConf 2019 talk by Shoumik Palkar: https://www.youtube.com/watch?v=AZsgdCEQjFo&t

Other news in Programming

Darklang available in private beta
GNU community announces ‘Parallel GCC’ for parallelism in real-world compilers
TextMate 2.0, the text editor for macOS releases

Bitbucket to no longer support Mercurial, users must migrate to Git by May 2020

Fatema Patrawala
21 Aug 2019
6 min read
Yesterday marked the end of an era for Mercurial users, as Bitbucket announced it will no longer support Mercurial repositories after May 2020. Bitbucket, owned by Atlassian, is a web-based version control repository hosting service for source code and development projects. It has supported Mercurial since its launch in 2008 and Git since October 2011. Now, after more than a decade of sharing its journey with Mercurial, the Bitbucket team has decided to remove Mercurial support from Bitbucket Cloud and its API.

The official announcement reads, "Mercurial features and repositories will be officially removed from Bitbucket and its API on June 1, 2020."

The Bitbucket team also communicated the timeline for sunsetting the Mercurial functionality. After February 1, 2020, users will no longer be able to create new Mercurial repositories. After June 1, 2020, users will not be able to use Mercurial features in Bitbucket or via its API, and all Mercurial repositories will be removed. All current Mercurial functionality in Bitbucket will remain available through May 31, 2020.

The team said the decision was not an easy one and that Mercurial held a special place in their hearts. But according to a Stack Overflow Developer Survey, almost 90% of developers use Git, while Mercurial is the least popular version control system, with only about 3% developer adoption. Mercurial usage on Bitbucket has also seen a steady decline, and the percentage of new Bitbucket users choosing Mercurial has fallen to less than 1%. Hence the decision to remove the Mercurial repos.

How can users migrate and export their Mercurial repos

The Bitbucket team recommends that users migrate their existing Mercurial repos to Git. They have also extended support for migration and kept the available options open for discussion in a dedicated Community thread, where users can discuss conversion tools, migration, tips, and troubleshooting. If users prefer to continue using Mercurial, there are a number of free and paid Mercurial hosting services available to them. The Bitbucket team has also created a Git tutorial that covers everything from the basics of creating pull requests to rebasing and Git hooks.

Community shows anger and sadness over the decision to discontinue Mercurial support

There is outrage among Mercurial users, who are extremely unhappy and saddened by this decision by Bitbucket. They have expressed anger not just on one platform but on multiple forums and community discussions. Users feel that Bitbucket's decision to stop offering Mercurial support is bad, but the decision to also delete the repos is evil.

On Hacker News, users speculated that this decision was influenced by market potential rather than by technically superior architecture and ease of use. They feel GitHub has successfully marketed Git, and that is how the two have become synonymous in the developer community. One of them comments,

"It's very sad to see bitbucket dropping mercurial support. Now only Facebook and volunteers are keeping mercurial alive. Sometimes technically better architecture and user interface lose to a non user friendly hard solutions due to inertia of mass adoption. So a lesson in Software development is similar to betamax and VHS, so marketing is still a winner over technically superior architecture and ease of use. GitHub successfully marketed git, so git and GitHub are synonymous for most developers.
Now majority of open source projects are reliant on a single proprietary solution Github by Microsoft, for managing code and project. Can understand the difficulty of bitbucket, when Python language itself moved out of mercurial due to the same inertia. Hopefully gitlab can come out with mercurial support to migrate projects using it from bitbucket."

Another user comments that Mercurial support was the only reason for him to use Bitbucket when GitHub is miles ahead of it, and that now that it stops supporting Mercurial too, Bitbucket will end soon. The comment reads,

"Mercurial support was the one reason for me to still use Bitbucket: there is no other Bitbucket feature I can think of that Github doesn't already have, while Github's community is miles ahead since everyone and their dog is already there. More importantly, Bitbucket leaves the migration to you (if I read the article correctly). Once I download my repo and convert it to git, why would I stay with the company that just made me go through an annoying (and often painful) process, when I can migrate to Github with the exact same command? And why isn't there a "migrate this repo to git" button right there? I want to believe that Bitbucket has smart people and that this choice is a good one. But I'm with you there - to me, this definitely looks like Bitbucket will die."

On Reddit, programmers see this as a big change from Bitbucket, as it is the major Mercurial hosting provider, and they feel Bitbucket announced this on pretty short notice, leaving too little time for migration.

Apart from the developer community forums, users have expressed displeasure on the Atlassian community blog as well. A team of scientists commented,

"Let's get this straight : Bitbucket (offering hosting support for Mercurial projects) was acquired by Atlassian in September 2010. Nine years later Atlassian decides to drop Mercurial support and delete all Mercurial repositories. Atlassian, I hate you :-) The image you have for me is that of a harmful predator. We are a team of scientists working in a university. We don't have computer scientists, we managed to use a version control simple as Mercurial, and it was a hard work to make all scientists in our team to use a version control system (even as simple as Mercurial). We don't have the time nor the energy to switch to another version control system. But we will, forced and obliged. I really don't want to check out Github or something else to migrate our projects there, but we will, forced and obliged."

Atlassian Bitbucket, GitHub, and GitLab take collective steps against the Git ransomware attack
Attackers wiped many GitHub, GitLab, and Bitbucket repos with 'compromised' valid credentials leaving behind a ransom note
BitBucket goes down for over an hour
GopherCon 2019: Go 2 update, open-source Go library for GUI, support for WebAssembly, TinyGo for microcontrollers and more

Fatema Patrawala
30 Jul 2019
9 min read
Last week, Go programmers had a gala time learning, networking, and programming at the Marriott Marquis San Diego Marina as the much-awaited GopherCon 2019 ran from 24th to 27th July. GopherCon this year hit the road in San Diego with some exceptional sessions and many exciting announcements for more than 1,800 attendees from around the world. One of the attendees, Andrea Santillana Fernández, says the Go community is growing and doing quite well. She wrote in her blog post on the Sourcegraph website that there are 1 million Go programmers around the world, and month on month its membership keeps increasing. There is indeed significant growth in the Go community, so what did this year's GopherCon 2019 have in store for programmers?

On the road to Go 2

The major milestones on the journey to Go 2 were presented by Russ Cox on Wednesday last week. He explained the main areas of focus for Go 2, which are as below:

Error handling

Russ notes that writing a program correctly without errors is hard, but writing a program that correctly accounts for errors and external dependencies is much more difficult. He walked through a few error-handling pitfalls that led to the introduction of helpers such as an optional Unwrap interface, errors.Is, and errors.As in Go 1.13.

Generics

Russ spoke about generics and said that the team started exploring a new design last year. They are working with programming language theory experts on the problem to help refine the proposal for generic code in Go. In a separate session, Ian Lance Taylor introduced generic code in Go. He briefly explained the need for, implementation of, and benefits from generics for the Go language. Next, Taylor reviewed the Go contracts design draft, which adds optional type parameters to types and functions. Taylor defined generics as "Generic programming which enables the representation of functions and data structures in a generic form, with types factored out."

Generic code is written using types that are specified later. An unspecified type is called a type parameter, and a type parameter can only be used in the ways its contract permits. Generic code provides a strong basis for sharing code and building programs. It can be compiled using an interface-based approach, which keeps compile times down because the package is compiled only once; if generic code is compiled multiple times, it can carry a compile-time cost. Ian showed a few sample codes written with generics in Go.
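The exact syntax was still in flux at the time of the talk, and the contract-based draft discussed at GopherCon is not what ultimately shipped. As a rough, hedged sketch of the idea only (using type parameters with an interface-style constraint rather than the 2019 draft's contract syntax), generic code lets you write a function once for many element types:

package main

import "fmt"

// Number constrains T to types whose values support the + operator.
// This interface-style constraint stands in for the "contract" concept
// described in the draft design.
type Number interface {
	~int | ~int64 | ~float64
}

// Sum is written once, with the element type factored out as a type parameter.
func Sum[T Number](values []T) T {
	var total T
	for _, v := range values {
		total += v
	}
	return total
}

func main() {
	fmt.Println(Sum([]int{1, 2, 3}))           // 6
	fmt.Println(Sum([]float64{1.5, 2.5, 3.0})) // 7
}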
Dependency management

In Go 2, the team wants to focus on dependency management and on referring to dependencies explicitly, similar to Java. Russ explained this by giving some history: in 2011 they introduced GOPATH to separate the distribution from the actual dependencies, so that users could run multiple different distributions and keep the concerns of the distribution separate from external libraries. Then in 2015, they introduced the go vendor spec to formalize the vendor directory and simplify dependency management implementations, but in practice it did not work well. In 2016, they formed the dependency working group, which started work on dep, a tool to reshape all the existing tools into one. The problem with dep and the vendor directory was that multiple distinct, incompatible versions of a dependency were represented by one import path; this is now called the "Import Compatibility Rule". The team took what worked well and learned from VGo. VGo provides package uniqueness without breaking builds: it dictates different import paths for incompatible package versions. The team grouped similar packages and gave these groups a name: modules. The VGo system is now Go modules, and it integrates directly with the Go command. The challenge going forward is mostly around updating everything to use modules and the new conventions.

Tooling

Finally, as a result of all these changes, they distilled and refined the Go toolchain. One example of this is gopls, or "Go Please". Gopls aims to create a smoother, standard interface to integrate with all editors, IDEs, continuous integration, and more.

Simple, portable and efficient graphical interfaces in Go

Elias Naur presented Gio, a new open-source Go library for writing immediate mode GUI programs that run on all the major platforms: Android, iOS/tvOS, macOS, Linux, and Windows. The talk covered Gio's unusual design and how it achieves simplicity, portability, and performance. Elias said, "I wanted to be able to write a GUI program in GO that I could implement only once and have it work on every platform. This, to me, is the most interesting feature of Gio."

https://twitter.com/rakyll/status/1154450455214190593

Elias also presented Scatter, a Gio program for end-to-end encrypted messaging over email. Other features of Gio include:

Immediate mode design
UI state owned by the program
Only depends on the lowest-level platform libraries
Minimal dependency tree to keep things as low level as possible
GPU-accelerated vector and text rendering
Super efficient: no garbage generated in drawing or layout code
Cross platform (macOS, Linux, Windows, Android, iOS, tvOS, WebAssembly)
Core is 100% Go, while OS-specific native interfaces are optional

Gopls, a new tool that serves as a backend for your Go editor

Rebecca Stambler mentioned in her presentation that the Go community has built many amazing tools to improve the Go developer experience. However, when a maintainer disappears or a new Go release wreaks havoc, the Go development experience becomes frustrating and complicated. To solve this issue, Rebecca revealed the details behind a new tool: gopls (pronounced "go please"). The tool is currently in development by the Go team and community, and it will ultimately serve as the backend for your Go editor. The functionality expected from gopls includes:

Show me errors, like unused variables or typos
Autocomplete would be nice
Function signature help, because we often forget
While we're at it, hover-accessible "tooltip" documentation in general
Help me jump to the definition of a variable I need to see
An outline of the package structure

Get started with WebAssembly in Go

WebAssembly in Go is here and ready to try! Although the landscape is evolving quickly, the opportunity is huge. The ability to deliver truly portable system binaries could potentially replace JavaScript in the browser, and WebAssembly has the potential to finally realize the goal of being platform agnostic without having to rely on a JVM. In his session, Johan Brandhorst introduced the technology, showed how to get started with WebAssembly and Go, and discussed what is possible today and what will be possible tomorrow. As of Go 1.13, there is experimental support for WebAssembly using the JavaScript interface, but as it is only experimental, using it in production is not recommended. Support for the WASI interface is not currently available but has been planned and may arrive as early as Go 1.14.
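For a flavor of what the experimental JavaScript interface looks like, here is a minimal, hedged sketch using the standard syscall/js package. It assumes a host page that loads the wasm_exec.js glue script shipped with the Go distribution and that contains an element with the id "output" (an illustrative name), and it would be compiled with GOOS=js GOARCH=wasm go build.

// +build js,wasm

package main

import "syscall/js"

func main() {
	// Expose a Go function to JavaScript under the global name "add".
	add := js.FuncOf(func(this js.Value, args []js.Value) interface{} {
		return args[0].Int() + args[1].Int()
	})
	js.Global().Set("add", add)

	// Write a message into the page's DOM (assumes an element with id "output").
	doc := js.Global().Get("document")
	doc.Call("getElementById", "output").Set("innerText", "Hello from Go and WebAssembly")

	// Block forever so the exported function remains callable from JavaScript.
	select {}
}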
Better x86 assembly generation from Go

Michael McLoughlin, in his presentation, made the case for code generation techniques for writing x86 assembly from Go. He introduced assembly, assembly in Go, the use cases where you would want to drop into assembly, and techniques for realizing speedups with it. He pointed out that pure Go will be enough for 97% of programs, but there are those 3% of cases where it is warranted; the examples he brought up were crypto, syscalls, and scientific computing. Michael then introduced a package called avo, which makes high-performance Go assembly easier to write. He said that writing your assembly in Go allows you to realize the benefits of a high-level language, such as code readability, the ability to create loops, variables, and functions, and parameterized code generation, all while still realizing the benefits of writing assembly. Michael concluded the talk with his ideas for the future of avo:

Use avo in projects, specifically in large crypto implementations
More architecture support
Possibly make avo an assembler itself (these kinds of techniques are used in JIT compilers)
avo-based libraries (avo/std/math/big, avo/std/crypto)

The audience appreciated this talk on Twitter.

https://twitter.com/darethas/status/1155336268076576768

The presentation slides are available on the blog.

Miniature version of Golang: TinyGo for microcontrollers

Ron Evans, creator of GoCV, GoBot and "technologist for hire", introduced TinyGo, which can run directly on microcontrollers like Arduino and more. TinyGo uses the LLVM compiler toolchain to create native code that can run directly even on the smallest of computing devices. Ron demonstrated how Go code can be run on embedded systems using TinyGo, a compiler intended for use on microcontrollers, in WebAssembly (WASM), and for command-line tools.

Evans began his presentation by countering the idea that Go, while fast, produces executables too large to run on the smallest computers. While that may be true of the standard Go compiler, TinyGo produces much smaller outputs. For example:

"Hello World" program compiled using Go 1.12 => 1.1 MB
Same program compiled using TinyGo 0.7.0 => 12 KB

TinyGo currently lacks support for the full Go language and Go standard library. For example, TinyGo does not support the net package, although contributors have created implementations of interfaces that work with the WiFi chip built into Arduino boards. Support for goroutines is also limited, although simple programs usually work. Evans demonstrated that, despite some limitations, the Go language can still be run on embedded systems thanks to TinyGo. Salvador Evans, son of Ron Evans, assisted him with this demonstration. At age 11, he has become the youngest GopherCon speaker so far.

https://twitter.com/erikstmartin/status/1155223328329625600
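As a taste of what TinyGo code looks like, here is a minimal blink sketch using TinyGo's machine package. It is a hedged illustration that assumes a supported board whose on-board LED is exposed as machine.LED (for example an Arduino Uno), flashed with a command along the lines of tinygo flash -target arduino.

package main

import (
	"machine"
	"time"
)

func main() {
	// machine.LED refers to the board's on-board LED on supported targets.
	led := machine.LED
	led.Configure(machine.PinConfig{Mode: machine.PinOutput})

	// Toggle the LED forever; the whole program compiles to a few kilobytes.
	for {
		led.High()
		time.Sleep(500 * time.Millisecond)
		led.Low()
		time.Sleep(500 * time.Millisecond)
	}
}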
There were talks by other speakers on topics such as improvements in VS Code for Go, the first open-source Go interpreter with complete support for the language spec, the Athens Project (a module proxy server written in Go), and how mobile development works in Go.

https://twitter.com/ramyanexus/status/1155238591120805888
https://twitter.com/containous/status/1155191121938649091
https://twitter.com/hajimehoshi/status/1155184796386988035

Apart from these, a whole lot of other talks happened at GopherCon 2019. Attendees posted live blogs on various talks, and so far more than 25 of them have been published on the Sourcegraph website.

The Go team shares new proposals planned to be implemented in Go 1.13 and 1.14
Go introduces generic codes and a new contract draft design at GopherCon 2019
Is Golang truly community driven and does it really matter?

Microsoft Build 2019: Microsoft showcases new updates to MS 365 platform with focus on AI and developer productivity

Sugandha Lahoti
07 May 2019
10 min read
At the ongoing Microsoft Build 2019 conference, Microsoft has announced a ton of new features and tool releases with a focus on innovation using AI and mixed reality with the intelligent cloud and the intelligent edge.

In his opening keynote, Microsoft CEO Satya Nadella outlined the company's vision and the developer opportunity across Microsoft Azure, Microsoft Dynamics 365 and IoT Platform, Microsoft 365, and Microsoft Gaming. "As computing becomes embedded in every aspect of our lives, the choices developers make will define the world we live in," said Satya Nadella, CEO, Microsoft. "Microsoft is committed to providing developers with trusted tools and platforms spanning every layer of the modern technology stack to build magical experiences that create new opportunity for everyone."

https://youtu.be/rIJRFHDr1QE

Increasing developer productivity in the Microsoft 365 platform

Microsoft Graph data connect

The Microsoft Graph is now powered with data connectivity, a service that combines analytics data from the Microsoft Graph with customers' business data. Microsoft Graph data connect will provide Office 365 data and Microsoft Azure resources to users via a toolset. The migration pipelines are deployed and managed through Azure Data Factory. Microsoft Graph data connect can be used to create new apps shared within enterprises or externally in the Microsoft Azure Marketplace. It is generally available as a feature in Workplace Analytics and also as a standalone SKU for ISVs. More information here.

Microsoft Search

Microsoft Search works as a unified search experience across all Microsoft apps: Office, Outlook, SharePoint, OneDrive, Bing, and Windows. It applies AI technology from Bing and deep personalized insights surfaced by the Microsoft Graph to personalize searches. Other features included in Microsoft Search are:

Search box displacement
Zero query typing and a key-phrase suggestion feature
A query history feature, and personal search query history
Administrator access to the history of popular searches for their organizations, but not to search history for individual users
Files/people/site/bookmark suggestions

Microsoft Search will begin publicly rolling out to all Microsoft 365 and Office 365 commercial subscriptions worldwide at the end of May. Read more on MS Search here.

Fluid Framework

As the name suggests, Microsoft's newly launched Fluid Framework allows seamless editing and collaboration between different applications. Essentially, it is a web-based platform and componentized document model that allows users to, for example, edit a document in an application like Word and then share a table from that document in Microsoft Teams (or even a third-party application) with real-time syncing. Microsoft says Fluid can translate text, fetch content, suggest edits, perform compliance checks, and more. The company will launch the software development kit and the first experiences powered by the Fluid Framework later this year on Microsoft Word, Teams, and Outlook. Read more about the Fluid Framework here.

Microsoft Edge new features

Microsoft Build 2019 paved the way for a bundle of new features for Microsoft's flagship web browser, Microsoft Edge. New features include:

Internet Explorer mode: This mode integrates Internet Explorer directly into the new Microsoft Edge via a new tab, allowing businesses to run legacy Internet Explorer-based apps in a modern browser.
Privacy Tools: Additional privacy controls that allow customers to choose from three levels of privacy in Microsoft Edge: Unrestricted, Balanced, and Strict. These options limit how third parties can track users across the web. "Unrestricted" allows all third-party trackers to work in the browser, "Balanced" prevents third-party trackers from sites the user has not visited before, and "Strict" blocks all third-party trackers.
Collections: Collections allows users to collect, organize, share, and export content more efficiently and with Office integration.

Microsoft is also migrating Edge as a whole over to Chromium, which will make Edge easier for third parties to develop for. For more details, visit Microsoft's developer blog.

New toolkit enhancements in the Microsoft 365 platform

Windows Terminal

Windows Terminal is Microsoft's new application for Windows command-line users. Top features include:

A user interface with emoji-rich fonts and graphics-processing-unit-accelerated text rendering
Multiple tab support and theming and customization features
A powerful command-line user experience for users of PowerShell, Cmd, Windows Subsystem for Linux (WSL), and all forms of command-line applications

Windows Terminal will arrive in mid-June and will be delivered via the Microsoft Store in Windows 10. Read more here.

React Native for Windows

Microsoft announced a new open-source project for React Native developers at Microsoft Build 2019. Developers who prefer to use the React/web ecosystem to write user-experience components can now leverage those skills and components on Windows by using the "React Native for Windows" implementation. React Native for Windows is under the MIT License and will allow developers to target any Windows 10 device, including PCs, tablets, Xbox, mixed reality devices, and more. The project is being developed on GitHub and is available for developers to test; more mature releases will follow soon.

Windows Subsystem for Linux 2

Microsoft rolled out a new architecture for the Windows Subsystem for Linux, WSL 2, at Build 2019. Microsoft will also be shipping a fully open-source Linux kernel with Windows, specially tuned for WSL 2. New features include massive file system performance increases (twice as much speed for file-system-heavy operations, such as Node Package Manager installs). WSL 2 also supports running Linux Docker containers. The next generation of WSL arrives for Insiders in mid-June. More information here.

New releases in multiple developer tools

.NET 5 arrives in 2020

.NET 5 is the next major version of the .NET platform, which will be available in 2020. .NET 5 will have all .NET Core features as well as more additions:

One Base Class Library containing APIs for building any type of application
More choice on runtime experiences
Java interoperability available on all platforms
Objective-C and Swift interoperability supported on multiple operating systems

.NET 5 will provide both Just-in-Time (JIT) and Ahead-of-Time (AOT) compilation models to support multiple compute and device scenarios. It also will offer one unified toolchain supported by new SDK project types as well as a flexible deployment model (side-by-side and self-contained EXEs). Detailed information here.

ML.NET 1.0

ML.NET is Microsoft's open-source, cross-platform framework that runs on Windows, Linux, and macOS and makes machine learning accessible to .NET developers. Its new version, ML.NET 1.0, was released at the Microsoft Build 2019 conference yesterday.
Some new features in this release are:

Automated Machine Learning Preview: Transforms input data by selecting the best-performing ML algorithm with the right settings. AutoML support in ML.NET is in preview and currently supports regression and classification ML tasks.
ML.NET Model Builder Preview: Model Builder is a simple UI tool for developers which uses AutoML to build ML models. It also generates model training and model consumption code for the best-performing model.
ML.NET CLI Preview: The ML.NET CLI is a dotnet tool which generates ML.NET models using AutoML and ML.NET. It quickly iterates through a dataset for a specific ML task and produces the best model.

Visual Studio IntelliCode, Microsoft's tool for AI-assisted coding

Visual Studio IntelliCode, Microsoft's AI-assisted coding tool, is now generally available. It is essentially an enhanced version of IntelliSense, Microsoft's extremely popular code completion tool. IntelliCode is trained using the code of thousands of open-source projects from GitHub that have at least 100 stars. It is available for C# and XAML in Visual Studio, and for Java, JavaScript, TypeScript, and Python in Visual Studio Code. IntelliCode is also included by default in Visual Studio 2019, starting in version 16.1 Preview 2. Additional capabilities, such as custom models, remain in public preview.

Visual Studio 2019 version 16.1 Preview 2

The Visual Studio 2019 version 16.1 Preview 2 release includes IntelliCode and the GitHub extensions by default. It also brings the Time Travel Debugging feature introduced with version 16.0 out of preview, and includes multiple performance and productivity improvements for .NET and C++ developers.

Gaming and Mixed Reality

Minecraft AR game for mobile devices

At the end of Microsoft's Build 2019 keynote yesterday, Microsoft teased a new Minecraft game in augmented reality, running on a phone. The teaser notes that more information will be coming on May 17th, the 10-year anniversary of Minecraft.

https://www.youtube.com/watch?v=UiX0dVXiGa8

HoloLens 2 Development Edition and Unreal Engine support

The HoloLens 2 Development Edition includes a HoloLens 2 device, $500 in Azure credits, and three-month free trials of Unity Pro and the Unity PiXYZ Plugin for CAD data, starting at $3,500 or as low as $99 per month. The HoloLens 2 Development Edition will be available for preorder soon and will ship later this year. Unreal Engine support for streaming and native platform integration will be available for HoloLens 2 by the end of May.

Intelligent Edge and IoT

Azure IoT Central new features

Microsoft Build 2019 also featured new additions to Azure IoT Central, an IoT software-as-a-service solution:

Better rules processing and custom rules with services like Azure Functions or Azure Stream Analytics
Multiple dashboards and data visualization options for different types of users
Inbound and outbound data connectors, so that operators can integrate with other systems
The ability to add custom branding and operator resources to an IoT Central application with new white-labeling options

The new Azure IoT Central features are available for customer trials.

IoT Plug and Play

IoT Plug and Play is a new, open modeling language for connecting IoT devices to the cloud seamlessly without developers having to write a single line of embedded code. IoT Plug and Play also enables device manufacturers to build smarter IoT devices that just work with the cloud. Cloud developers will be able to find IoT Plug and Play-enabled devices in Microsoft's Azure IoT Device Catalog.
The first device partners include Compal, Kyocera, and STMicroelectronics, among others.

Azure Maps Mobility Service

Azure Maps Mobility Service is a new API that provides real-time public transit information, including nearby stops, routes, and trip intelligence. This API will also provide transit services to help with city planning, logistics, and transportation. Azure Maps Mobility Service will be in public preview in June. Read more about Azure Maps Mobility Service here.

KEDA: Kubernetes-based event-driven autoscaling

Microsoft and Red Hat collaborated to create KEDA, an open-source project that supports the deployment of serverless, event-driven containers on Kubernetes. It can be used in any Kubernetes environment, in any public/private cloud or on-premises, such as Azure Kubernetes Service (AKS) and Red Hat OpenShift. KEDA has support for built-in triggers to respond to events happening in other services or components, which allows a container to consume events directly from the source instead of routing them through HTTP. KEDA also presents a new hosting option for Azure Functions, which can be deployed as a container in Kubernetes clusters.

Securing elections and political campaigns

ElectionGuard SDK and Microsoft 365 for Campaigns

ElectionGuard is a free, open-source software development kit (SDK) released as an extension of Microsoft's Defending Democracy Program to enable end-to-end verifiability and improved risk-limiting audit capabilities for elections in voting systems. Microsoft 365 for Campaigns provides the security capabilities of Microsoft 365 Business to political parties and individual candidates. More details here.

Microsoft Build is in its 6th year and will continue until 8th May. The conference hosts over 6,000 attendees, with nearly 500 student-age developers and over 2,600 customers and partners in attendance. Watch it live here!

Microsoft introduces Remote Development extensions to make remote development easier on VS Code
Docker announces a collaboration with Microsoft's .NET at DockerCon 2019
How Visual Studio Code can help bridge the gap between full-stack development and DevOps [Sponsored by Microsoft]

Microsoft announces the general availability of Live Share and brings it to Visual Studio 2019

Bhagyashree R
03 Apr 2019
2 min read
Microsoft yesterday announced that Live Share is now generally available and comes included in Visual Studio 2019. This release brings a lot of updates based on the feedback the team has received since the public preview of Live Share started, including a read-only mode, support for C++ and Python, and more.

What is Live Share?

Microsoft first introduced Live Share at Connect 2017 and launched its public preview in May 2018. It is a tool that enables you to collaborate with your team on the same codebase without needing to synchronize code or to configure the same development tools, settings, or environment. You can edit and debug your code with others in real time, regardless of what programming languages you are using or the type of app you are building. It allows you to do a bunch of different things, like instantly sharing your project with a teammate, and sharing debugging sessions, terminal instances, localhost web apps, voice calls, and more.

With Live Share, you do not have to leave the comfort of your favorite tools. You can take advantage of collaboration while retaining your personal editor preferences. It also provides each developer with their own cursor, to enable seamless transitions when following one another.

What's new in Live Share?

This release includes features like a read-only mode, support for more languages such as C++ and Python, and the ability for guests to start debugging sessions. Now you can use Live Share while pair programming, conducting code reviews, giving lectures and presenting to students and colleagues, or even mob programming during hackathons.

This release also comes with support for a few third-party extensions to improve the overall experience when working with Live Share. The two extensions are OzCode and CodeStream. OzCode offers a suite of visualizations, like datatips, to see how items are passed through a LINQ query, and provides a heads-up display to show how a set of boolean expressions evaluates. With CodeStream, you can create discussions about your codebase, which serve as an integrated chat feature within a Live Share session.

To read about the updates in Live Share, check out the official announcement.

Microsoft brings PostgreSQL extension and SQL Notebooks functionality to Azure Data Studio
Microsoft open-sources Project Zipline, its data compression algorithm and hardware for the cloud
Microsoft announces Game stack with Xbox Live integration to Android and iOS
The OpenJDK Transition: Things to know and do

Rich Sharples
28 Feb 2019
5 min read
At this point, it should not be a surprise that a number of major changes were announced in the Java ecosystem, some of which have the potential to force a reassessment of Java roadmaps and even vendor selection for enterprise Java users. Some of the biggest changes are taking place in the upstream OpenJDK (Open Java Development Kit), which means that users will need to have a backup plan in place for the transition. Read on to better understand exactly what the changes Oracle made to Oracle JDK are, why you should care about them, and how to choose the best next steps for your organization.

For some quick background, OpenJDK began as a free and open source implementation of the Java Platform, Standard Edition, through a 2006 Sun Microsystems initiative. Sun announced at the 2006 JavaOne event that Java and the core Java platform would be open sourced. Over the next few years, major components of the Java Platform were released as free, open source software using the GPL. Other vendors, like Red Hat, then got involved with the project about a year later, through a broad contributor agreement which covers the participation of non-Sun engineers in the project.

This piece is by Rich Sharples, Senior Director of Product Management at Red Hat, on what Oracle ending free lifecycle support means, and what steps users can take now to prepare for the change.

So what is new in the world of Java and JDK? Why should you care?

Okay, enough about the history. What are we anticipating for the future of Java and OpenJDK? At the end of January 2019, Oracle officially ended free public updates to Oracle JDK for non-Oracle-customer commercial users. These users will no longer be able to get updates without an Oracle support contract. Additionally, Oracle has changed the Oracle JDK license (BCPL) so that commercial use of JDK 11 and beyond will require an Oracle subscription. Oracle does also offer a non-proprietary (GPLv2+CPE) distribution, but the support policy around it is unclear at this time.

This is a big deal because it means that users need to have strategies and plans in place to continue to get the support that Oracle had offered. You may be wondering whether it really is critical to run OpenJDK with support, or whether you can get away with running it unsupported. The truth is that while the open source license and nature of OpenJDK mean that you can technically run the software with no commercial support, it doesn't mean that you necessarily should. There are too many risks associated with running OpenJDK unsupported, including numerous cases of critical vulnerabilities and security flaws. While there is nothing inherently insecure about open source software, when it is deployed without support or even a maintenance plan, it can open up your organization to threats that you may not be prepared to handle and resolve.

Additionally, and probably the biggest reason why it is important to have commercial OpenJDK support: without a third party to help, you have to be entirely self-sufficient, which means ensuring that you have specific engineering effort permanently allocated to monitoring the upstream project and maintaining installations of OpenJDK. Not all organizations have the bandwidth or resources to do this consistently.

How can vendors help and what are other key technology considerations?
Software vendors, including Red Hat, offer OpenJDK distributions and support to make sure that those who have been reliant on Oracle JDK get a seamless transition and are covered against the risks, described above, of running OpenJDK without support. It also allows customers and partners to focus on their business software, rather than needing to spend valuable engineering effort on underlying Java runtimes.

Additionally, Java SE 11 has introduced some significant new features, as well as deprecated others, and is the first significant update to the platform that will require users to think seriously about the impact of the move. With this, it is important to separate upgrading from Java SE 8 to Java SE 11 from moving from one vendor's Java SE distribution to another vendor's. In fact, moving between Java SE distributions that are based on OpenJDK, without changing versions, should be a fairly simple task; it is not recommended to change both the vendor and the version in a single step. Luckily, the differences between Oracle JDK and OpenJDK are fairly minor and are, in fact, slowly aligning to become more similar.

Technology vendors can help in the transition that will inevitably come for OpenJDK, if it has not happened already. Having a proper regression test plan in place to ensure that applications run as they previously did, with a particular focus on performance and scalability, is key and is something that vendors can help set up. Auditing and understanding whether you need to make any code changes to applications is also a major undertaking that a third party can help guide you through, and it likely includes rulesets for the migrations. Finally, third-party vendors can help you deploy to the new OpenJDK solution and provide patches and security guidance as issues come up, to help keep the environment secure, updated, and safe.

While there are changes to Oracle JDK, once you find the alternative solution that is best suited to your organization, it should be fairly straightforward and cost-effective to make the transition, and third-party vendors have the expertise, knowledge, and experience to help guide users through the OpenJDK transition.

OpenJDK Project Valhalla is now in Phase III
State of OpenJDK: Past, Present and Future with Oracle
Mark Reinhold on the evolution of Java platform and OpenJDK

6 signs you need containers

Richard Gall
05 Feb 2019
9 min read
I'm not about to tell you containers are a hot new trend - clearly, they aren't. Today, they are an important part of the mainstream software development industry that probably won't be disappearing any time soon. But while containers certainly can't be described as a niche or marginal way of deploying applications, they aren't necessarily ubiquitous. There are still developers and development teams yet to fully appreciate the usefulness of containers. You might know them - you might even be one of them.

Joking aside, there are often many reasons why people aren't using containers. Sometimes these are good reasons: maybe you just don't need them. Often, however, you do need them, but the mere thought of changing your systems and workflow can feel like more trouble than it's worth. If everything seems to be (just about) working, why shake things up? Well, I'm here to tell you that more often than not it is worthwhile. But to know that you're not wasting your time and energy, there are a few important signs that can tell you if you should be using containers.

Download Containerize Your Apps with Docker and Kubernetes for free, courtesy of Microsoft.

Your codebase is too complex

There are few developers in the world who would tell you that their codebase couldn't do with a little pruning and simplification. But if your code has grown into a beast that everyone fears and no one really understands, containers could probably help you a lot.

Why do containers help simplify your codebase?

Let's think about how spaghetti code actually happens. Yes, it always happens by accident, but usually it's something that evolves out of years of solving intractable problems, with knock-on effects and consequences that then have to be solved later. By using containers you can begin to think differently about your code. Instead of everything being tied up together, like a complex concrete network of road junctions, containers allow you to isolate specific parts of it. When you can better isolate your code, you can also isolate different problems and domains. This is one of the reasons that containers are so closely aligned with microservices.

Software testing is nightmarish

The efficiency benefits of containers are well documented, but the way containers can help the software testing process is often underplayed - which probably says as much about a general inability to treat testing with the respect and time it deserves as anything else.

How do containers make testing easier?

There are a number of reasons containers make software testing easier. On the one hand, by using containers you're reducing the gap between the development environment and production, which means you shouldn't be faced with as many surprises once your code hits production as you sometimes might. Containers also make the testing process faster - you only need to test against a container image, you don't need a fully-fledged testing environment for every application you test. What this all boils down to is that testing becomes much quicker and easier. In theory, then, this means the testing process fits much more neatly within the development workflow. Code quality should never be seen as a bottleneck; with containers it becomes much easier to embed the principle in your workflow.

Read next: How to build 12 factor microservices on Docker

Your software isn't secure - you've had breaches that could have been prevented

Spaghetti code and a lack of effective testing can lead to major security risks.
If no one really knows what's going on inside your applications and your code, it's inevitable that you'll have vulnerabilities. And, in turn, it's highly likely these vulnerabilities will be exploited.

How can containers make software security easier?

Because containers allow you to make changes to parts of your software infrastructure (rather than requiring wholesale changes), security patches become much easier to apply. Essentially, you can isolate the problem and tackle it. Without containers, it becomes harder to isolate specific pieces of your infrastructure, which means any changes could have a knock-on effect on other parts of your code that you can't predict.

That being said, it is worth mentioning that containers do still pose a significant set of security challenges. While simplicity in your codebase can make testing easier, you are replacing simplicity at that level with increased architectural complexity. To really feel the benefits of container security, you need a strong sense of how your container deployments work together and how they might interact.

Your software infrastructure is expensive (you feel the despair of vendor lock-in)

Running multiple virtual machines can quickly get expensive. In terms of both storage and memory, if you want to scale up, you're going to be running through resources at a rapid rate. While you might end up spending big on more traditional compute resources, the tools around container management and automation are getting cheaper.

One of the costs of many organizations' software infrastructure is lock-in. This isn't just about price; it's about the restrictions that come with sticking with a certain software vendor - you're spending money on software systems that are almost literally restricting your capacity for growth and change.

How do containers solve the software infrastructure problem and reduce vendor lock-in?

Traditional software infrastructure - whether that's on-premise servers or virtual ones - is a fixed cost: you invest in the resources you need, and then you either use them or you don't. With containers running on, say, the cloud, it becomes a lot easier to manage your software spend alongside strategic decisions about scalability. Fundamentally, it means you can avoid vendor lock-in. Yes, you might still be paying a lot of money for AWS or Azure, but because containers are much more portable, moving your applications between providers involves much less hassle and risk.

Read next: CNCF releases 9 security best practices for Kubernetes, to protect a customer's infrastructure

DevOps is a war, not a way of working

Like containers, DevOps could hardly be considered a hot new trend any more. But this doesn't mean it's now part of the norm. There are plenty of organizations that simply don't get DevOps, or, at the very least, seem to be stumbling their way through sprint meetings with little real alignment between development and operations. There could be multiple causes for this conflict (maybe people just don't get on), but DevOps often fails where the code that's being written and deployed is too complicated for anyone to properly take accountability for it. This takes us back to the issue of the complex codebase. Think of it this way: if code is a gigantic behemoth that can't easily be broken up, the unintended effects and consequences of every new release and update can cause some big problems, both personally and technically.

How do containers solve DevOps challenges?
Containers can help solve the problems that DevOps aims to tackle by breaking software up into different pieces. This means that developers and operations teams have much more clarity on what code is being written and why, as well as what it should do. Indeed, containers arguably facilitate DevOps practices more effectively than DevOps proponents managed in pre-container years.

Adding new product features is a pain

The issue of adding features or improving applications is a complaint that reaches far beyond the development team. Product management, marketing - these departments will all bemoan the inability to make necessary changes or add new features that they argue are business critical. Often, developers will take the heat. But traditional monolithic applications make life difficult for developers - you simply can't make changes or updates easily. It's like wanting to replace a radiator and having to redo your house's plumbing. This actually returns us to the earlier point about DevOps - containers make DevOps easier because they enable faster delivery cycles. You can make changes to an application at the level of a container or set of containers. Indeed, you might even simply kill one container and replace it with a new one. In turn, this means you can change and build things much more quickly.

How do containers make it easier to update or build new features?

To continue with the radiator analogy: containers would allow you to replace or change an individual radiator without having to gut your home. Essentially, if you want to add a new feature or change an element, you don't need to go into your application and make wholesale changes - which may have unintended consequences - instead, you can simply make a change by running the resources you need inside a new container (or set of containers).

Watch for the warning signs

As with any technology decision, it's well worth paying careful attention to your own needs and demands. So, before fully committing to containers, or containerizing an application, keep a close eye on the signs that they could be a valuable option. Containers may well force you to come face to face with the reality of technical debt - and if they do, so be it. There's no time like the present, after all. Of course, all of the problems listed above are ultimately symptoms of broader issues or challenges you face as a development team or wider organization. Containers shouldn't be seen as a sure-fire corrective, but they can be an important element in changing your culture and processes.

Learn how to containerize your apps with a new eBook, free courtesy of Microsoft. Download it here.