
How-To Tutorials - Languages

135 Articles

Mozilla proposes WebAssembly Interface Types to enable language interoperability

Bhagyashree R
23 Aug 2019
4 min read
WebAssembly will soon be able to use the same high-level types in Python, Rust, and Node, says Lin Clark, a Principal Research Engineer at Mozilla, with the help of a new proposal: WebAssembly Interface Types. This proposal aims to add a new set of interface types to WebAssembly that describe high-level values like strings, sequences, records, and variants.

https://twitter.com/linclark/status/1164206550010884096

Why WebAssembly Interface Types matter

Mozilla and many other companies have been putting effort into bringing WebAssembly outside the browser with projects like WASI and Fastly's Lucet. Developers also want to run WebAssembly from different source languages like Python, Ruby, and Rust. Clark believes there are three reasons why developers want to do that. First, it allows them to easily use native modules and deliver better speed to their application users. Second, they can use WebAssembly to sandbox native code for better security. Third, they can save time and maintenance costs by sharing native code across platforms.

Currently, however, this "cross-language integration" is very complicated. The problem is that WebAssembly only supports numbers, so things get difficult in cases like passing a string between JavaScript and WebAssembly: you first have to convert the string into an array of numbers, and then convert those numbers back into a string on the other side. "This means the two languages can call each other's functions. But if a function takes or returns anything besides numbers, things get complicated," Clark explains. To get past this hurdle, you either need to write "a really hard-to-use API that only speaks in numbers" or "add glue code for every single environment you want this module to run in."

This is why Clark and her team have come up with WebAssembly Interface Types. They will allow WebAssembly modules to interoperate with modules running in their own native runtimes, as well as with other WebAssembly modules written in different source languages.
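To make the current pain concrete, here is a minimal sketch of the manual marshalling developers do today. It is plain Node.js with no actual Wasm module involved (the function names are made up for illustration); it shows the string-to-numbers round trip that crossing the JS/Wasm boundary currently requires:

```javascript
// Today, Wasm functions can only exchange numbers, so a string must be
// manually encoded into an array of byte values before crossing the
// boundary, and decoded back on the other side. TextEncoder/TextDecoder
// (built into Node and browsers) do the UTF-8 conversion.
function toWasmNumbers(str) {
  // Encode the string into UTF-8 bytes, then widen to a plain array.
  return Array.from(new TextEncoder().encode(str));
}

function fromWasmNumbers(numbers) {
  // Decode the byte values back into a JS string.
  return new TextDecoder().decode(Uint8Array.from(numbers));
}

const bytes = toWasmNumbers("hi!");
console.log(bytes);                  // [ 104, 105, 33 ]
console.log(fromWasmNumbers(bytes)); // "hi!"
```

Interface types aim to make exactly this kind of glue unnecessary by copying high-level values between the two sides automatically.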
Modules will also be able to talk directly with host systems. All of this will be achieved using rich APIs and complex types.

Source: Mozilla

WebAssembly Interface Types are different from the types WebAssembly has today, and no new operations will be added to WebAssembly because of them: all operations are performed on the concrete types on both communicating sides. Explaining how this will work, Clark wrote, "There's one key point that makes this possible: with interface types, the two sides aren't trying to share a representation. Instead, the default is to copy values between one side and the other."

What WebAssembly developers think about this proposal

The news sparked a discussion on Hacker News. One user commented that this could prevent a lot of rewrites and duplication in the future:

"I'm very happy to see the WebIDL proposal replaced with something generalized. The article brings up an interesting point: WebAssembly really could enable seamless cross-language integration in the future. Writing a project in Rust, but really want to use that popular face detector written in Python? And maybe the niche language tokenizer written in PHP? And sprinkle ffmpeg on top, without the hassle of target-compatible compilation and worrying about use after free vulnerabilities? No problem use one of the many WASM runtimes popping up and combine all those libraries by using their pre-compiled WASM packages distributed on a package repo like WAPM, with auto-generated bindings that provide a decent API from your host language."

Another user added, "Of course, cross-language interfaces will always have tradeoffs. But we see Interface Types extending the space where the tradeoffs are worthwhile, especially in combination with wasm's sandboxing."

Some users are unsure that this will actually work in practice. Here's what a Reddit user said: "I wonder how well this will work in practice.
effectively this is attempting to be universal language interop. that is a bold goal. I suspect this will never work for complicated object graphs. maybe this is for numbers and strings only. I wonder if something like protobuf wouldn't actually be better. it looked from the graphics that memory is still copied anyway (which makes sense, eg going from a cstring to a java string), but this is still marshalling. maybe you can skip this in some cases, but is that important enough to hinge the design there?"

To get a deeper understanding of WebAssembly Interface Types, watch this explainer video by Mozilla: https://www.youtube.com/watch?time_continue=17&v=Qn_4F3foB3Q

Also, check out Lin Clark's article, WebAssembly Interface Types: Interoperate with All the Things.

Wasmer introduces WebAssembly Interfaces for validating the imports and exports of a Wasm module

Fastly CTO Tyler McMullen on Lucet and the future of WebAssembly and Rust [Interview]

LLVM WebAssembly backend will soon become Emscripten's default backend, V8 announces


Bitbucket to no longer support Mercurial, users must migrate to Git by May 2020

Fatema Patrawala
21 Aug 2019
6 min read
Yesterday marked the end of an era for Mercurial users, as Bitbucket announced it will no longer support Mercurial repositories after May 2020. Bitbucket, owned by Atlassian, is a web-based version control repository hosting service for source code and development projects. It has supported Mercurial since its launch in 2008, and Git since October 2011. Now, after almost ten years of sharing its journey with Mercurial, the Bitbucket team has decided to remove Mercurial support from Bitbucket Cloud and its API. The official announcement reads, "Mercurial features and repositories will be officially removed from Bitbucket and its API on June 1, 2020."

The Bitbucket team also communicated the timeline for sunsetting the Mercurial functionality. After February 1, 2020, users will no longer be able to create new Mercurial repositories. After June 1, 2020, users will not be able to use Mercurial features in Bitbucket or via its API, and all Mercurial repositories will be removed. All current Mercurial functionality in Bitbucket will remain available through May 31, 2020.

The team said the decision was not an easy one for them, and that Mercurial held a special place in their hearts. But according to a Stack Overflow Developer Survey, almost 90% of developers use Git, while Mercurial is the least popular version control system with only about 3% developer adoption. Mercurial usage on Bitbucket has also seen a steady decline, with the percentage of new Bitbucket users choosing Mercurial falling to less than 1%. Hence the decision to remove Mercurial repos.

How can users migrate and export their Mercurial repos

The Bitbucket team recommends that users migrate their existing Mercurial repos to Git. They have also extended support for migration and kept the available options open for discussion in a dedicated Community thread, where users can discuss conversion tools and migration tips, and offer troubleshooting help.
If users prefer to continue using Mercurial, there are a number of free and paid Mercurial hosting services available to them. The Bitbucket team has also created a Git tutorial that covers everything from the basics of creating pull requests to rebasing and Git hooks.

Community shows anger and sadness over the decision to discontinue Mercurial support

Mercurial users are extremely unhappy and sad about this decision by Bitbucket, and have expressed their anger not just on one platform but across multiple forums and community discussions. Users feel that Bitbucket's decision to stop offering Mercurial support is bad, but the decision to also delete the repos is evil.

On Hacker News, users speculated that this decision was influenced by market potential rather than by technically superior architecture and ease of use. They feel GitHub has successfully marketed Git, and that is how the two have become synonymous in the developer community. One of them comments:

"It's very sad to see bitbucket dropping mercurial support. Now only Facebook and volunteers are keeping mercurial alive. Sometimes technically better architecture and user interface lose to a non user friendly hard solutions due to inertia of mass adoption. So a lesson in Software development is similar to betamax and VHS, so marketing is still a winner over technically superior architecture and ease of use. GitHub successfully marketed git, so git and GitHub are synonymous for most developers. Now majority of open source projects are reliant on a single proprietary solution Github by Microsoft, for managing code and project. Can understand the difficulty of bitbucket, when Python language itself moved out of mercurial due to the same inertia. Hopefully gitlab can come out with mercurial support to migrate projects using it from bitbucket."

Another user comments that Mercurial support was the only reason for him to use Bitbucket, when GitHub is miles ahead of Bitbucket.
Now that it is dropping Mercurial too, he believes Bitbucket will end soon. The comment reads:

"Mercurial support was the one reason for me to still use Bitbucket: there is no other Bitbucket feature I can think of that Github doesn't already have, while Github's community is miles ahead since everyone and their dog is already there. More importantly, Bitbucket leaves the migration to you (if I read the article correctly). Once I download my repo and convert it to git, why would I stay with the company that just made me go through an annoying (and often painful) process, when I can migrate to Github with the exact same command? And why isn't there a "migrate this repo to git" button right there? I want to believe that Bitbucket has smart people and that this choice is a good one. But I'm with you there - to me, this definitely looks like Bitbucket will die."

On Reddit, programmers see this as a big change, since Bitbucket is the major Mercurial hosting provider. They also feel Bitbucket announced this at pretty short notice, and that users need more time for migration.

Beyond the developer community forums, users have expressed displeasure on the Atlassian community blog as well. A team of scientists commented:

"Let's get this straight : Bitbucket (offering hosting support for Mercurial projects) was acquired by Atlassian in September 2010. Nine years later Atlassian decides to drop Mercurial support and delete all Mercurial repositories. Atlassian, I hate you :-) The image you have for me is that of a harmful predator. We are a team of scientists working in a university. We don't have computer scientists, we managed to use a version control simple as Mercurial, and it was a hard work to make all scientists in our team to use a version control system (even as simple as Mercurial). We don't have the time nor the energy to switch to another version control system. But we will, forced and obliged.
I really don't want to check out Github or something else to migrate our projects there, but we will, forced and obliged."

Atlassian Bitbucket, GitHub, and GitLab take collective steps against the Git ransomware attack

Attackers wiped many GitHub, GitLab, and Bitbucket repos with 'compromised' valid credentials leaving behind a ransom note

BitBucket goes down for over an hour


Łukasz Langa at PyLondinium19: “If Python stays synonymous with CPython for too long, we’ll be in big trouble”

Sugandha Lahoti
13 Aug 2019
7 min read
PyLondinium, the conference for Python developers, was held in London from the 14th to the 16th of June, 2019. In the Sunday keynote, Łukasz Langa, the creator of Black (the Python code formatter) and the Python 3.8 release manager, spoke about where Python could be in 2020 and why Python developers should try new browser- and mobile-friendly versions of Python.

Python is an extremely expressive language, says Łukasz. "When I first started I was amazed how much you can accomplish with just a few lines of code especially compared to Java. But there are still languages that are even more expressive and enable even more compact notation." So what makes Python special? Python is runnable pseudocode; it reads like English; it is very elegant. "Our responsibility as developers," Łukasz mentions, "is to make Python's runnable pseudocode convenient to use for new programmers."

Python has gotten much bigger, more stable, and more complex in the last decade. However, the most low-hanging fruit, Łukasz says, has already been picked, and what's left is the maintenance of an increasingly fossilizing interpreter and a stunted standard library. This maintenance is both tedious and tricky, especially for a dynamic interpreted language like Python.

Python being a community-run project is both a blessing and a curse

Łukasz talks about how Python is the biggest community-run programming language on the planet. Other programming languages with similar or larger market penetration are either run by single corporations or by multiple committees. Being a community project is both a blessing and a curse for Python, says Łukasz. It's a blessing because it's truly free from shareholder pressure and market swings. It's a curse because almost the entire core developer team is volunteering their time and effort for free, and while the Python Software Foundation graciously funds infrastructure and events, it does not currently employ any core developers.
Since both "Python" and "software" are right in the name of the foundation, Łukasz says he wants this to change. "If you don't pay people, you have no influence over what they work on. Core developers often choose problems to tackle based on what inspires them personally. So we never had an explicit roadmap on where Python should go and what problems developers should focus on," he adds. Python is no longer governed by a BDFL, says Łukasz: "My personal hope is that the steering council will be providing visionary guidance from now on and will present us with an explicit roadmap on where we should go."

Interesting and dead projects in Python

Łukasz talked about mypyc and invited people to work on and contribute to the project, and organizations to sponsor it. Mypyc is a compiler that compiles mypy-annotated, statically typed Python modules into CPython C extensions. This restricts the Python language to enable compilation; mypyc supports a subset of Python.

He also mentioned MicroPython, a Kickstarter-funded subset of Python optimized to run on microcontrollers and other constrained environments. It is a compatible runtime for microcontrollers with very little memory (16 kilobytes of RAM and 256 kilobytes of code memory) and minimal computing power. He also talked about micro:bit.

He also mentioned many dead, dying, or defunct projects for alternative Python interpreters, including Unladen Swallow, Pyston, and IronPython. He talked about PyPy, the JIT Python compiler written in Python. Łukasz mentions that since PyPy is written in Python 2, it is one of the most complex Python 2 applications in the industry. "This is at risk at the moment," says Łukasz, "since it's a large Python 2 codebase that needs updating to Python 3. Without a tremendous investment, it is very unlikely to ever migrate to Python 3." Trying to replicate CPython quirks and bugs also requires a lot of effort.
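As a rough illustration of what mypyc consumes (the function and numbers here are made up), a mypy-annotated module is just ordinary Python with static type annotations. Mypyc can compile such a module to a C extension, but the same file also runs unchanged under plain CPython:

```python
# A statically typed module of the kind mypyc can compile to a CPython
# C extension. The annotations use the standard typing syntax, so the
# file runs unchanged under the regular interpreter as well.
from typing import List

def mean(values: List[float]) -> float:
    # Annotated locals let the compiler work with unboxed native types.
    total: float = 0.0
    for v in values:
        total += v
    return total / len(values)

print(mean([1.0, 2.0, 3.0]))  # 2.0
```

This restriction to a statically checkable subset is what makes compilation tractable, which is also the trade-off MicroPython and similar restricted runtimes make.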
Python should be aligned with where developer trends are shifting

Łukasz believes that a stronger division between the language and its reference implementation is important for Python. He declared, "If Python stays synonymous with CPython for too long, we'll be in big trouble." This is because CPython is not available where developer trends are shifting. For the web, the lingua franca is now JavaScript. For the two biggest mobile operating systems, there are Swift, the modern take on Objective-C, and Kotlin, the modern take on Java. For VR, AR, and 3D games, there is C#, provided by Unity.

While Python is growing fast, it's not winning ground in two big areas: the browser and mobile. Python is also slowly losing ground in the field of systems orchestration, where Go is gaining traction. He adds, "if there were not the rise of machine learning and artificial intelligence, Python would have not survived the transition between Python 2 and Python 3."

Łukasz mentions that providing a clear, supported, and official option for the client-side web is what Python needs in order to satisfy the legion of people that want to use it. He says, "for Python, the programming language, to reach new heights we need a new kind of Python. One that caters to where developer trends are shifting - mobile, web, VR, AR, and 3D games." There should be more projects experimenting with Python for these platforms, especially with restricted versions of the language, because they are easier to optimize.

We need a Python compiler for the web and Python on mobile

Łukasz talked about the need to move to where developer trends are shifting. He says we need a Python compiler for the web - something that compiles your Python code to the web platform directly.
He also adds that, to be viable for professional production use, Python on the web must not be orders of magnitude slower than the default option (JavaScript), which is already better supported and has better documentation and training. Similarly, for mobile he wants small Python applications, so that apps start fast and have quick user interactions. He gives the example of the Go programming language, stating how "one of Go's claims to fame is the fact that they shipped static binaries so you only have one file. You can choose to still use containers but it's not necessary; you don't have virtualenvs, you don't have pip installs, and you don't have environments that you have to orchestrate."

Łukasz further adds that the areas of modern focus where Python currently has no penetration don't require full compatibility with CPython. Starting out with a familiar subset of Python that looks like Python to the user would simplify the development of a new runtime or compiler a lot, and could potentially fit the target platform better.

What if I want to work on CPython?

Łukasz says that developers can still work on CPython if they want to. "I'm not saying that CPython is a dead end; it will forever be an important runtime for Python. New people are still both welcome and needed, in fact. However, working on CPython today is different from working on it ten years ago; the runtime is mission-critical in many industries, which is why developers must be extremely careful."

Łukasz sums up his talk by declaring, "I strongly believe that enabling Python on new platforms is an important job. I'm not saying Python as the entire programming language should just abandon what it is now. I would prefer for us to be able to keep Python exactly as it is and just move it to all new platforms. Albeit, it is not possible without multi-million dollar investments over many years."

The talk was well appreciated by Twitter users, with people lauding it as 'fantastic' and 'enlightening'.
https://twitter.com/WillingCarol/status/1156411772472971264
https://twitter.com/freakboy3742/status/1156365742435995648
https://twitter.com/jezdez/status/1156584209366081536

You can watch the full keynote on YouTube.

NumPy 1.17.0 is here, officially drops Python 2.7 support pushing forward Python 3 adoption

Python 3.8 new features: the walrus operator, positional-only parameters, and much more

Introducing PyOxidizer, an open source utility for producing standalone Python applications, written in Rust


Julia co-creator, Jeff Bezanson, on what’s wrong with Julialang and how to tackle issues like modularity and extension

Vincy Davis
08 Aug 2019
5 min read
The Julia language, which has been touted as the new fastest-growing programming language, held its 6th annual JuliaCon from July 22nd to 26th, 2019 in Baltimore, USA. On the fourth day of the conference, Jeff Bezanson, co-creator of the Julia language and co-founder of Julia Computing, gave a talk titled "What's bad about Julia". Bezanson began with a disclaimer that he was mentioning only those bad things in Julia that he is currently aware of. He then listed the most popular issues with the language.

What's wrong with Julia

Compiler latency: Compiler latency has been one of the highest-priority issues in Julia. Julia is a lot slower here than other languages like Python (~27x slower) or C (~187x slower).

Static compilation support: Julia can of course be compiled, but unlike a language like C, which is compiled before execution, Julia is compiled at runtime, so it provides poor support for static compilation.

Immutable arrays: Many developers have contributed immutable array packages; however, many packages assume mutability by default, resulting in more work for users. Julia users have therefore been requesting better support for immutable arrays.

Mutation issues: This is a common stumbling block for Julia developers, as many complain that it is difficult to identify what is safe to mutate.

Array optimizations: To get high-performance array code, Julia users have to manually write in-place operations.

Better traits: Users have been requesting more traits in Julia, to avoid big unions listing all the examples of a type where a declaration would do. This has been a big issue in array code and linear algebra.

Incomplete notations: Some notations in Julia are incomplete, for example for N-d arrays.

Many members of the audience agreed with Bezanson's list and appreciated his frankness in acknowledging the problems in Julia.
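The "manual in-place operations" point can be made concrete with a small sketch (the function names here are illustrative, not from the talk). To avoid allocating a fresh array on every call, Julia users conventionally write a second, mutating variant of a function:

```julia
# Allocating version: builds a brand-new result array on every call.
add(x, y) = x .+ y

# In-place version users write by hand for performance: broadcasting
# with .= fills a preallocated output buffer instead of allocating.
function add!(out, x, y)
    out .= x .+ y
    return out
end

x, y = rand(1000), rand(1000)
buf = similar(x)        # preallocate once...
add!(buf, x, y)         # ...then reuse it in hot loops, allocation-free
```

Having to maintain both variants by hand is exactly the ergonomic cost Bezanson lists under array optimizations.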
In this talk, Bezanson opted to explore two less-discussed Julia issues, modularity and extension, which he says are weird and worrisome even to him.

How to tackle modularity and extension issues in Julia

A typical Julia module extends functions from another module. This lets users compose many things and get lots of new functionality for free. But what if a user wants a separately compiled module: one that is completely sealed, predictable, and needs less time to compile, like an isolated module?

Bezanson illustrated how the two issues of modularity and extension show up in Julia code. He started with two unrelated packages that communicate with each other by extending functions in a shared base package. This scenario, he states, is common in a core module that provides a few primitives like the Any type, the Int type, and others. The two packages in the core module are Core.Compiler and Base, each with its own definitions. The two packages have some code in common, which requires writing the same code twice in both packages; this, Bezanson thinks, is "fine".

The deeper problem, Bezanson says, is typeof in the core module. Since both packages need to define constructors for their own types, it is not possible to isolate those constructors: everything except the constructors is isolated between the two packages. He adds, "In practice, it doesn't really matter because the types are different, so they can be distinguished just fine, but I find it annoying that we can't sort of isolate those method tables of the constructors. I find it kind of unsatisfying that there's just this one exception."

Bezanson then explained how types can be described using different representations and extensions. Later, he provided two rules for tackling method specificity issues in Julia.
The first rule: a method is more specific if its signature is a strict subtype (<:, not ==) of another signature. The second rule, according to Bezanson, is that ambiguity cannot always be avoided: if methods overlap in their arguments and have no specificity relationship, Julia has to raise an ambiguity error. This keeps users on the safe side, since it assumes signatures may overlap; and if two signatures are equivalent, "then it does not matter which signature is called", adds Bezanson.

Finally, after explaining the workarounds for these issues, Bezanson concluded that "Julia is not that bad", stating that the Julia language could be a lot better and that the team is trying its best to tackle all the issues.

Watch the video below to see all the illustrations Bezanson demonstrated during his talk.

https://www.youtube.com/watch?v=TPuJsgyu87U

Julia users around the world have loved Bezanson's honest and frank talk at JuliaCon 2019.

https://twitter.com/MoseGiordano/status/1154371462205231109
https://twitter.com/johnmyleswhite/status/1154726738292891648

Read More

Julia announces the preview of multi-threaded task parallelism in alpha release v1.3.0

Mozilla is funding a project for bringing Julia to Firefox and the general browser environment

Creating a basic Julia project for loading and saving data [Tutorial]
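The two specificity rules described above can be sketched in a few lines of Julia (the function names are made up for illustration):

```julia
# Rule 1: a strictly narrower signature wins dispatch.
f(x::Integer) = "Integer method"
f(x::Int)     = "Int method"   # Int <: Integer, so this is more specific
@assert f(1) == "Int method"

# Rule 2: overlapping signatures with no specificity relationship
# cannot be ranked, so calling into the overlap is an ambiguity error.
g(x::Integer, y) = 1
g(x, y::Integer) = 2
# g(1, 2) matches both methods, neither is more specific,
# and Julia throws a MethodError reporting the ambiguity.
```

This is the "safer side" behavior Bezanson describes: rather than silently picking one of two equally plausible methods, Julia forces the programmer to resolve the overlap.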


OpenJDK Project Valhalla is ready for developers working in building data structures or compiler runtime libraries

Vincy Davis
01 Aug 2019
4 min read
This year, the JVM Language Summit 2019 was held July 29th to 31st in Santa Clara, California. On the first day of the summit, Oracle Java language architect Brian Goetz gave a talk on updates to OpenJDK Project Valhalla. He shared details on its progress, the challenges being faced, and what to expect from Project Valhalla in the future, and he talked about the significance of Project Valhalla's LW2 phase, released earlier last month. OpenJDK Project Valhalla, Goetz concluded, is now ready for developers to use for early-adopter experimentation in data structures and language runtimes.

The main goal of OpenJDK Project Valhalla is to reboot the Java Virtual Machine's relationship with data and memory, and in particular to enable denser and flatter layouts of object graphs in memory. The major restriction on object layout has been object identity. Object identity enables mutability, layout polymorphism, and locking, among other things. Not all objects need identity, but it would be impractical for the JVM to determine on its own whether identity is relevant; Goetz therefore expects programmers to declare this property of a class, making it possible to make a broader range of assumptions about it.

Who cares about value types?

Goetz believes that value types matter for the many applications whose writers desire better control of memory layout and want to use memory wisely. Library writers prefer value types because they allow use of all the traditional abstractions without paying the runtime cost of an extra indirection every time somebody uses a particular abstraction; library classes like Optional, cursors, or better numerics thus would not have to pay the object tax. Similarly, compiler writers for non-Java languages would use value types as an efficient substrate for language features like tuples, multiple return values, built-in numeric types, and wrapped native resources.
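The object-identity cost is visible in today's Java (a hedged sketch, not Valhalla code): boxed Integers carry identity, so reference equality diverges from value equality, and every box is a separate heap object behind an indirection. This is exactly the overhead value types aim to remove.

```java
// Identity vs. value: two boxed Integers with the same value are
// distinct heap objects, so == (identity comparison) and equals()
// (value comparison) disagree. A value type would have no identity,
// allowing a flat, pointer-free layout instead of a boxed one.
public class IdentityTax {
    static boolean sameIdentity(Integer a, Integer b) {
        return a == b;  // compares references, not values
    }

    public static void main(String[] args) {
        Integer a = Integer.valueOf(1000);  // above the small-int cache
        Integer b = Integer.valueOf(1000);
        System.out.println(sameIdentity(a, b)); // false: two objects
        System.out.println(a.equals(b));        // true: same value
    }
}
```

Note the values are deliberately outside Java's small-integer cache (-128 to 127); cached boxes would be the same object and mask the point.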
Today, both library writers and compiler writers, along with their users, pay the object tax. Value types, in a nutshell, can help programmers make their code run faster.

Erased and specialized generics

Currently, OpenJDK Project Valhalla uses erased generics and will eventually have specialized generics. With erased generics, Valhalla uses a "knowable type" convention in which the erased list of values is written Foo<V?>; this too will move to specialized generics later on. Goetz adds that this syntax cannot be used as of now, because the Valhalla team does not want existing utterances of Foo to spontaneously change their meaning. Goetz hopes that the migration of generic classes like ArrayList<T> to specialized generics will be painless.

New top types

Project Valhalla needs new top types, RefObject and ValObject, for references and values, because types are used to indicate a programmer's intent. They help the object model reflect the new reality: everything is an object, but not every object needs an identity. Reflecting ref-ness and val-ness in the type system has many benefits:

Dynamically ask x instanceof RefObject
Statically constrain method parameters or return values
Restrict type parameters
A natural place to hang ref- or val-specific behavior

Nullity

Nullity is labelled as one of the most controversial issues in Valhalla. Many value types use all their bit patterns, so nullability interferes with a number of useful optimizations; on the other hand, if some existing types are migrated towards values, existing code will assume they are nullable. Nullity is expected to be a focus of the L3 investigation.

What to expect next in Project Valhalla

Lastly, Goetz announced that developers building data structures or compiler runtime libraries can start using Project Valhalla.
He also added that the Project Valhalla team is working hard to validate the current programming model by quantifying the costs of equality, covariance, and so on, and is trying to improve the user control experience. Goetz concluded by stating that OpenJDK Project Valhalla is at an inflection point, and that future builds will try to figure out nullity, migration, specialized generics, and support for Graal.

You can watch Brian Goetz's full talk for more details.

Getting started with Z Garbage Collector (ZGC) in Java 11 [Tutorial]

Storm 2.0.0 releases with Java enabled architecture, new core and streams API, and more

Brian Goetz on Java futures at FOSDEM 2019


GopherCon 2019: Go 2 update, open-source Go library for GUI, support for WebAssembly, TinyGo for microcontrollers and more

Fatema Patrawala
30 Jul 2019
9 min read
Last week, Go programmers had a gala time learning, networking, and programming at the Marriott Marquis San Diego Marina, as the much-awaited GopherCon 2019 ran from July 24th to 27th. GopherCon this year hit the road in San Diego with some exceptional talks and many exciting announcements for more than 1,800 attendees from around the world. One of the attendees, Andrea Santillana Fernández, says the Go community is growing and doing quite well: she wrote on the Sourcegraph blog that there are 1 million Go programmers around the world, and membership keeps increasing month on month. So what did this year's GopherCon 2019 have in store for programmers?

On the road to Go 2

The major milestones on the journey to Go 2 were presented by Russ Cox on Wednesday. He explained the main areas of focus for Go 2, which are as below.

Error handling

Russ notes that writing a program correctly without considering errors is hard, but writing a program that correctly accounts for errors and external dependencies is much more difficult. He walked through the cases that led to the error-handling helpers introduced in Go 1.13: an optional Unwrap interface, errors.Is, and errors.As.

Generics

Russ spoke about generics and said the team has been exploring a new design since last year. They are working with programming language theory experts to help refine the generics proposal for Go. In a separate session, Ian Lance Taylor introduced generic code in Go. He briefly explained the need for, implementation of, and benefits of generics for the Go language. Taylor then reviewed the Go contracts design draft, which proposes adding optional type parameters to types and functions.
Taylor defined generics as “Generic programming which enables the representation of functions and data structures in a generic form, with types factored out.” Generic code is written using types that are specified later; an unspecified type is called a type parameter. A type parameter is supported only when permitted by contracts. Generic code provides a strong basis for sharing code and building programs. It can be compiled using an interface-based approach, which saves time as the package is compiled only once; if generic code is compiled multiple times, it can carry a compile-time cost. Ian showed a few sample codes written with generics in Go.

Dependency management

In Go 2 the team wants to focus on dependency management and explicitly refer to dependencies, similar to Java. Russ explained this by giving some history: in 2011 they introduced GOPATH to separate the distribution from the actual dependencies, so that users could run multiple different distributions and separate the concerns of the distribution from the external libraries. Then in 2015, they introduced the go vendor spec to formalize the vendor directory and simplify dependency management implementations, but in practice it did not work well. In 2016, they formed the dependency working group. This team started work on dep, a tool to reshape all the existing tools into one. The problem with dep and the vendor directory was that multiple distinct incompatible versions of a dependency were represented by one import path. This is now called the "Import Compatibility Rule". The team took what worked well and learned from vgo. vgo provides package uniqueness without breaking builds, and dictates different import paths for incompatible package versions. The team grouped similar packages and gave these groups a name: modules. The vgo system is now Go modules, and it integrates directly with the go command. The challenge going forward is mostly around updating everything to use modules.
Everything needs to be updated to the new conventions to work well.

Tooling

Finally, as a result of all these changes, they distilled and refined the Go toolchain. One example of this is gopls, or "Go Please". Gopls aims to create a smoother, standard interface to integrate with all editors, IDEs, continuous integration and others.

Simple, portable and efficient graphical interfaces in Go

Elias Naur presented Gio, a new open source Go library for writing immediate mode GUI programs that run on all the major platforms: Android, iOS/tvOS, macOS, Linux, Windows. The talk covered Gio's unusual design and how it achieves simplicity, portability and performance. Elias said, “I wanted to be able to write a GUI program in Go that I could implement only once and have it work on every platform. This, to me, is the most interesting feature of Gio.”

https://twitter.com/rakyll/status/1154450455214190593

Elias also presented Scatter, a Gio program for end-to-end encrypted messaging over email. Other features of Gio include:

- Immediate mode design
- UI state owned by program
- Only depends on lowest-level platform libraries
- Minimal dependency tree, to keep things as low level as possible
- GPU accelerated vector and text rendering
- Super efficient: no garbage generated in drawing or layout code
- Cross platform (macOS, Linux, Windows, Android, iOS, tvOS, WebAssembly)
- Core is 100% Go, while OS-specific native interfaces are optional

Gopls, new tool serves as a backend for Go editor

Rebecca Stambler mentioned in her presentation that the Go community has built many amazing tools to improve the Go developer experience. However, when a maintainer disappears or a new Go release wreaks havoc, the Go development experience becomes frustrating and complicated. To solve this issue, Rebecca revealed the details behind a new tool: gopls (pronounced as 'go please').
The tool is currently in development by the Go team and community, and it will ultimately serve as the backend for your Go editor. The following functionality is expected from gopls:

- Show me errors, like unused variables or typos
- Autocomplete would be nice
- Function signature help, because we often forget
- While we're at it, hover-accessible "tooltip" documentation in general
- Help me jump to a variable's definition
- An outline of the package structure

Get started with WebAssembly in Go

WebAssembly in Go is here and ready to try! Although the landscape is evolving quickly, the opportunity is huge. The ability to deliver truly portable system binaries could potentially replace JavaScript in the browser, and WebAssembly has the potential to finally realize the goal of being platform agnostic without having to rely on a JVM. In his session, Johan Brandhorst introduced the technology, showed how to get started with WebAssembly and Go, and discussed what is possible today and what will be possible tomorrow. As of Go 1.13, there is experimental support for WebAssembly using the JavaScript interface, but as it is only experimental, using it in production is not recommended. Support for the WASI interface is not currently available but has been planned and may be available as early as Go 1.14.

Better x86 assembly generation from Go

Michael McLoughlin in his presentation made the case for code generation techniques for writing x86 assembly from Go. Michael introduced assembly, assembly in Go, the use cases where you would want to drop into assembly, and techniques for realizing speedups using assembly. He pointed out that most of the time pure Go will be enough for 97% of programs, but there are those 3% of cases where it is warranted; the examples he brought up were crypto, syscalls, and scientific computing. Michael then introduced a package called avo, which makes high-performance Go assembly easier to write.
He said that writing your assembly in Go allows you to realize the benefits of a high-level language, such as code readability, the ability to create loops, variables, and functions, and parameterized code generation, all while still realizing the benefits of writing assembly. Michael concluded the talk with his ideas for the future of avo:

- Use avo in projects, specifically in large crypto implementations
- More architecture support
- Possibly make avo an assembler itself (these kinds of techniques are used in JIT compilers)
- avo-based libraries (avo/std/math/big, avo/std/crypto)

The audience appreciated this talk on Twitter.

https://twitter.com/darethas/status/1155336268076576768

The presentation slides for this are available on the blog.

Miniature version of Golang, TinyGo for microcontrollers

Ron Evans, creator of GoCV and GoBot and a "technologist for hire", introduced TinyGo, which can run directly on microcontrollers like Arduino and more. TinyGo uses the LLVM compiler toolchain to create native code that can run directly even on the smallest of computing devices. Ron demonstrated how Go code can be run on embedded systems using TinyGo, a compiler intended for use on microcontrollers, in WebAssembly (WASM), and for command-line tools. Evans began his presentation by countering the idea that Go, while fast, produces executables too large to run on the smallest computers. While that may be true of the standard Go compiler, TinyGo produces much smaller outputs. For example:

- "Hello World" program compiled using Go 1.12 => 1.1 MB
- The same program compiled using TinyGo 0.7.0 => 12 KB

TinyGo currently lacks support for the full Go language and Go standard library. For example, TinyGo does not have support for the net package, although contributors have created implementations of interfaces that work with the WiFi chip built into Arduino boards. Support for goroutines is also limited, although simple programs usually work.
Evans demonstrated that despite some limitations, thanks to TinyGo the Go language can still be run on embedded systems. Salvador Evans, son of Ron Evans, assisted him with this demonstration. At age 11, he has become the youngest GopherCon speaker so far.

https://twitter.com/erikstmartin/status/1155223328329625600

There were talks by other speakers on topics like improvements in VS Code for Golang, the first open source Golang interpreter with complete support of the language spec, the Athens project (a proxy server in Go), and how mobile development works in Go.

https://twitter.com/ramyanexus/status/1155238591120805888
https://twitter.com/containous/status/1155191121938649091
https://twitter.com/hajimehoshi/status/1155184796386988035

Apart from these, there were a whole lot of other talks at GopherCon 2019. Attendees posted live blogs on various talks, and so far more than 25 blogs have been posted by attendees on the Sourcegraph website.

The Go team shares new proposals planned to be implemented in Go 1.13 and 1.14
Go introduces generic codes and a new contract draft design at GopherCon 2019
Is Golang truly community driven and does it really matter?

Npm Inc. co-founder and Chief data officer quits, leaving the community to question the stability of the JavaScript Registry

Fatema Patrawala
22 Jul 2019
6 min read
On Thursday, The Register reported that Laurie Voss, the co-founder and chief data officer of the JavaScript package registry NPM Inc, has left the company. Voss’s last day in office was 1st July, while he officially announced the news on Thursday. Voss joined NPM in January 2014 and decided to leave the company in early May this year. NPM has faced its share of unrest in the past few months. In March, five NPM employees were fired from the company in an unprofessional and unethical way. Later, three of those employees were revealed to have been involved in unionization and filed complaints against NPM Inc with the National Labor Relations Board (NLRB). Earlier this month, NPM Inc at the third trial settled the labor claims brought by these three former staffers through the NLRB. Voss’s resignation is the third in line, after Rebecca Turner, former core contributor, who resigned in March, and Kat Marchan, former CLI and community architect, who resigned from NPM early this month. Voss writes on his blog, “I joined npm in January of 2014 as co-founder, when it was just some ideals and a handful of servers that were down as often as they were up. In the following five and a half years Registry traffic has grown over 26,000%, and worldwide users from about 1 million back then to more than 11 million today. One of our goals when founding npm Inc. was to make it possible for the Registry to run forever, and I believe we have achieved that goal. While I am parting ways with npm, I look forward to seeing my friends and colleagues continue to grow and change the JavaScript ecosystem for the better.” Voss also told The Register that he supported unions: “As far as the labor dispute goes, I will say that I have always supported unions, I think they're great, and at no point in my time at NPM did anybody come to me proposing a union,” he said. “If they had, I would have been in favor of it.
The whole thing was a total surprise to me.” The Register team spoke to one of the former staffers of NPM, who said employees tend not to talk to management for fear of retaliation, and that Voss seemed uncomfortable defending the company’s recent actions and felt powerless to effect change. In his post, Voss is optimistic about NPM’s business areas: “Our paid products, npm Orgs and npm Enterprise, have tens of thousands of happy users and the revenue from those sustains our core operations.” However, Business Insider reports that a recent NPM Inc funding round raised only enough to continue operating until early 2020.

https://twitter.com/coderbyheart/status/1152453087745007616

A big question on everyone’s mind currently is the stability of the public Node.js registry. Most users in the JavaScript community do not have a fallback in place. While the community views Voss’s resignation with appreciation for his accomplishments, some are disappointed that he could not raise his voice against these odds and had to quit. "Nobody outside of the company, and not everyone within it, fully understands how much Laurie was the brains and the conscience of NPM," Jonathan Cowperthwait, former VP of marketing at NPM Inc, told The Register. CJ Silverio, a principal engineer at Eaze who served as NPM Inc's CTO, said that it’s good that Voss is out, but she wasn't sure whether his absence would matter much to the day-to-day operations of NPM Inc. Silverio was fired from NPM Inc late last year, shortly after CEO Bryan Bogensberger’s arrival. “Bogensberger marginalized him almost immediately to get him out of the way, so the company itself probably won’t notice the departure," she said. "What should affect fundraising is the massive brain drain the company has experienced, with the entire CLI team now gone, and the registry team steadily departing.
At some point they’ll have lost enough institutional knowledge quickly enough that even good new hires will struggle to figure out how to cope." Silverio also mentions that she had heard rumors of eliminating the public registry while only continuing with the paid enterprise service, which would be like killing the company’s own competitive advantage. She says that if the public registry disappears there are alternative projects, like Entropic, spearheaded by Silverio and fellow developer Chris Dickinson. Entropic is available under an open source Apache 2.0 license. Silverio says, "You can depend on packages from any other Entropic instance, and your home instance will mirror all your dependencies for you so you remain self-sufficient." She added that the software will mirror any packages installed by a legacy package manager, which is to say npm. As a result, the more developers use Entropic, the less they'll need NPM Inc's platform to provide a list of available packages. Voss feels the scale of npm is 3x bigger than any other registry, with an extremely fast growth rate of approximately 8% month on month. "Creating a company to manage an open source commons creates some tensions and challenges. It is not a perfect solution, but it is better than any other solution I can think of, and none of the alternatives proposed have struck me as better or even close to equally good," he said. With NPM Inc.'s sustainability at stake, the JavaScript community on Hacker News discussed alternatives in case the public registry comes to an end. One of the comments read, “If it's true that they want to kill the public registry, that means I may need to seriously investigate Entropic as an alternative. I almost feel like migrating away from the normal registry is an ethical issue now. What percentage of popular packages are available in Entropic?
If someone else's repo is not in there, can I add it for them?” Another user responded, “The github registry may be another reasonable alternative... not to mention linking git hashes directly, but that has other issues.” Besides Entropic, another alternative discussed is nixfromnpm, a tool with which you can translate NPM packages to Nix expressions. nixfromnpm is developed by Allen Nelson and two other contributors from Chicago.

Surprise NPM layoffs raise questions about the company culture
Is the Npm 6.9.1 bug a symptom of the organization’s cultural problems?
Npm Inc, after a third try, settles former employee claims, who were fired for being pro-union, The Register reports

Python 3.8 new features: the walrus operator, positional-only parameters, and much more

Bhagyashree R
18 Jul 2019
5 min read
Earlier this month, the team behind Python announced the release of Python 3.8b2, the second of four planned beta releases. Ahead of the third beta release, which is scheduled for 29th July, we look at some of the key features coming to Python 3.8.

The "incredibly controversial" walrus operator

The walrus operator was proposed in PEP 572 (Assignment Expressions) by Chris Angelico, Tim Peters, and Guido van Rossum last year. Since then it has been heavily discussed in the Python community, with many questioning whether it is a needed improvement. Others were excited, as the operator does make the code a tiny bit more readable. The PEP discussion ended with Guido van Rossum stepping down as BDFL (benevolent dictator for life) and the creation of a new governance model. In an interview with InfoWorld, Guido shared, “The straw that broke the camel’s back was a very contentious Python enhancement proposal, where after I had accepted it, people went to social media like Twitter and said things that really hurt me personally. And some of the people who said hurtful things were actually core Python developers, so I felt that I didn’t quite have the trust of the Python core developer team anymore.” According to PEP 572, the assignment expression is a syntactical operator that allows you to assign values to a variable as part of an expression. Its aim is to simplify things like multiple-pattern matches and the so-called loop and a half.
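As a rough sketch of those two patterns (assuming Python 3.8+; the function names here are invented for illustration, not taken from the PEP):

```python
import io

# Loop and a half: the read happens once, in the loop condition itself.
def count_chunks(stream, size=4):
    chunks = 0
    while (block := stream.read(size)):  # walrus: assign and test in one step
        chunks += 1
    return chunks

# Comprehension: v / 2 is computed once per element instead of twice.
def halves_over_ten(values):
    return [half for v in values if (half := v / 2) > 10]

print(count_chunks(io.StringIO("abcdefgh")))  # 2 chunks of 4 characters
print(halves_over_ten([8, 30, 50]))           # [15.0, 25.0]
```

Without the operator, the first function would need either a priming read before the loop or a `while True` with a `break`, which is exactly the duplication PEP 572 set out to remove.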
At PyCon 2019, Dustin Ingram, a PyPI maintainer, gave a few examples of where you can use this syntax:

- Balancing lines of code and complexity
- Avoiding inefficient comprehensions
- Avoiding unnecessary variables in scope

You can watch the full talk on YouTube: https://www.youtube.com/watch?v=6uAvHOKofws

The feature was implemented by Emily Morehouse, Python core developer and Founder/Director of Engineering at Cuttlesoft, and was merged earlier this year:

https://twitter.com/emilyemorehouse/status/1088593522142339072

Explaining the other improvements this feature brings, Jake Edge, a contributor on LWN.net, wrote, “These and other uses (e.g. in list and dict comprehensions) help make the intent of the programmer clearer. It is a feature that many other languages have, but Python has, of course, gone without it for nearly 30 years at this point. In the end, it is actually a fairly small change for all of the uproar it caused.”

Positional-only parameters

Proposed in PEP 570, this introduces a new syntax (/) to specify positional-only parameters in Python function definitions. This is similar to how * indicates that the arguments to its right are keyword-only. This syntax is already used by many CPython built-in and standard library functions, for instance, the pow() function: pow(x, y, z=None, /)

This syntax gives library authors more control over better expressing the intended usage of an API and allows the API to “evolve in a safe, backward-compatible way.” It gives library authors the flexibility to change the name of positional-only parameters without breaking callers. Additionally, this also ensures consistency of the Python language with existing documentation and the behavior of various "builtin" and standard library functions. As with PEP 572, this proposal also got mixed reactions from Python developers. In support, one developer said, “Position-only parameters already exist in cpython builtins like range and min.
Making their support at the language level would make their existence less confusing and documented.” Others, however, think that this will allow authors to “dictate” how their methods can be used. “Not the biggest fan of this one because it allows library authors to overly dictate how their functions can be used, as in, mark an argument as positional merely because they want to. But cool all the same,” a Redditor commented.

Debug support for f-strings

Formatted strings (f-strings) were introduced in Python 3.6 with PEP 498. They enable you to evaluate an expression as part of the string, along with inserting the result of function calls and so on. In Python 3.8, an additional syntax change has been made by adding an = specifier for ease of debugging. You can use this feature like this: print(f'{foo=} {bar=}')

This provides developers a better way of doing “print-style debugging”, especially those who have a background in languages that already have such a feature, such as Perl, Ruby, JavaScript, etc. One developer expressed his delight on Hacker News, “F strings are pretty awesome. I’m coming from JavaScript and partially java background. JavaScript’s String concatenation can become too complex and I have difficulty with large strings.”

Python Initialization Configuration

Though Python is highly configurable, its configuration is scattered all around the code. PEP 587 introduces a new C API to configure the Python initialization, giving developers finer control over the configuration and better error reporting. Among the improvements this API will bring are the ability to read and modify configuration before it is applied, and the ability to override how Python computes the module search paths (``sys.path``).
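The two new syntax features described above, positional-only parameters and the f-string = specifier, can be tried together in a few lines (a sketch for Python 3.8+; the clamp function is an invented example, not something from the PEPs):

```python
# Everything before "/" is positional-only, mirroring builtins like pow().
def clamp(value, low, high, /):
    return max(low, min(value, high))

result = clamp(15, 0, 10)   # fine: positional arguments
print(f'{result=}')          # the = specifier prints: result=10

try:
    clamp(value=15, low=0, high=10)   # keywords are rejected for these names
except TypeError:
    print('positional-only parameters cannot be passed by keyword')
```

Because the parameter names are not part of the public API here, the author could later rename them without breaking any caller, which is the flexibility PEP 570 describes.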
Along with these, there are many other exciting features coming to Python 3.8, which is currently scheduled for October, including a fast calling protocol for CPython (Vectorcall), support for out-of-band buffers in pickle protocol 5, and more. You can find the full list on Python’s official website.

Python serious about diversity, dumps offensive ‘master’, ‘slave’ terms in its documentation
Introducing PyOxidizer, an open source utility for producing standalone Python applications, written in Rust
Python 3.8 beta 1 is now ready for you to test

Microsoft mulls replacing C and C++ code with Rust calling it a "modern safer system programming language" with great memory safety features

Vincy Davis
18 Jul 2019
3 min read
Here's another reason why Rust is the present and the future of programming. A few days ago, Microsoft announced that it is going to start exploring Rust as an alternative to its own C and C++ code. This announcement was made by the Principal Security Engineering Manager of the Microsoft Security Response Centre (MSRC), Gavin Thomas. Thomas states that ~70% of the vulnerabilities to which Microsoft assigns a CVE each year are caused by developers accidentally inserting memory corruption bugs into their C and C++ code. He adds, "As Microsoft increases its code base and uses more Open Source Software in its code, this problem isn’t getting better, it's getting worse. And Microsoft isn’t the only one exposed to memory corruption bugs—those are just the ones that come to MSRC."

Image Source: Microsoft blog

He highlights the fact that even with so many security mechanisms (like static analysis tools, fuzzing at scale, taint analysis, many encyclopaedias of coding guidelines, threat modelling guidance, etc.) to make code secure, developers have to invest a lot of time in learning about more tools for training and vulnerability fixes. Thomas states that though C++ has many qualities, like being fast and mature with a small memory and disk footprint, it does not have the memory safety guarantees of languages like .NET C#. He believes that Rust is one language which can provide both. Thomas strongly advocates that the software security industry should focus on providing a secure environment for developers to work in, rather than turning a deaf ear to the importance of security and sticking with outdated methods and approaches. He concludes by hinting that Microsoft is going to adopt the Rust programming language, asking, "Perhaps it's time to scrap unsafe legacy languages and move on to a modern safer system programming language?" Microsoft exploring Rust is not surprising, as Rust has been popular with many developers for its simpler syntax, fewer bugs, memory safety and thread safety.
It has also been voted the most loved programming language in the 2019 StackOverflow survey, the biggest developer survey on the internet. It allows developers to focus on their applications, rather than worrying about security and maintenance. Recently, many applications have been written in Rust, like Vector, the Brave ad-blocker, PyOxidizer and more. Developers couldn't agree more with this post, as many have expressed their love for Rust.

https://twitter.com/alilleybrinker/status/1151495738158977024
https://twitter.com/karanganesan/status/1151485485644054528
https://twitter.com/shah_sheikh/status/1151457054004875264

A Redditor says, "While this first post is very positive about memory-safe system programming languages in general and Rust in particular, I would not call this an endorsement. Still, great news!" Visit the Microsoft blog for more details.

Introducing Ballista, a distributed compute platform based on Kubernetes and Rust
EU Commission opens an antitrust case against Amazon on grounds of violating EU competition rules
Fastly CTO Tyler McMullen on Lucet and the future of WebAssembly and Rust [Interview]

Is the Npm 6.9.1 bug a symptom of the organization’s cultural problems?

Fatema Patrawala
02 Jul 2019
4 min read
The emergence of worker solidarity and organization throughout the tech industry has been one of the few upsides to a difficult 18 months. And although it might be tempting to see this wave as somehow separate from the technical side of building software, the reality is that worker power - and, indeed, worker safety and respect - are crucial to ensure safe and high quality software. Last week’s npm bug, reported by users last Friday, is a good case in point. It follows a matter of months after news in April of surprise layoffs, and accusations of punitive anti-union actions. It perhaps confirms what one former npm employee told The Register last month: "I think it’s time to break the in-case-of-emergency glass to assess how to keep JavaScript safe… Soon there won’t be any knowledgeable engineers left."

What was the npm 6.9.1 bug?

The npm 6.9.1 bug is complex. There are a number of layers to the issue, some of which relate to earlier iterations of the package manager. For those interested, Rebecca Turner, a former core contributor to npm who resigned her position at npm in March in response to the layoffs, explains in detail how the bug came about: “...npm publish ignores .git folders by default but forces all files named readme to be included… And that forced include overrides the exclude.
And then there was once a remote branch named readme… and that goes in the .git folder, gets included in the publish, which then permanently borks your npm install, because of EISGIT, which in turn is a restriction that’s afaik entirely vestigial, copied forward from earlier versions of npm without clear insight into why you’d want that restriction in the first place.”

Turner says she suspects the bug was “introduced with tar rewrite.” Whoever published it, she goes on to say, must have had a repository with a remote reference and failed to follow the setup guide, “which recommends using a separate copy of the repo for publication.” Kat Marchán, CLI and Community Architect at npm, later confirmed that to fix the issue the team had published npm 6.9.2, but said that users would have to uninstall it manually before upgrading. “We are discussing whether to unpublish 6.9.1 as well, but this should stop any further accidents,” Marchán said.

The impact of npm’s internal issues

The important subplot to all of this is the fact that it appears npm 6.9.1 was delayed because of npm’s internal issues. A post on GitHub by Audrey Eschright, one of the employees currently filing a case against npm with the National Labor Relations Board, explained that work on the open source project had been interrupted because npm’s management had made the decision to remove “core employee contributors to the npm cli.” The implication, then, is that management’s attitude has had a negative impact on npm 6.9.1. If the allegations of ‘union busting’ are true, then it would seem that preventing workers from organizing to protect one another was more important than building robust and secure software. At a more basic level, whatever the reality of the situation, it would seem that npm’s management is unable to cultivate an environment that allows employees to do what they do best.

Why is this significant?

This is ultimately just a story about a bug.
Not all that remarkable. But given the context, it’s significant because it highlights that tech worker organization, and how management responds to it, has a direct link to the quality and reliability of the software we use. If friction persists between the commercial leaders within a company and engineers, software is the thing that’s going to suffer.

Read Next

Surprise NPM layoffs raise questions about the company culture
Former npm CTO introduces Entropic, a federated package registry with a new CLI and much more!
The npm engineering team shares why Rust was the best choice for addressing CPU-bound bottlenecks

The V programming language is now open source - is it too good to be true?

Bhagyashree R
24 Jun 2019
5 min read
Yesterday, a new statically-typed programming language named V was open sourced. It is described as a simple, fast, compiled language for creating maintainable software. Its creator, Alex Medvednikov, says that it is very similar to Go and is inspired by Oberon, Rust, and Swift.

What to expect from the V programming language

Fast compilation

V can compile up to 1.2 million lines of code per second per CPU. It achieves this through direct machine code generation and strong modularity. If we decide to emit C code, the compilation speed drops to approximately 100k lines of code per second per CPU. Medvednikov mentions that direct machine code generation is still in its very early stages and right now only supports x64/Mach-O. He plans to make this feature stable by the end of this year.

Safety

V seems to be an ideal language because it has no null, no global variables, no undefined values, no undefined behavior, no variable shadowing, and it does bounds checking. It supports immutable variables, pure functions, and immutable structs by default. Generics are currently a work in progress and are planned for next month.

Performance

According to the website, V is as fast as C, requires a minimal amount of allocations, and supports built-in serialization without runtime reflection. It compiles to native binaries without any dependencies.

Just a 0.4 MB compiler

Compared to Go, Rust, GCC, and Clang, the space required and build time of V are far smaller. The entire language and standard library is just 400 KB, and you can build it in 0.4s. By the end of this year, the author aims to bring this build time down to 0.15s.

C/C++ translation

V allows you to translate your V code to C or C++. However, this feature is at a very early stage, given that C and C++ are very complex languages. The creator aims to make this feature stable by the end of this year.

What do developers think about this language?
As much as developers would like to have a great language to build applications with, many felt that V is too good to be true. Looking at the claims made on the site, some developers thought that the creator is either not being truthful about the capabilities of V or is scamming people.

https://twitter.com/warnvod/status/1112571835558825986

A language that has the simplicity of Go and the memory management model of Rust is what everyone desires. However, the main reason people are skeptical about V is that there is not much proof behind the hard claims it makes. A user on Hacker News commented, “...V's author makes promises and claims which are then retracted, falsified, or untestable. Most notably, the source for V's toolchain has been teased repeatedly as coming soon but has never been released. Without an open toolchain, none of the claims made on V's front page [2] can be verified.” Another concerning point is that the V programming language is currently in alpha stage and is incomplete. Despite that, the creator is making $827 per month from his Patreon account. “However, advertising a product can do something and then releasing it stating it cannot do it yet, is one thing, but accepting money for a product that does not do what is advertised, is a fraud,” a user commented. Some developers also speculate that the creator may just be embarrassed to open source his code because of bad coding pattern choices. A user speculates, “V is not Free Software, which is disappointing but not atypical; however, V is not even open source, which precludes a healthy community. Additionally, closed languages tend to have bad patterns like code dumps over the wall, poor community communication, untrustworthy binary behaviors, and delayed product/feature releases. Yes, it's certainly embarrassing to have years of history on display for everybody to see, but we all apparently have gotten over it. What's hiding in V's codebase? We don't know.
As a best guess, I think that the author may be ashamed of the particular nature of their bootstrap.” The features listed on the official website are incredible. The only concern was that the creator was not being transparent about how he plans to achieve them. Also, as this was closed source earlier, there was no way for others to verify the performance guarantees it promises that’s why so much confusion happened. Alex Medvednikov on why you can trust V programming On an issue that was reported on GitHub, the creator commented, “So you either believe me or you don't, we'll see who is right in June. But please don't call me a liar, scammer and spread misinformation.” Medvednikov was maybe overwhelmed by the responses and speculations, he was seeing on different discussion forums. Developing a whole new language requires a lot of work and perhaps his deadlines are ambitious. Going by the release announcement Medvednikov made yesterday, he is aware that the language designing process hasn’t been the most elegant version of his vision. He wrote, “There are lots of hacks I'm really embarrassed about, like using os.system() instead of native API calls, especially on Windows. There's a lot of ugly C code with #, which I regret adding at all.” Here’s great advice shared by a developer on V’s GitHub repository: Take your time, good software takes time. It's easy to get overwhelmed building Free software: sometimes it's better to say "no" or "not for now" in order to build great things in the long run :) Visit the official website of the V programming language for more detail. Docker and Microsoft collaborate over WSL 2, future of Docker Desktop for Windows is near Pull Panda is now a part of GitHub; code review workflows now get better! Scala 2.13 is here with overhauled collections, improved compiler performance, and more!

Polyglot programming allows developers to choose the right language to solve tough engineering problems

Richard Gall
11 Jun 2019
9 min read
Programming languages can divide opinion. They are, for many engineers, a mark of identity. Yes, they say something about the kind of work you do, but they also say something about who you are and what you value. But this is changing, with polyglot programming becoming a powerful and important trend. We're moving towards a world in which developers are no longer as loyal to their chosen programming languages as they once were. Instead, they are more flexible and open-minded about the languages they use.

This year's Skill Up report highlights that there are a number of different drivers behind the programming languages developers use, which, in turn, implies a level of contextual decision making. Put simply, developers today are less likely to stick with a specific programming language, and instead move between them depending on the problems they are trying to solve and the tasks they need to accomplish. Download this year's Skill Up report here.

[Skill Up 2019 data]

As the data above shows, languages aren't often determined by organizational requirements. They are more likely to be if you're primarily using Java or C#, but that makes sense as these are languages that have long been associated with proprietary software organizations (Oracle and Microsoft respectively); in fact, programming languages are often chosen due to projects and use cases.

The return to programming language standardization

This is something backed up by the most recent ThoughtWorks Radar, published in April. Polyglot programming finally moved its way into the Adopt 'quadrant', after 9 years of living in the Trial quadrant. Part of the reason for this, ThoughtWorks explains, is that the organization is seeing a reaction against this flexibility, writing that "we're seeing a new push to standardize language stacks by both developers and enterprises." The organization argues - quite rightly - that "promoting a few languages that support different ecosystems or language features is important for both enterprises to accelerate processes and go live more quickly and developers to have the right tools to solve the problem at hand."

Arguably, we're in the midst of a conflict within software engineering. On the one hand, the drive to standardize tooling in the face of increasingly complex distributed systems makes sense; on the other, it's one we should resist, because this level of standardization will ultimately remove decision-making power from engineers.

What's driving polyglot programming?

It's worth digging a little deeper into why developers are starting to be more flexible about the languages they use. One of the most important drivers of this change is the dominance of Agile as a software engineering methodology. As Agile has become embedded in the software industry, software engineers have found themselves working across the stack rather than specializing in a specific part of it.

Full-stack development and polyglot programming

This is something suggested by Stack Overflow survey data. This year 51.9% of developers described themselves as full-stack developers, compared to 50.0% describing themselves as backend developers. This is a big change from 2018, when 57.9% described themselves as backend developers and 48.2% called themselves full-stack developers. Given that earlier Stack Overflow data from 2016 indicates full-stack developers are comfortable using more languages and frameworks than other roles, it's understandable that today we're seeing developers take more ownership and control over the languages (and, indeed, other tools) they use. With developers sitting in small Agile teams, working more closely to problem domains than they may have been a decade ago, the power is now much more in their hands to select and use the programming languages and tools that are most appropriate.

If infrastructure is code, more people are writing code... which means more people are using programming languages

But it's not just about full-stack development. With infrastructure today being treated as code, it makes sense that those responsible for managing and configuring it - sysadmins, SREs, systems engineers - need to use programming languages. This is a dramatic shift in how we think about system administration and infrastructure management; programming languages are important to a whole new group of people.

Python and polyglot programming

The popularity of Python is symptomatic of this industry-wide change. Not only is it a language primarily selected due to use case (as the data above shows), it's also a language that's popular across the industry. When we asked our survey respondents what language they wanted to learn next, Python came out on top regardless of their primary programming language.

[Skill Up 2019 data]

This highlights that Python has appeal across the industry. It doesn't fit neatly into a specific job role, and it isn't designed for a specific task. It's flexible - as developers today need to be. Although it's true that Python's popularity is being driven by machine learning, it would be wrong to see this as the sole driver. It is, in fact, its wide range of use cases - from scripting to building web services and APIs - that is making Python so popular. Indeed, it's worth noting that Python is viewed as a tool as much as it is a programming language. When we specifically asked survey respondents what tools they wanted to learn, Python came up again, suggesting it occupies a category unlike every other programming language.

[Skill Up 2019 data]

What about other programming languages?

The popularity of Python is a perfect starting point for today's polyglot programmer. It's relatively easy to learn, and it can be used for a range of different tasks. But if we're to convincingly talk about a new age of programming, where developers are comfortable using multiple programming languages, we have to look beyond the popularity of Python at other programming languages. Perhaps a good way to do this is to look at the languages developers primarily using Python want to learn next. If you look at the graphic above, there's no clear winner for Python developers. While every other language shows significant interest in Python, Python developers are looking at a range of different languages. This alone isn't evidence of the popularity of polyglot programming, but it does indicate some level of fragmentation in the programming language 'marketplace'. Or, to put it another way, we're moving to a place where it becomes much more difficult to say that given languages are definitive in a specific field.

The popularity of Golang

Go has particular appeal for Python programmers, with almost 20% saying they want to learn it next. This isn't that surprising - Go is a flexible language that has many applications, from microservices to machine learning, but most importantly it can give you incredible performance. With powerful concurrency, goroutines, and garbage collection, it has features designed to ensure application efficiency. Given it was designed by Google, this isn't that surprising - it's almost purpose-built for software engineering today. Its popularity with JavaScript developers further confirms that it holds significant developer mindshare, particularly among those in positions where projects and use cases demand flexibility.

Read next: Is Golang truly community driven and does it really matter?
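The concurrency features credited to Go above - goroutines scheduled by the runtime, coordinated with the standard library's sync package - can be sketched in a few lines. The summing workload here is purely illustrative:

```go
package main

import (
	"fmt"
	"sync"
)

// concurrentSum adds 1..n using one goroutine per number.
// A mutex guards the shared total; a WaitGroup waits for all
// goroutines to finish before the result is read.
func concurrentSum(n int) int {
	var (
		wg    sync.WaitGroup
		mu    sync.Mutex
		total int
	)
	for i := 1; i <= n; i++ {
		wg.Add(1)
		go func(v int) { // each iteration runs concurrently
			defer wg.Done()
			mu.Lock()
			total += v
			mu.Unlock()
		}(i)
	}
	wg.Wait()
	return total
}

func main() {
	fmt.Println(concurrentSum(100)) // prints 5050
}
```

Spawning a goroutine is cheap enough that one-per-work-item is idiomatic here, which is part of why the language appeals for the microservice workloads mentioned above.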
A return to C++

An interesting contrast to the popularity of Go is the relative popularity of C++ in our Skill Up results. C++ is ancient in comparison to Golang, but it nevertheless seems to occupy a similar level of developer mindshare. The reasons are probably similar - it's another language that can give you incredible power and performance. For Python developers, part of the attraction is its usefulness for deep learning (TensorFlow is written in C++). But more than that, C++ is also an important foundational language. While it isn't easy to learn, it does help you to understand some of the fundamentals of software. From this perspective, it provides a useful starting point for learning other languages; it's a vital piece that can unlock the puzzle of polyglot programming.

A more mature JavaScript

JavaScript also came up in our Skill Up survey results. Indeed, Python developers are keen on the language, which tells us something about the types of tasks Python developers are doing as well as the way JavaScript has matured. On the one hand, Python developers are starting to see the value of web-based technologies, while on the other JavaScript is expanding in scope to become much more than just a front-end programming language.

Read next: Is web development dying?

Kotlin and TypeScript

The appearance of other smaller languages in our survey results emphasises the way in which the language ecosystem is fragmenting. TypeScript, for example, may never supplant JavaScript, but it could become an important addition to a developer's skill set if they begin running into problems scaling JavaScript. Kotlin represents something similar for Java developers - indeed, it could eventually outpace its older relative. But again, its popularity will emerge according to specific use cases. It will take hold in particular where Java's limitations become more exposed, such as in modern app development.

Rust: a goldilocks programming language perfect for polyglot programming

One final mention goes to Rust. In many ways Rust's popularity is related to the continued relevance of C++, but it offers some improvements - essentially, it's easier to leverage Rust, while using C++ to its full potential requires experience and skill.

Read next: How Deliveroo migrated from Ruby to Rust without breaking production

One commenter on Hacker News described it as a 'Goldilocks' language - "It's not so alien as to make it inaccessible, while being alien enough that you'll learn something from it." This is arguably what a programming language should be like in a world where polyglot programming rules. It shouldn't be so complex as to consume your time and energy, but it should be sophisticated enough to let you solve difficult engineering problems.

Learning new programming languages makes it easier to solve engineering problems

The value of learning multiple programming languages is indisputable. Python is the language that's changing the game, becoming a vital extra for developers from a range of backgrounds, but there are plenty of other languages that could prove useful. What's ultimately important is to explore the options available and start using a language that's right for you. Indeed, that isn't always immediately obvious - but don't let that put you off. Give yourself some time to explore new languages and find the one that's going to work for you.

TypeScript 3.5 releases with ‘omit’ helper, improved speed, excess property checks and more

Vincy Davis
30 May 2019
5 min read
Yesterday, Daniel Rosenwasser, Program Manager at TypeScript, announced the release of TypeScript 3.5. This release brings additions to the compiler and language, new editor tooling, and some breaking changes as well. Key features include speed improvements, the 'Omit' helper type, improved excess property checks, and more. The previous version, TypeScript 3.4, was released two months ago.

Compiler and Language

Speed improvements

The TypeScript team has been focusing heavily on optimizing certain code paths and stripping down certain functionality since the last release. As a result, TypeScript 3.5 is faster than TypeScript 3.3 for many incremental checks. Compile times have fallen compared to 3.4, and users should also find code completion and other editor operations much 'snappier'. The release additionally caches information about module resolution - why files were looked up and where they were found - so that in some scenarios rebuild times can be reduced by as much as 68% compared to TypeScript 3.4.

The 'Omit' helper type

Users often want to create an object type that omits certain properties of another. TypeScript 3.5 defines its own version of 'Omit' in lib.d.ts, so it can be used everywhere. The compiler itself uses this 'Omit' type to express types created through object rest and destructuring declarations on generics.

Improved excess property checks in union types

TypeScript performs excess property checking on object literals. In earlier versions, certain excess properties were allowed in an object literal typed as a union (such as Point | Label), even if they matched no member of the union. In this new version, the type-checker verifies that every provided property belongs to some union member and has the appropriate type.

The --allowUmdGlobalAccess flag

In TypeScript 3.5, you can now reference UMD global declarations like export as namespace foo from anywhere - even modules - by using the new --allowUmdGlobalAccess flag.

Smarter union type checking

When checking against union types, TypeScript usually compares each constituent type in isolation: assigning source to target typically involves checking whether the type of source is assignable to target. In TypeScript 3.5, when assigning to types with discriminant properties, the language goes further and decomposes the source type into a union of every possible inhabitant type, something that was not possible in previous versions.

Higher order type inference from generic constructors

TypeScript 3.4's inference allowed functions passed to other generic functions to remain generic. In TypeScript 3.5, this behavior is generalized to work on constructor functions as well. This means that functions that operate on class components in certain UI libraries like React can more correctly operate on generic class components.

New Editing Tools

Smart Select

This provides an API for editors to expand text selections farther outward in a syntactical manner. The feature is cross-platform and available to any editor that can query TypeScript's language server.

Extract to type alias

TypeScript 3.5 now supports a useful new refactoring: extracting types to local type aliases. For users who prefer interfaces over type aliases, an open issue also tracks extracting object types to interfaces.

Breaking changes

Generic type parameters are implicitly constrained to unknown

In TypeScript 3.5, generic type parameters without an explicit constraint are now implicitly constrained to unknown, whereas previously the implicit constraint of type parameters was the empty object type {}.

{ [k: string]: unknown } is no longer a wildcard assignment target

TypeScript 3.5 removes the specialized assignability rule that permitted assignment to { [k: string]: unknown }. This change was made because of the change from {} to unknown when generic inference has no candidates. Depending on the intended behavior of { [s: string]: unknown }, several alternatives are available: { [s: string]: any }, { [s: string]: {} }, object, unknown, or any.

Improved excess property checks in union types

If the stricter union checks flag code you believe is valid, you can add a type assertion onto the object (e.g. { myProp: SomeType } as ExpectedType), or add an index signature to the expected type to signal that unspecified properties are expected (e.g. interface ExpectedType { myProp: SomeType; [prop: string]: unknown }).

Fixes to unsound writes to indexed access types

TypeScript allows you to represent the operation of accessing a property of an object via the name of that property. In TypeScript 3.5, previously-unsound writes through indexed access types will correctly issue an error. Most instances of this error represent potential bugs in the relevant code.

Object.keys rejects primitives in ES5

In ECMAScript 5 environments, Object.keys throws an exception if passed any non-object argument. In TypeScript 3.5, if target (or equivalently lib) is ES5, calls to Object.keys must pass a valid object. This change interacts with the change in generic inference from {} to unknown.

The aim of this version of TypeScript is to make the coding experience faster and happier. In the announcement, Daniel also shared the 3.6 iteration plan document and the feature roadmap page, to give users an idea of what's coming in the next version of TypeScript. Users are quite happy with the new additions and breaking changes in TypeScript 3.5.

https://twitter.com/DavidPapp/status/1130939572563697665
https://twitter.com/sebastienlorber/status/1133639683332804608

A user on Reddit comments, "Those are some seriously impressive improvements. I know it's minor, but having Omit built in is just awesome. I'm tired of defining it myself in every project."

To read more details of TypeScript 3.5, head over to the official announcement.
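A short sketch of two of the changes described above - the now built-in Omit helper and the improved excess property checks on unions. The Person, Point, and Label type names are illustrative, not from the announcement:

```typescript
interface Person {
  name: string;
  age: number;
  location: string;
}

// Omit<T, K>, built into lib.d.ts as of 3.5, drops the listed keys.
type NameAndAge = Omit<Person, "location">; // { name: string; age: number }

const p: NameAndAge = { name: "Ada", age: 36 };

// Improved excess property checks: every property of an object literal
// assigned to a union must belong to at least one union member.
type Point = { x: number; y: number };
type Label = { name: string };

const ok: Point | Label = { x: 1, y: 2 };
// const bad: Point | Label = { x: 1, y: 2, extra: true };
// ^ error in 3.5: 'extra' exists on neither 'Point' nor 'Label'

console.log(p, ok);
```

Note that a literal mixing properties from different members (say x, y, and name) still type-checks, since each property belongs to some member; only properties belonging to no member are rejected.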
5 reasons Node.js developers might actually love using Azure [Sponsored by Microsoft]
Introducing InNative, an AOT compiler that runs WebAssembly using LLVM outside the Sandbox at 95% native speed
All Docker versions are now vulnerable to a symlink race attack
Is Golang truly community driven and does it really matter?

Sugandha Lahoti
24 May 2019
6 min read
Golang, also called Go, is a statically typed, compiled programming language designed at Google. Golang is going from strength to strength, as more engineers than ever are using it at work, according to the Go User Survey 2019. One opinion, however, led the Hacker News community into a heated debate last week: "Go is Google's language, not the community's".

The thread was started by Chris Siebenmann, who works at the Department of Computer Science, University of Toronto. His blog post reads, "Go has community contributions but it is not a community project. It is Google's project." Chris explicitly states that the community's voice doesn't matter very much for Go's development, and we have to live with that. He argues that Google is the gatekeeper for community contributions; it alone decides what is and isn't accepted into Go. If a developer wants some significant feature to be accepted into Golang, working to build consensus in the community is far less important than persuading the Golang core team.

He cites the example of how one member of Google's Go core team discarded the entire Go modules system that the Go community had been working on and brought in a radically different model. Chris believes that the Golang team cares about the community and wants them to be involved, but only up to a certain point. He wants the Go core team to be bluntly honest about the situation, rather than pretend and implicitly lead people on. He further adds, "Only if Go core team members start leaving Google and try to remain active in determining Go's direction, can we [be] certain Golang is a community-driven language."

He then compares Go with C++, calling the latter a genuinely community-driven language. He says there are several major implementations of C++ which are genuine community projects, and the direction of C++ is set by an open standards committee with a relatively distributed membership.

https://twitter.com/thatcks/status/1131319904039309312

What is better - community-driven or corporate ownership?

There has been an opinion floating around among developers that some open source programming projects are really commercial projects driven mainly by a single company. If we look at the top open source projects, most have some kind of corporate backing (Apple's Swift, Oracle's Java, MySQL, Microsoft's TypeScript, Google's Kotlin, Golang, and Android, MongoDB, Elasticsearch), to name a few. Which brings us to the question: what does corporate ownership of open source projects really mean?

A benevolent dictatorship can have two outcomes. If the community for a particular project suggests a change that is a bad idea, the corporate team can intervene and stop it. On the other hand, it can also stop good ideas from the community from being implemented, even if only a handful of members from the core team disagree.

Chris's post has received a lot of attention from developers on Hacker News, who both sided with and disagreed with the opinion put forward. One comment reads, "It's important to have a community and to work with it, but, especially for a programming language, there has to be a clear concept of which features should be implemented and which not - just accepting community contributions for the sake of making the community feel good would be the wrong way."

Another comment reads, "Many like Go because it is an opinionated language. I'm not sure that a 'community' run language will create something like that because there are too many opinions. Many claims to represent the community, but not the community that doesn't share their opinion. Without clear leaders, I fear technical direction and taste will be about politics which seems more uncertain/risky. I like that there is a tight cohesive group in control over Go and that they are largely the original designers. I might be more interested in alternative government structures and Google having too much control only if those original authors all stepped down."

Rather than splitting projects into community-run versus corporate-run, a more accurate measure might be how much market value depends on them. If a project is thriving, the enterprise behind it will usually make good decisions in handling it. However, another entirely valid and important question to ask is: should open source projects be driven by their market value?

Another common argument is that it is the core team's full-time job to take care of the language, rather than making hasty decisions in response to community backlash. Google (or Microsoft, or Apple, or Facebook for that matter) will not make or block a change in a way that kills an entire project. But this does not mean they should sit idly, ignoring the community response. Ideally, the more a project genuinely belongs to its community, the more it will reflect what the community wants and needs.

Google also has a propensity to kill its own products. What happens when Google is no longer interested in Golang? The company could suddenly leave it to the community to figure out a governance model by pulling the original authors off onto some exciting new project, or it may let the authors work on Golang only in their spare time, at home or on weekends. While Google's history shows that many of its dead products were actually an important step towards something better and more successful, why and how much of that logic is directly relevant to an open source project is worth thinking about.

As a Hacker News user wrote, "Go is developed by Bell Labs people, the same people who bought us C, Unix and Plan 9 (Ken, Pike, RSC, et al). They took the time to think through all their decisions, the impacts of said decisions, along with keeping things as simple as possible. Basically, doing things right the first time and not bolting on features simply because the community wants them."

Another says, "The way how Golang team handles potentially tectonic changes in language is also exemplary – very well communicated ideas, means to provide feedback and clear explanation of how the process works."

Rest assured, if any major change is made to Go, even a drastic one such as killing it, it will not be done without consulting the community and taking their feedback.

Go User Survey 2018 results: Golang goes from strength to strength, as more engineers than ever are using it at work.
GitHub releases Vulcanizer, a new Golang Library for operating Elasticsearch
State of Go February 2019 – Golang developments report for this month released

PyCon 2019 highlights: Python Steering Council discusses the changes in the current Python governance structure

Bhagyashree R
07 May 2019
8 min read
At the ongoing PyCon 2019 event, the Python Steering Council shed some light on the recent changes in the Python governance structure and what these changes mean for the larger Python community. PyCon 2019 is the biggest gathering of developers and experts who work with the Python programming language. It runs from May 1 to May 9 in Cleveland, Ohio. Backed by the Python Software Foundation (PSF), the event hosts various tutorials, talks, and summits, as well as a job fair.

The Python Steering Council

After a two-week nomination period (January 7 to January 20), followed by a two-week voting period (January 21 to February 4), five members were elected to the Python Steering Council:

Guido van Rossum, the brilliant mind behind the Python programming language and the former Python BDFL (Benevolent Dictator for Life).
Barry Warsaw, a Senior Staff Software Engineer at LinkedIn and the lead maintainer for Jython.
Brett Cannon, a Principal Software Engineering Manager at Microsoft and a Python core developer for over 15 years.
Carol Willing, a Research Software Engineer for Project Jupyter, Python core developer, and PSF Fellow.
Nick Coghlan, a CPython core developer for the Python Software Foundation.

On why Guido van Rossum stepped down from being BDFL

Since the dawn of the Python era, Guido van Rossum served as its Benevolent Dictator for Life (BDFL), a designation given to open-source software development leaders who have the final say in any argument within the community. Guido stepped down from this role last year in July and became a part of the Steering Council. As BDFL he was responsible for going through all the Python ideas that might become controversial, and it ultimately fell to him to take the final decision on PEPs that had already been discussed among people with greater domain knowledge and expertise.

After playing such a key authoritative role for nearly 30 years, Guido started experiencing what is all too common in the tech industry: burnout. So he took the step of resigning as BDFL and urged the core Python developers to discuss and decide among themselves the kind of governance structure they wanted for the community going forward. After months of intense research and debate, the team arrived at the decision to distribute these responsibilities among five elected steering council members who have earned the trust of the Python community. He adds, "...that's pretty stressful and so I'm very glad that responsibility is now distributed over five experts who have more trust of the community because they've actually been voted in rather than just becoming the leader by happenstance."

Sharing his feelings about stepping down from the BDFL role, he said, "...when your kid goes off to college some of you may have experience with that I will soon have that experience. You're no longer directly involved in their lives maybe but you never stop worrying and that's how I feel about Python at the moment and that's why I nominated myself for the steering committee."

Changes in the PEP process with the new governance model

The purpose behind Python Enhancement Proposals (PEPs) was to take away from Guido the burden of going through each and every email to understand what a proposal was about. He just needed to read one document listing all the pros and cons related to the proposal and then make a decision. The entire decision-making process was documented within the PEPs. With the growing Python community, this process became unsustainable, as all decisions funneled through him. That is why the idea of the BDFL-Delegate came up: an expert who takes care of the decision-making for a particular feature.

However, employing a BDFL-Delegate used to be the last resort, done only for those aspects of the ecosystem that Guido didn't want to get involved in. With the new governance model, it has become the first resort. Barry Warsaw said, "...we don't want to make those decisions if there are people in the community who are better equipped to do that. That's what we want to do, we want to allow other people to become engaged with shaping where Python is going to go in the next 25 years."

Hiring a Project Manager to help transition from Python 2 to 3

The countdown for Python 2 has started, and it will not be maintained past 2019. The Steering Council plans to hire a Project Manager to help manage the sunset of Python 2. The PM will also be responsible for minor details; for instance, the documentation mentions Python 2 and 3 in many places. These instances will need to be updated, since from 2020 there will only be "Python", and eventually developers will not have to care about the major version numbers.

For systems that haven't migrated, there will be commercial vendors offering support beyond 2020. There will also be options for business-critical systems, but it will take time, shared Willing. One of the responsibilities of the PM role will be looking into the best practices other companies have followed for the migration and helping others migrate more easily. Willing said, "Back in a couple of years, Instagram did a great keynote about how they were moving things from 2 to 3. I think one of the things that we want a PM to help us in this transition is to really take those best practices that we're learning from large companies who have found the business case to transition to make it easier."

Status of CPython issue tracking migration from Roundup to GitHub

All of the PSF's projects, including CPython, have moved to GitHub, but issue tracking for CPython is still done through Roundup.
Mariatta Wijaya, a Platform Engineer at Zapier and a core Python developer, wrote PEP 581, which proposes using GitHub for its issue tracking. Barry Warsaw has taken the initial steps and split the PEP into PEP 581 and PEP 588. While PEP 581 gives the rationale and background, PEP 588 lays out a detailed plan for how the migration will take place. The council has asked the PSF to hire a PM to take on the responsibilities of the migration. Brett Cannon adds, “...with even the PSF about potentially trying to have a PM sort of role to help handle the migration because we realize that if we go forward with this the migration of those issues are going to be critical and we don't want any problems.”

The features or improvements Python Packaging Workgroup should now focus on

The Python Packaging Workgroup supports the efforts to improve and maintain the packaging ecosystem in Python by fundraising and distributing those funds among different projects. The efforts this workgroup supports include PyPI, pip, packaging.python.org, setuptools, and cross-project work. Currently, the workgroup is supporting the Warehouse project, a new implementation of PyPI that aims to solve the issues PyPI users face. Last year, the workgroup shipped the Warehouse code base, and in March this year it laid out work for the next set of improvements, which will be around security and accessibility.

When Coghlan was asked about the next steps, he shared that they are looking into improving the overall publisher experience. He adds that though there have been improvements in the consumer experience, far fewer efforts have gone into the publisher side. Publisher-side releases are becoming complicated: people want to upload source distributions along with multiple wheels for different platforms and different Python versions. Currently, the packaging process is not that flexible.
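The complication Coghlan describes, one release spanning many artifacts, stems from how wheel filenames encode their target interpreter and platform. Here is a minimal sketch of the PEP 427 naming convention (the filenames below are illustrative, and this simplified parser only handles normalized names without build tags):

```python
# A wheel filename encodes its target (PEP 427):
#   {name}-{version}-{python tag}-{abi tag}-{platform tag}.whl
def parse_wheel_filename(filename):
    """Split a normalized wheel filename into its PEP 427 components."""
    parts = filename[:-len(".whl")].split("-")
    name, version = parts[0], parts[1]
    py_tag, abi_tag, platform_tag = parts[-3:]
    return {"name": name, "version": version,
            "python": py_tag, "abi": abi_tag, "platform": platform_tag}

# One release of the same project, built for different interpreters/platforms:
release = [
    "numpy-1.26.0-cp311-cp311-manylinux_2_17_x86_64.whl",
    "numpy-1.26.0-cp311-cp311-win_amd64.whl",
    "numpy-1.26.0-cp312-cp312-macosx_11_0_arm64.whl",
]
for whl in release:
    info = parse_wheel_filename(whl)
    print(info["python"], info["platform"])
```

A staging area of the kind Coghlan describes would let a publisher accumulate all of these files for a version before anything goes live, rather than each upload publishing instantly.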
“at the moment the packing index is kind of this instant publish thing like you push it up and it's done...we'd really like to be able to offer a staging area where people can put up all the artifacts for release, make sure everything's in order, and then once they're happy with the release, push button and have it go publish.”

These were some of the highlights from the discussion about the changes in the Python governance structure. You can watch the full discussion on YouTube: https://www.youtube.com/watch?v=8dDp-UHBJ_A&feature=youtu.be&t=379

Creators of Python, Java, C#, and Perl discuss the evolution and future of programming language design at PuPPy
Mozilla introduces Pyodide, a Python data science stack compiled to WebAssembly
RStudio 1.2 releases with improved testing and support for Python chunks, R scripts, and much more!