
How-To Tutorials - Programming

1083 Articles
9 reasons why Rust programmers love Rust

Richa Tripathi
03 Oct 2018
8 min read
The 2018 RedMonk Programming Language Rankings marked the entry of a new programming language into their top 25 list. It has been an incredibly successful year for the Rust programming language in terms of popularity: it jumped from being the 46th most popular language on GitHub to 18th. The Stack Overflow developer survey of 2018 is another indicator of Rust's rise. Almost 78% of developers who work with Rust love working with it, and it topped the survey's list of most loved programming languages for the third year in a row. Not only that, it ranked 8th among the most wanted languages, meaning respondents who have not used it yet would like to learn it. Although Rust was designed as a low-level language best suited for systems, embedded, and other performance-critical code, it is gaining a lot of traction and presents a great opportunity for web developers and game developers. Rust also empowers novice developers with the tools to start shipping code fast. So, why is Rust so tempting? Let's explore the high points of this incredible language and the variety of features that make it interesting to learn.

Memory management without a garbage collector

Garbage collection and non-memory resources often create problems in systems languages. Rust sidesteps the failures a garbage collector can cause by not having one at all: resource cleanup is handled deterministically through RAII (Resource Acquisition Is Initialization).

Better support for Concurrency

Concurrency and parallelism are incredibly important topics in computer science and are also a hot topic in the industry today. Computers are gaining more and more cores, yet many programmers aren't prepared to fully utilize them. Handling concurrent programming safely and efficiently is another major goal of the Rust language.
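To make this concrete, here is a minimal sketch of running Rust code in parallel with standard-library threads. This is an illustration only, not from the article; the parallel_squares function name is invented:

```rust
use std::thread;

// Spawn one thread per input; each thread returns a value that the
// parent collects by joining. The `move` closure transfers ownership
// of the captured data into the thread.
fn parallel_squares(inputs: Vec<i64>) -> Vec<i64> {
    let handles: Vec<_> = inputs
        .into_iter()
        .map(|n| thread::spawn(move || n * n))
        .collect();
    handles.into_iter().map(|h| h.join().unwrap()).collect()
}

fn main() {
    let results = parallel_squares(vec![1, 2, 3, 4]);
    println!("{:?}", results); // [1, 4, 9, 16]
}
```

Because the closure takes ownership of its data, the compiler can verify at compile time that nothing is shared unsafely between threads.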
Concurrency is difficult to reason about. Rust's strong, static type system helps you reason about your code, and the language also gives you two traits, Send and Sync, to help you make sense of code that can possibly be concurrent. Rust's standard library provides threading support as well, which enables you to run Rust code in parallel, and you can use Rust's threads as a simple isolation mechanism.

Error Handling in Rust is beautiful

A programmer is bound to make errors, irrespective of the programming language they use. Making errors while programming is normal, but it's the error handling mechanism of the programming language that enhances the experience of writing code. In Rust, errors are divided into two types: unrecoverable errors and recoverable errors.

Unrecoverable errors: An error is classified as unrecoverable when there is no option other than to abort the program. The panic! macro is very helpful in these cases, especially when a bug has been detected in the code but the programmer is not clear how to handle it. panic! generates a failure message that helps the user debug the problem, and it stops execution before more catastrophic events can occur.

Recoverable errors: Errors which can be handled easily, or which do not have a serious impact on the execution of the program, are known as recoverable errors. They are represented by Result<T, E>, an enum with two variants, Ok(T) and Err(E), which describes the possible error in the program. In Ok(T), 'T' is the type of value returned in the success case; it is the expected outcome. In Err(E), 'E' is the type of error returned in the failure case; it is the unexpected outcome.

Resource Management

The one attribute that makes Rust stand out (and completely overpowers Google's Go, for that matter) is its approach to resource management.
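The error-handling split just described, panic! for unrecoverable bugs and Result<T, E> for expected failures, can be sketched in a few lines. This is a hedged illustration; the parse_port function is invented for the example:

```rust
// Recoverable: a parse that returns Result instead of aborting.
fn parse_port(s: &str) -> Result<u16, String> {
    s.trim()
        .parse::<u16>()
        .map_err(|e| format!("invalid port {:?}: {}", s, e))
}

fn main() {
    match parse_port("8080") {
        Ok(port) => println!("listening on {}", port), // the expected outcome
        Err(msg) => eprintln!("{}", msg),              // the unexpected outcome
    }

    // Unrecoverable: panic! aborts with a failure message for debugging.
    let config: Option<&str> = Some("dev");
    if config.is_none() {
        panic!("no configuration found; cannot continue");
    }
}
```

The compiler forces the caller of parse_port to deal with both variants, which is what makes recoverable errors pleasant to work with in practice.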
Rust follows C++'s lead, with concepts like borrowing and mutable borrowing, so resource management becomes an elegant process. Furthermore, Rust recognized from the start that resource management is not just about memory usage; getting this right the first time makes it a standout performer on this point. Although the Rust documentation does a good job of explaining the technical details, the article by M. Tim Jones explains the concepts in a much friendlier, easier-to-understand way, so it is worth listing his points here as well. The following points are drawn from his article.

Reusable code via modules

Rust allows you to organize code in a way that promotes reuse. You attain this reusability by using modules, which are nothing but code organized as packages that other programmers can use. Modules contain functions, structures, and even other modules, which you can either make public (accessible to users of the module) or keep private (usable only within the module, not by its users). Three keywords create modules, use modules, and modify the visibility of elements in modules:

The mod keyword creates a new module
The use keyword allows you to use the module (it exposes the definitions into scope so you can use them)
The pub keyword makes elements of the module public (otherwise, they're private)

Cleaner code with better safety checks

In Rust, the compiler enforces memory safety and other checks that make the programming language safe. You never have to worry about dangling pointers or about using an object after it has been freed. These guarantees are part of the core Rust language and allow you to write clean code. Rust also includes an unsafe keyword with which you can disable checks that would typically result in a compilation error.
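The three module keywords described above can be seen together in a small sketch. The module and function names here are invented for illustration:

```rust
// A module groups related items; `pub` exposes them, everything else
// stays private to the module.
mod greetings {
    pub fn hello(name: &str) -> String {
        format!("{}, {}!", salutation(), name)
    }

    // Private helper: visible only inside the `greetings` module.
    fn salutation() -> &'static str {
        "Hello"
    }
}

// `use` pulls the public definition into scope.
use greetings::hello;

fn main() {
    println!("{}", hello("Rustacean")); // Hello, Rustacean!
}
```

Calling greetings::salutation() from main would be a compile error, which is exactly the visibility control the pub keyword provides.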
Data types and Collections in Rust

Rust is a statically typed programming language, which means that every value in Rust must have a specified data type. The biggest advantage of static typing is that a large class of errors is identified earlier in the development process. Rust's data types can be broadly classified into two kinds: scalar and compound. Scalar data types represent a single value, such as an integer, floating-point number, or character, and are commonly present in other programming languages as well. Rust also provides compound data types, such as tuples and arrays, which let programmers group multiple values into one type. The Rust standard library additionally provides a number of data structures called collections. Collections contain multiple values, but unlike the standard compound data types (tuples and arrays) discussed above, their size need not be specified at compile time: they can grow and shrink as the program runs. Vectors, strings, and hash maps are the three most commonly used collections in Rust.

The friendly Rust community

Rust owes its success to the breadth and depth of engagement of its vibrant community, which supports a highly collaborative process for helping the language evolve in a truly open-source way. Rust is built from the bottom up, rather than any one individual or organization controlling the fate of the technology.

Reliable, robust release cycles

What is common between Java, Spring, and Angular? They never release their updates when they promise to. The release cycle of the Rust community, by contrast, works with clockwork precision and is very reliable. Here's an overview of the dates and versions: in mid-September 2018, the Rust team released the Rust 2018 RC1 version. Rust 2018 is the first major new edition of Rust (after Rust 1.0, released in 2015).
This new release marks the culmination of the last three years of Rust's development from the core team, and brings the language together in one neat package. It includes plenty of new features, such as raw identifiers, better path clarity, new optimizations, and other additions. You can learn more about the Rust language and its evolution at the Rust blog, and download it from the Rust language website.

Note: the headline was edited 09.06.2018 to make it clear that Rust was found to be the most loved language among developers using it.

Rust 2018 RC1 now released with raw identifiers, better path clarity, and other changes
Rust as a Game Programming Language: Is it any good?
Rust Language Server, RLS 1.0 releases with code intelligence, syntax highlighting and more
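Rounding off this article, the three common collections mentioned earlier (Vec, String, and HashMap) can be sketched together in one small, invented example:

```rust
use std::collections::HashMap;

// Build a name -> total-score map from a list of (name, score) pairs.
fn tally(entries: &[(&str, u32)]) -> HashMap<String, u32> {
    let mut totals: HashMap<String, u32> = HashMap::new();
    for (name, score) in entries {
        // String: an owned, growable UTF-8 buffer used as the map key.
        *totals.entry(name.to_string()).or_insert(0) += score;
    }
    totals
}

fn main() {
    // Vec: grows at runtime, so the number of entries need not be
    // known at compile time.
    let mut entries: Vec<(&str, u32)> = Vec::new();
    entries.push(("rust", 90));
    entries.push(("rust", 75));
    entries.push(("go", 80));

    let totals = tally(&entries);
    println!("{:?}", totals.get("rust")); // Some(165)
}
```

All three structures allocate on the heap and resize themselves as the program runs, which is what distinguishes them from fixed-size tuples and arrays.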

Linux programmers opposed to new Code of Conduct threaten to pull code from project

Melisha Dsouza
25 Sep 2018
6 min read
Facts of the controversy at hand

To "help make the kernel community a welcoming environment to participate in", Linux adopted a new Code of Conduct earlier this month. This created conflict in the developer community because of the clauses in the CoC, which is derived from the Contributor Covenant, created by Coraline Ada Ehmke, a software developer, open-source advocate, and LGBT activist. Just 30 minutes after signing this CoC, principal kernel contributor Linus Torvalds sent a mail apologizing for his past behavior and announced a temporary break to improve upon it:

"This week people in our community confronted me about my lifetime of not understanding emotions. My flippant attacks in emails have been both unprofessional and uncalled for. Especially at times when I made it personal. In my quest for a better patch, this made sense to me. I know now this was not OK and I am truly sorry"

Many speculate that this decision to take a break is a precautionary measure to prevent Torvalds from violating the newly created Code of Conduct. The controversy took on a life of its own after Ehmke's sarcastic tweet (source: Twitter).

The new Linux Code of Conduct is causing huge conflict

Linux's move from its Code of Conflict to a new Code of Conduct has not been received well by many of its developers. Some have threatened to pull blocks of code important to the project in revolt against the change. This could have serious consequences, because Linux is one of the most important pieces of open-source software in the world. If the threats are put into action, large parts of the internet would be left vulnerable to exploits, and applications that use Linux would be like an incomplete Jenga stack that could collapse at any minute.
Why some Linux developers are opposed to the new Code of Conduct

Here is a summary of the views that, according to these developers, justify their decision:

Agreeing to the CoC could mean good contributors are removed over trivial matters, or even over events that happened a long time ago, as happened to Larry Garfield, a prominent Drupal contributor who was asked to step down after his sex fetish was made public.
The lack of proper definitions for punishments, time frames, and what constitutes abuse, harassment, or inappropriate behavior leaves the Code of Conduct wide open to exploitation.
It gives the people charged with enforcement immense power.
It could force acceptance of contributions that wouldn't otherwise make the cut. Developers are concerned that Linux will start accepting patches from anyone and everyone just to keep up with the Code of Conduct, so acceptance would no longer depend on a person's ability to code, but on social parameters like race, color, sex, and gender.

Why some developers believe in the new Linux Code of Conduct

On the other side of the argument, here are some of the reasons supporters believe the CoC will foster social justice:

It encourages an inclusive and safe space for women, LGBTQIA+ people, and people of color, who in the absence of the CoC are excluded, harassed, and sometimes even raped by cis white males.
The CoC aims to overcome meritocracy, which in many organizations has consistently shown itself to mainly benefit those with privilege, to the exclusion of underrepresented people in technology. A vast majority of Linux contributors are cis white males; the Contributor Covenant-based Code of Conduct would enable the building of a more demographically diverse array of contributors.

What does this mean for the developer community?

Linux includes programmers who are always free to contribute to its open-source platform. Contributing good code helps them climb the ladder and become a 'maintainer'.
The greatest strength of Linux was its flexibility: developers would contribute to the kernel and be concerned with only a single entity, their code patch, and the Linux community would judge the code on its quality. With the new Code of Conduct, critics say, passing judgement on code could become more challenging. For them, the Code of Conduct is a set of rules that expects everyone to be at equal levels in the community, and it could mean that certain patches are accepted for fear of contravening the Code of Conduct. Coraline Ada Ehmke was forthright in her criticism of this view (source: Twitter). Clearly, many of the fears of the Code of Conduct's critics haven't yet come to pass; what they're ultimately worried about is that there could be negative consequences.

Google developer Ted Ts'o next on the hit list

Earlier this week, activist Sage Sharp tweeted about Ted Ts'o (source: Twitter). This perhaps needs some context: the beginning of this argument dates all the way back to 2011, when Ts'o, then a member of the Linux Foundation's technical advisory board, participated in a discussion on the mailing list for that year's Australian national Linux conference, making comments that were later interpreted by Aurora as rape apologism. Using Aurora's piece as a fuse, Google employee Matthew Garrett slammed Ts'o over his beliefs. In 2017, yielding to the demands of SJWs, Google threw out James Damore, an engineer who circulated an internal memo about reverse discrimination in hiring practices. Critics fear the SJWs are now coming for Ts'o, and that the best way forward for him would be to "take a break", just like Linus did. As claimed by Ehmke, the underlying aim of the CoC is to guide people to behave in a respectable way and create a positive environment for people irrespective of their race, ethnicity, religion, nationality, and political views.
However, overlooking this aim, developers remain concerned with the loopholes in the CoC. Gain more insights on this news, as well as views from members of the Linux community, at itsfoss.

Linux drops Code of Conflict and adopts new Code of Conduct
Linus Torvalds is sorry for his 'hurtful behavior', is taking 'a break (from the Linux community) to get help'
NSA researchers present security improvements for Zephyr and Fuchsia at Linux Security Summit 2018

Why Golang is the fastest growing language on GitHub

Sugandha Lahoti
09 Aug 2018
4 min read
Google's Go language, alternatively known as Golang, is currently one of the fastest growing programming languages in the software industry. Its speed, simplicity, and reliability make it a great choice for all kinds of developers, and its popularity has now gained further momentum: according to a report, Go was the fastest growing language on GitHub in Q2 2018, growing almost 7% overall, with a 1.5% change from the previous quarter. (Source: Madnight.github.io)

What makes Golang so popular?

One Redditor was quoted as saying, "What I would have done in Python, Ruby, C, C# or C++, I'm now doing in Go." Such is the impact of Go. Let's see what makes Golang so popular.

Go is cross-platform, so you can target an operating system of your choice when compiling a piece of code.
Go offers a native concurrency model that is unlike most mainstream programming languages: it relies on a model called CSP (Communicating Sequential Processes). Instead of locking variables to share memory, Golang allows you to communicate the value stored in a variable from one thread to another.
Go has a fairly mature standard library of its own. Once you install Go, you can build production-level software covering a wide range of use cases, from RESTful web APIs to encryption software, before needing to consider any third-party packages.
Go code typically compiles to a single native binary, which makes deploying an application written in Go as easy as copying the application file to the destination server.
Go is also rapidly being adopted as the go-to cloud-native language, by leading projects like Docker and Ethereum. Its concurrency features and easy deployment make it a popular choice for cloud development.

Can Golang replace Python?

Reddit is abuzz with people sharing their thoughts about whether Golang could replace Python. One user commented that "Writing a utility script is quicker in Go than in Python or JS.
Not quicker as in performance, but in terms of raw development speed." Another Reddit user pointed out three reasons not to use Python, in a discussion titled "Why are people ditching python for go?":

Dynamic compilation of Python can result in errors that exist in the code but are not detected.
CPython really is very slow; specifically, procedures that are invoked multiple times are not optimized to run more quickly on future runs (unlike in PyPy); they always run at the same slow speed.
Python has a terrible distribution story; it's really hard to ship all your Python dependencies onto a new system.

Go addresses these points pretty sharply: it has a good distribution story with static binaries, it has a repeatable build process, and it's pretty fast. In the same discussion, however, another user nicely sums it up: "There is nothing wrong with python except maybe that it is not statically typed and can be a bit slow, which also depends on the use case. Go is the new kid on the block, and while Go is nice, it doesn't have nearly as many libraries as python does. When it comes to stable, mature third-party packages, it can't beat python at the moment." If you're still thinking about whether or not to begin coding in Go, here's a quirky rendition of the popular song "Let It Go" from Disney's Frozen to inspire you: Write in Go! Write in Go!

Go Cloud is Google's bid to establish Golang as the go-to language of cloud
Writing test functions in Golang [Tutorial]
How Concurrency and Parallelism works in Golang [Tutorial]

Using Transactions with Asynchronous Tasks in JavaEE [Tutorial]

Aaron Lazar
31 Jul 2018
5 min read
Threading is a common issue in most software projects, no matter which language or other technology is involved. When talking about enterprise applications, things become even more important, and sometimes harder. Using asynchronous tasks can be a challenge: what if you need to add some spice and attach a transaction to one? Thankfully, the Java EE environment has some great features for dealing with this challenge, and this article will show you how. This article is an extract from the book Java EE 8 Cookbook, authored by Elder Moraes. Usually, a transaction implies something like code blocking. Isn't it awkward to combine two opposing concepts? Well, it's not! They can work together nicely, as shown here.

Adding the Java EE 8 dependency

Let's first add our Java EE 8 dependency:

<dependency>
    <groupId>javax</groupId>
    <artifactId>javaee-api</artifactId>
    <version>8.0</version>
    <scope>provided</scope>
</dependency>

Next, let's create a User POJO:

public class User {

    private Long id;
    private String name;

    public User(Long id, String name) {
        this.id = id;
        this.name = name;
    }

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    @Override
    public String toString() {
        return "User{" + "id=" + id + ", name=" + name + '}';
    }
}

And here is a slow bean that will return a User:

@Stateless
public class UserBean {

    public User getUser() {
        try {
            TimeUnit.SECONDS.sleep(5);
            long id = new Date().getTime();
            return new User(id, "User " + id);
        } catch (InterruptedException ex) {
            System.err.println(ex.getMessage());
            long id = new Date().getTime();
            return new User(id, "Error " + id);
        }
    }
}

Now we create a task to be executed that will return a User, using some transaction machinery:

public class AsyncTask implements Callable<User> {

    private UserTransaction userTransaction;
    private UserBean userBean;

    @Override
    public User call() throws Exception {
        performLookups();
        try {
            userTransaction.begin();
            User user = userBean.getUser();
            userTransaction.commit();
            return user;
        } catch (IllegalStateException | SecurityException
                | HeuristicMixedException | HeuristicRollbackException
                | NotSupportedException | RollbackException
                | SystemException e) {
            userTransaction.rollback();
            return null;
        }
    }

    private void performLookups() throws NamingException {
        userBean = CDI.current().select(UserBean.class).get();
        userTransaction = CDI.current().select(UserTransaction.class).get();
    }
}

And finally, here is the service endpoint that will use the task to write the result to a response:

@Path("asyncService")
@RequestScoped
public class AsyncService {

    private AsyncTask asyncTask;

    @Resource(name = "LocalManagedExecutorService")
    private ManagedExecutorService executor;

    @PostConstruct
    public void init() {
        asyncTask = new AsyncTask();
    }

    @GET
    public void asyncService(@Suspended AsyncResponse response) {
        Future<User> result = executor.submit(asyncTask);
        while (!result.isDone()) {
            try {
                TimeUnit.SECONDS.sleep(1);
            } catch (InterruptedException ex) {
                System.err.println(ex.getMessage());
            }
        }
        try {
            response.resume(Response.ok(result.get()).build());
        } catch (InterruptedException | ExecutionException ex) {
            System.err.println(ex.getMessage());
            response.resume(Response.status(Response.Status.INTERNAL_SERVER_ERROR)
                    .entity(ex.getMessage()).build());
        }
    }
}

To try this code, just deploy it to GlassFish 5 and open this URL: http://localhost:8080/ch09-async-transaction/asyncService

How the asynchronous execution works

The magic happens in the AsyncTask class, where we will first take a look at the performLookups method (shown here in a JNDI-based variant):

private void performLookups() throws NamingException {
    Context ctx = new InitialContext();
    userTransaction = (UserTransaction) ctx.lookup("java:comp/UserTransaction");
    userBean = (UserBean) ctx.lookup("java:global/ch09-async-transaction/UserBean");
}

Either way, it gives you instances of both UserTransaction and UserBean from the application server.
Then you can relax and rely on things already instantiated for you. As our task implements the Callable<User> interface, it needs to implement the call() method:

@Override
public User call() throws Exception {
    performLookups();
    try {
        userTransaction.begin();
        User user = userBean.getUser();
        userTransaction.commit();
        return user;
    } catch (IllegalStateException | SecurityException
            | HeuristicMixedException | HeuristicRollbackException
            | NotSupportedException | RollbackException
            | SystemException e) {
        userTransaction.rollback();
        return null;
    }
}

You can think of Callable as a Runnable interface that returns a result. Our transaction code lives here:

userTransaction.begin();
User user = userBean.getUser();
userTransaction.commit();

And if anything goes wrong, we have the following:

} catch (IllegalStateException | SecurityException
        | HeuristicMixedException | HeuristicRollbackException
        | NotSupportedException | RollbackException
        | SystemException e) {
    userTransaction.rollback();
    return null;
}

Now let's look at AsyncService. First, we have some declarations:

private AsyncTask asyncTask;

@Resource(name = "LocalManagedExecutorService")
private ManagedExecutorService executor;

@PostConstruct
public void init() {
    asyncTask = new AsyncTask();
}

We ask the container to give us an instance of ManagedExecutorService, which is responsible for executing the task in the enterprise context. Then, once the bean is constructed (@PostConstruct), the init() method is called, which instantiates the task. Now we have our task execution:

@GET
public void asyncService(@Suspended AsyncResponse response) {
    Future<User> result = executor.submit(asyncTask);
    while (!result.isDone()) {
        try {
            TimeUnit.SECONDS.sleep(1);
        } catch (InterruptedException ex) {
            System.err.println(ex.getMessage());
        }
    }
    try {
        response.resume(Response.ok(result.get()).build());
    } catch (InterruptedException | ExecutionException ex) {
        System.err.println(ex.getMessage());
        response.resume(Response.status(Response.Status.INTERNAL_SERVER_ERROR)
                .entity(ex.getMessage()).build());
    }
}

Note that the executor returns a Future<User>:

Future<User> result = executor.submit(asyncTask);

This means the task will be executed asynchronously. We then check its execution status until it's done:

while (!result.isDone()) {
    try {
        TimeUnit.SECONDS.sleep(1);
    } catch (InterruptedException ex) {
        System.err.println(ex.getMessage());
    }
}

And once it's done, we write it to the asynchronous response:

response.resume(Response.ok(result.get()).build());

The full source code of this recipe is on GitHub. So now, using transactions with asynchronous tasks in Java EE isn't such a daunting task, is it? If you found this tutorial helpful and would like to learn more, head over to the book Java EE 8 Cookbook.

Oracle announces a new pricing structure for Java
Design a RESTful web API with Java [Tutorial]
How to convert Java code into Kotlin

Firefox Nightly browser: Debugging your app is now fun with Mozilla’s new ‘time travel’ feature

Natasha Mathur
30 Jul 2018
3 min read
Earlier this month, Mozilla announced a fancy new feature called "time travel debugging" for its Firefox Nightly web browser at JSConf EU 2018. With time travel debugging, you can easily track the bugs in your code or app, as it lets you pause and rewind to the exact moment when your app broke down. The technology is particularly useful for local web development, where it allows you to pause and step forward or backward, rewind to a previous state, rewind to the time a console message was logged, and rewind to the time when an element had a certain style. It is also great for cases where you might want to save user recordings, or view a test recording when a test fails. With time travel debugging, you can record a tab in your browser and later replay it using WebReplay, an experimental project which allows you to record, rewind, and replay processes for the web. According to Jason Laster, a senior software engineer at Mozilla, "with time travel, we have a full recording of time, you can jump to any point in the path and see it immediately, you don't have to refresh or re-click or pause or look at logs". There's also a video of Laster talking about the potential of time travel debugging at JSConf. He mentioned that time travel is "not a new thing", and that he was inspired by Dan Abramov, the creator of Redux, who showcased "time travel" over dispatched actions when presenting Redux at JSConf EU. With Redux, you get a slider that shows you all the actions over time, and as you move it, you see the UI update as well. In fact, Mozilla rebuilt its debugger using React and Redux for the time travel feature. The debugger comes equipped with the Redux dev tools, which show a list of all the actions for the debugger, along with the state of the app, the sources, and the pause data. Finally, Laster added that "this is just the beginning" and that "they hope to pull this off well in the future".
To use the new time travel debugging feature, you must first install the Firefox Nightly browser. For more details on the new feature, check out the official documentation.

Mozilla is building a bridge between Rust and JavaScript
Firefox has made a password manager for your iPhone
Firefox 61 builds on Firefox Quantum, adds Tab Warming, WebExtensions, and TLS 1.3

Setting Gradle properties to build a project [Tutorial]

Savia Lobo
30 Jul 2018
10 min read
A Gradle script is a program. We use a Groovy DSL to express our build logic. Gradle has several useful built-in methods for handling files and directories, as we often deal with files and directories in our build logic. In today's post, we will take a look at how to set Gradle properties in a project build. We will also see how to use the Gradle Wrapper task to distribute a configurable Gradle with our build scripts. This article is an excerpt taken from 'Gradle Effective Implementations Guide - Second Edition', written by Hubert Klein Ikkink.

Setting Gradle project properties

In a Gradle build file, we can access several properties that are defined by Gradle, but we can also create our own properties. We can set the value of our custom properties directly in the build script, and we can also do so by passing values via the command line. The default properties that we can access in a Gradle build are displayed in the following table:

Name         Type        Default value
project      Project     The project instance.
name         String      The name of the project directory. The name is read-only.
path         String      The absolute path of the project.
description  String      The description of the project.
projectDir   File        The directory containing the build script. The value is read-only.
buildDir     File        The directory named build inside the directory containing the build script.
rootDir      File        The directory of the project at the root of a project structure.
group        Object      Not specified.
version      Object      Not specified.
ant          AntBuilder  An AntBuilder instance.
The following build file has a task that shows the values of the properties:

version = '1.0'
group = 'Sample'
description = 'Sample build file to show project properties'

task defaultProperties << {
    println "Project: $project"
    println "Name: $name"
    println "Path: $path"
    println "Project directory: $projectDir"
    println "Build directory: $buildDir"
    println "Version: $version"
    println "Group: $project.group"
    println "Description: $project.description"
    println "AntBuilder: $ant"
}

When we run the build, we get the following output:

$ gradle defaultProperties
:defaultProperties
Project: root project 'props'
Name: defaultProperties
Path: :defaultProperties
Project directory: /Users/mrhaki/gradle-book/Code_Files/props
Build directory: /Users/mrhaki/gradle-book/Code_Files/props/build
Version: 1.0
Group: Sample
Description: Sample build file to show project properties
AntBuilder: org.gradle.api.internal.project.DefaultAntBuilder@3c95cbbd

BUILD SUCCESSFUL

Total time: 1.458 secs

Defining custom properties in the script

To add our own properties, we have to define them in an ext{} script block in a build file. Prefixing a property name with ext. is another way to set its value. To read the value of the property, we don't have to use the ext. prefix; we can simply refer to the name of the property. The property is automatically added to the internal project properties as well. In the following script, we add a customProperty property with the String value 'custom'. In the showProperties task, we show the value of the property:

// Define new property.
ext.customProperty = 'custom'

// Or we can use the ext{} script block.
ext { anotherCustomProperty = 'custom' } task showProperties { ext { customProperty = 'override' } doLast { // We can refer to the property // in different ways: println customProperty println project.ext.customProperty println project.customProperty } } After running the script, we get the following output: $ gradle showProperties :showProperties override custom custom BUILD SUCCESSFUL Total time: 1.469 secs Defining properties using an external file We can also set the properties for our project in an external file. The file needs to be named gradle.properties, and it should be a plain text file with the name of the property and its value on separate lines. We can place the file in the project directory or Gradle user home directory. The default Gradle user home directory is $USER_HOME/.gradle. A property defined in the properties file, in the Gradle user home directory, overrides the property values defined in a properties file in the project directory. We will now create a gradle.properties file in our project directory, with the following contents. We use our build file to show the property values: task showProperties { doLast { println "Version: $version" println "Custom property: $customProperty" } } If we run the build file, we don't have to pass any command-line options, Gradle will use gradle.properties to get values of the properties: $ gradle showProperties :showProperties Version: 4.0 Custom property: Property value from gradle.properties BUILD SUCCESSFUL Total time: 1.676 secs Passing properties via the command line Instead of defining the property directly in the build script or external file, we can use the -P command-line option to add an extra property to a build. We can also use the -P command-line option to set a value for an existing property. If we define a property using the -P command-line option, we can override a property with the same name defined in the external gradle.properties file. 
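The override behavior just described can be seen against the gradle.properties file from the earlier section. That file is not reproduced verbatim in this excerpt; judging from the output shown there, it would contain something like:

```properties
version=4.0
customProperty=Property value from gradle.properties
```

With this file in place, running gradle -Pversion=1.1 showProperties would print Version: 1.1, because a value passed with -P takes precedence over the value in gradle.properties.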
The following build script has a showProperties task that shows the value of an existing property and a new property:

```groovy
task showProperties {
    doLast {
        println "Version: $version"
        println "Custom property: $customProperty"
    }
}
```

Let's run our script and pass the values for the existing version property and the non-existent customProperty:

```
$ gradle -Pversion=1.1 -PcustomProperty=custom showProperties
:showProperties
Version: 1.1
Custom property: custom

BUILD SUCCESSFUL

Total time: 1.412 secs
```

Defining properties via system properties

We can also use Java system properties to define properties for our Gradle build. We use the -D command-line option, just like in a normal Java application. The name of the system property must start with org.gradle.project, followed by the name of the property we want to set, and then by the value. We can use the same build script that we created before:

```groovy
task showProperties {
    doLast {
        println "Version: $version"
        println "Custom property: $customProperty"
    }
}
```

However, this time we use different command-line options to get a result:

```
$ gradle -Dorg.gradle.project.version=2.0 -Dorg.gradle.project.customProperty=custom showProperties
:showProperties
Version: 2.0
Custom property: custom

BUILD SUCCESSFUL

Total time: 1.218 secs
```

Adding properties via environment variables

Using the command-line options provides much flexibility; however, sometimes we cannot use the command-line options because of environment restrictions, or because we don't want to retype the complete command-line options each time we invoke the Gradle build. Gradle can also use environment variables set in the operating system to pass properties to a Gradle build. The environment variable name starts with ORG_GRADLE_PROJECT_ and is followed by the property name.
We use our build file to show the properties:

```groovy
task showProperties {
    doLast {
        println "Version: $version"
        println "Custom property: $customProperty"
    }
}
```

Firstly, we set the ORG_GRADLE_PROJECT_version and ORG_GRADLE_PROJECT_customProperty environment variables, then we run our showProperties task, as follows:

```
$ ORG_GRADLE_PROJECT_version=3.1 ORG_GRADLE_PROJECT_customProperty="Set by environment variable" gradle showProp
:showProperties
Version: 3.1
Custom property: Set by environment variable

BUILD SUCCESSFUL

Total time: 1.373 secs
```

Using the Gradle Wrapper

Normally, if we want to run a Gradle build, we must have Gradle installed on our computer. Also, if we distribute our project to others and they want to build the project, they must have Gradle installed on their computers. The Gradle Wrapper can be used to allow others to build our project, even if they don't have Gradle installed on their computers. The wrapper is a batch script on Microsoft Windows operating systems, or a shell script on other operating systems, that will download Gradle and run the build using the downloaded Gradle. By using the wrapper, we can make sure that the correct Gradle version for the project is used. We can define the Gradle version, and if we run the build via the wrapper script file, the version of Gradle that we defined is used.

Creating wrapper scripts

To create the Gradle Wrapper batch and shell scripts, we can invoke the built-in wrapper task. This task is already available if we have installed Gradle on our computer. Let's invoke the wrapper task from the command line:

```
$ gradle wrapper
:wrapper

BUILD SUCCESSFUL

Total time: 0.61 secs
```

After the execution of the task, we have two script files, gradlew.bat and gradlew, in the root of our project directory. These scripts contain all the logic needed to run Gradle. If Gradle is not downloaded yet, the Gradle distribution will be downloaded and installed locally.
In the gradle/wrapper directory, relative to our project directory, we find the gradle-wrapper.jar and gradle-wrapper.properties files. The gradle-wrapper.jar file contains a couple of class files necessary to download and invoke Gradle. The gradle-wrapper.properties file contains settings, such as the URL, to download Gradle. It also contains the Gradle version number. If a new Gradle version is released, we only have to change the version in the gradle-wrapper.properties file, and the Gradle Wrapper will download the new version so that we can use it to build our project.

All the generated files are now part of our project. If we use a version control system, then we must add these files to the version control. Other people who check out our project can use the gradlew scripts to execute tasks from the project. The specified Gradle version is downloaded and used to run the build file.

If we want to use another Gradle version, we can invoke the wrapper task with the --gradle-version option. We must specify the Gradle version that the wrapper files are generated for. By default, the Gradle version that is used to invoke the wrapper task is the Gradle version used by the wrapper files. To specify a different download location for the Gradle installation file, we must use the --gradle-distribution-url option of the wrapper task. For example, we could have a customized Gradle installation on our local intranet, and with this option, we can generate the wrapper files that will use the Gradle distribution on our intranet.

In the following example, we generate the wrapper files for Gradle 2.12 explicitly:

```
$ gradle wrapper --gradle-version=2.12
:wrapper

BUILD SUCCESSFUL

Total time: 0.61 secs
```

Customizing the Gradle Wrapper

If we want to customize properties of the built-in wrapper task, we must add a new task to our Gradle build file with the org.gradle.api.tasks.wrapper.Wrapper type.
We will not change the default wrapper task, but create a new task with the new settings that we want to apply. We need to use our new task to generate the Gradle Wrapper shell scripts and support files. We can change the names of the script files that are generated with the scriptFile property of the Wrapper task. To change the name and location of the generated JAR and properties files, we can change the jarFile property:

```groovy
task createWrapper(type: Wrapper) {
    // Set Gradle version for wrapper files.
    gradleVersion = '2.12'

    // Rename shell scripts name to
    // startGradle instead of default gradlew.
    scriptFile = 'startGradle'

    // Change location and name of JAR file
    // with wrapper bootstrap code and
    // accompanying properties files.
    jarFile = "${projectDir}/gradle-bin/gradle-bootstrap.jar"
}
```

If we run the createWrapper task, we get a Windows batch file and a shell script, and the wrapper bootstrap JAR file with the properties file is stored in the gradle-bin directory:

```
$ gradle createWrapper
:createWrapper

BUILD SUCCESSFUL

Total time: 0.605 secs

$ tree .
.
├── gradle-bin
│   ├── gradle-bootstrap.jar
│   └── gradle-bootstrap.properties
├── startGradle
├── startGradle.bat
└── build.gradle

2 directories, 5 files
```

To change the URL from where the Gradle version must be downloaded, we can alter the distributionUrl property. For example, we could publish a fixed Gradle version on our company intranet and use the distributionUrl property to reference a download URL on our intranet. This way, we can make sure that all developers in the company use the same Gradle version:

```groovy
task createWrapper(type: Wrapper) {
    // Set URL with custom Gradle distribution.
    distributionUrl = 'http://intranet/gradle/dist/gradle-custom-2.12.zip'
}
```

We discussed Gradle properties and how to use the Gradle Wrapper to allow users to build our projects even if they don't have Gradle installed. We also discussed how to customize the wrapper to download a specific version of Gradle and use it to run our build.
If you've enjoyed reading this post, do check out our book 'Gradle Effective Implementations Guide - Second Edition' to know more about how to use Gradle for Java projects.

Top 7 Python programming books you need to read
4 operator overloading techniques in Kotlin you need to know
5 Things you need to know about Java 10
Adding PostGIS layers using QGIS [Tutorial]
Pravin Dhandre
26 Jul 2018
5 min read
Viewing tables as layers is great for creating maps or for simply working on a copy of the database outside the database. In this tutorial, we will establish a connection to our PostGIS database in order to add a table as a layer in QGIS (formerly known as Quantum GIS).

Please navigate to the QGIS website to install the latest LTR version of QGIS. On the download page, click on Download Now and you will be able to choose a suitable operating system and the relevant settings. QGIS is available for Android, Linux, macOS X, and Windows. You might also be inclined to click on Discover QGIS to get an overview of basic information about the program, along with features, screenshots, and case studies.

This QGIS tutorial is an excerpt from a book written by Mayra Zurbaran, Pedro Wightman, Paolo Corti, Stephen Mather, Thomas Kraft and Bborie Park, titled PostGIS Cookbook - Second Edition.

Loading Database...

To begin, create the schema for this tutorial; then, download data from the U.S. Census Bureau's FTP site. The shapefile is All Lines for Cuyahoga County in Ohio, which consists of roads and streams, among other line features.

Extract the ZIP file to your working directory and then load it into your database using shp2pgsql. Be sure to specify the spatial reference system, EPSG/SRID: 4269. When in doubt about using projections, use the service provided by the folks at OpenGeo at the following website: http://prj2epsg.org/search

Use the following command to generate the SQL to load the shapefile:

```
shp2pgsql -s 4269 -W LATIN1 -g the_geom -I tl_2012_39035_edges.shp chp11.tl_2012_39035_edges > tl_2012_39035_edges.sql
```

How to do it...

Now it's time to give the data we downloaded a look using QGIS. We must first create a connection to the database in order to access the table. Get connected and add the table as a layer by following the ensuing steps:

Click on the Add PostGIS Layers icon.

Click on the New button below the Connections drop-down menu, and create a new PostGIS connection.
After the Add PostGIS Table(s) window opens, create a name for the connection and fill in a few parameters for your database, including Host, Port, Database, Username, and Password.

Once you have entered all of the pertinent information for your database, click on the Test Connection button to verify that the connection is successful. If the connection is not successful, double-check for typos and errors. Additionally, make sure you are attempting to connect to a PostGIS-enabled database.

If the connection is successful, go ahead and check the Save Username and Save Password checkboxes. This will prevent you from having to enter your login information multiple times throughout the exercise.

Click on OK at the bottom of the menu to apply the connection settings. Now you can connect! Make sure the name of your PostGIS connection appears in the drop-down menu and then click on the Connect button. If you choose not to store your username and password, you will be asked to submit this information every time you try to access the database.

Once connected, all schemas within the database will be shown, and the tables will be made visible by expanding the target schema.

Select the table(s) to be added as a layer by simply clicking on the table name or anywhere along its row. Selection(s) will be highlighted in blue. To deselect a table, click on it a second time and it will no longer be highlighted. Select the tl_2012_39035_edges table that was downloaded at the beginning of the tutorial and click on the Add button, as shown in the following screenshot.

A subset of the table can also be added as a layer. This is accomplished by double-clicking on the desired table name. The Query Builder window will open, which aids in creating simple SQL WHERE clause statements. Add the roads by selecting the records where roadflg = Y. This can be done by typing a query or using the buttons within Query Builder.

Click on the OK button, followed by the Add button.
A subset of the table is now loaded into QGIS as a layer. The layer is strictly a static, temporary copy of your database. You can make whatever changes you like to the layer without affecting the database table. The same holds true the other way around: changes to the table in the database will have no effect on the layer in QGIS.

If needed, you can save the temporary layer in a variety of formats, such as DXF, GeoJSON, KML, or SHP. Simply right-click on the layer name in the Layers panel and click on Save As. This will then create a file, which you can recall at a later time or share with others. The following screenshot shows the Cuyahoga County road network.

You may also use the QGIS Browser Panel to navigate through the now-connected PostGIS database and list the schemas and tables. This panel allows you to double-click to add spatial layers to the current project, providing a better user experience not only of connected databases, but of any directory of your machine.

How it works...

You have added a PostGIS layer into QGIS using the built-in Add PostGIS Table GUI. This was achieved by creating a new connection and entering your database parameters. Any number of database connections can be set up simultaneously. If working with multiple databases is common in your workflows, saving all of the connections into one XML file would save time and energy when returning to these projects in QGIS.

To explore more of the 3D capabilities of PostGIS, including LiDAR point clouds, read PostGIS Cookbook - Second Edition.

Using R to implement Kriging – A Spatial Interpolation technique for Geostatistics data
Top 7 libraries for geospatial analysis
Learning R for Geospatial Analysis
Quantum Computing is poised to take a quantum leap with industries and governments on its side
Natasha Mathur
26 Jul 2018
9 min read
“We’re really just, I would say, six months out from having quantum computers that will outperform classical computers in some capacity."
-- Joseph Emerson, Quantum Benchmark CEO

There are very few people who would say that their lives haven't been changed by computers. But there are certain tasks that even a computer isn't capable of solving efficiently, and that's where quantum computers come into the picture. They are incredibly powerful machines that process information at a subatomic level. Quantum Computing leverages the logic and principles of Quantum Mechanics. Quantum computers operate one million times faster than any other device that you're currently using. They are a whole different breed of machines that promise to revolutionize computing.

How do Quantum computers differ from traditional computers?

To start with, let's look at what these computers look like. Unlike your mobile or desktop computing devices, quantum computers will never be able to fit in your pocket, and you cannot station these at desks. At least, not for the next few years. Also, these are fragile computers that look like vacuum cells or tubes with a bunch of lasers shining into them, and the whole apparatus must be kept at temperatures near absolute zero at all times.

IBM Research

Secondly, we look at the underlying mechanics of how they each compute. Quantum Computing is different from classic digital computing in the sense that classical computing requires data to be encoded into binary digits (bits), each of which is always in one of two definite states (0 or 1). In Quantum Computing, by contrast, data units called quantum bits, or qubits, can be present in more than one state at a time, called "superpositions" of states. Two particles can also exhibit "entanglement," where changing the state of one may instantaneously affect the other.

Thirdly, let's look at what makes up these machines, that is, their hardware and software components.
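Before getting to the hardware, the superposition idea above can be made concrete with a tiny, purely illustrative sketch (mine, not from the article): a qubit modeled as a state vector of amplitudes. Real qubits use complex amplitudes; this toy version keeps them real for brevity.

```rust
// A single qubit as two real amplitudes over the basis states |0> and |1>.
// The squared amplitudes give the probabilities of measuring 0 or 1.
fn hadamard(state: (f64, f64)) -> (f64, f64) {
    // The Hadamard gate turns a definite state into an equal superposition.
    let s = 1.0 / 2.0_f64.sqrt();
    (s * (state.0 + state.1), s * (state.0 - state.1))
}

fn probabilities(state: (f64, f64)) -> (f64, f64) {
    (state.0 * state.0, state.1 * state.1)
}

fn main() {
    let qubit = (1.0, 0.0); // starts definitely in |0>, like a classical bit
    let superposed = hadamard(qubit); // now "both at once"
    let (p0, p1) = probabilities(superposed);
    println!("P(0) = {:.2}, P(1) = {:.2}", p0, p1); // 0.50 each
}
```

A classical bit would have to be either 0 or 1 at this point; the qubit holds both possible outcomes until it is measured.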
A regular computer's hardware includes components such as a CPU, a monitor, and routers for communicating across the computer network. The software includes system programs and different protocols that run on top of the seven layers, namely the physical layer, data-link layer, network layer, transport layer, session layer, presentation layer, and application layer.

A quantum computer also consists of hardware and software components. There are ion traps, semiconductors, vacuum cells, Josephson junctions, and wire loops on the hardware side. The software and protocols that lie within a quantum computer are divided into five different layers, namely the physical layer, virtual layer, quantum error correction (QEC) layer, logical layer, and application layer.

Layered Quantum Computer architecture

Quantum Computing: A rapidly growing field

Quantum Computing is special and is advancing on a large scale. According to a new market research report by MarketsandMarkets, the Quantum Computing market will be worth 495.3 million USD by 2023. Also, as per ABI Research, total revenue generated from quantum computing services will exceed $15 billion by 2028. Matthew Brisse, research VP at Gartner, states, "Quantum computing is heavily hyped and evolving at different rates, but it should not be ignored".

Now, there isn't any certainty from the Quantum Computing world about when it will make the shift from scientific discoveries to applications for the average user, but there are different areas, like pharmaceuticals, machine learning, and security, that are leveraging the potential of Quantum Computing. Apart from these industries, even countries are getting their feet wet in the field of Quantum Computing lately by joining the "Quantum arms race" (the race for excelling in Quantum technology).

How is the tech industry driving Quantum Computing forward?
Both big tech giants, like IBM, Google, and Microsoft, and startups, like Rigetti, D-Wave, and Quantum Benchmark, are slowly but steadily bringing the Quantum Computing era a step closer to us.

Big Tech's big bets on Quantum Computing

IBM established a landmark in the quantum computing world back in November 2017 when it announced its first powerful quantum computer that can handle 50 qubits. The company is also working on making its 20-qubit system available through its cloud computing platform. It is collaborating with startups such as Quantum Benchmark, Zapata Computing, and 1QBit to further accelerate Quantum Computing.

Google announced Bristlecone, the largest quantum computer at 72 qubits, earlier this year, which promises to provide a testbed for research into the scalability and error rates of qubit technology. Google, along with IBM, plans to commercialize its quantum computing technologies in the next few years.

Microsoft is also working on Quantum Computing, with plans to build a topological quantum computer, in an effort to bring a practical quantum computer to commercial use more quickly. It also introduced a language called Q# (Q Sharp) last year, along with tools to help coders develop software for quantum computers. "We want to solve today's unsolvable problems and we have an opportunity with a unique, differentiated technology to do that," mentioned Todd Holmdahl, Corporate VP of Microsoft Quantum.

Startups leading the quantum computing revolution - Quantum Valley, anyone?

Apart from these major tech giants, startups are also stepping up their Quantum Computing game. D-Wave Systems Inc., a Canadian company, was the first to sell a quantum computer, named D-Wave One, back in 2011, even though the usefulness of this computer is limited. These quantum computers are based on adiabatic quantum computation and have been sold to government agencies, aerospace companies, and cybersecurity vendors.
Earlier this year, D-Wave Systems announced that it had completed testing a working prototype of its next-generation processor, as well as the installation of a D-Wave 2000Q system for a customer.

Another startup worth mentioning is San Francisco-based Rigetti Computing, which is racing against Google, Microsoft, IBM, and Intel to build quantum computing projects. Rigetti has raised nearly $70 million for the development of quantum computers. According to Chad Rigetti, CEO at Rigetti, "This is going to be a very large industry—every major organization in the world will have to have a strategy for how to use this technology".

Lastly, Quantum Benchmark, another Canadian startup, is collaborating with Google to support the development of quantum computing. Quantum Benchmark's True-Q technology has been integrated with Cirq, Google's latest open-source quantum computing framework.

Looking at the efforts these companies are putting in to boost the growth of Quantum Computing, it's hard to say who will win the quantum race. But one thing is clear: just as Silicon Valley drove product innovation and software adoption during the early days of computing, having many businesses work together and compete with each other may not be a bad idea for quantum computing.

How are countries preparing for the Quantum arms race?

Though Quantum Computing hasn't yet made its way to the consumer market, its remarkable progress in research and its infinite potential have now caught the attention of nations worldwide. China, the U.S., and other great powers are now joining the Quantum arms race. But why are these governments so serious about investing in the research and development of Quantum Computing?
The answer is simple: Quantum Computing will be a huge competitive advantage to the government that finds breakthroughs with its quantum technology-based innovation, as it will be able to cripple militaries, topple the global economy, and yield spectacular and quick solutions to complex problems. It further has the power to revolutionize everything in today's world, from cellular communications and navigation to sensors and imaging.

China continues to leap ahead in the quantum race

China has been consistently investing billions of dollars in the field of Quantum Computing. From building its first form of quantum computer back in 2017 to coming out with the world's first unhackable "quantum satellite" in 2016, China is clearly killing it when it comes to Quantum Computing. Last year, the Chinese government also funded $10 billion for the construction of the world's biggest quantum research facility, planned to open in 2020. The country plans to develop quantum computers and perform quantum metrology to support the military and national defense efforts.

The U.S. is pushing hard to win the race

Americans are also trying to up their game and win the Quantum arms race. For instance, the Pentagon is working on applying quantum computing to the future U.S. military. It plans to establish highly secure and encrypted communications for satellites, along with ensuring accurate navigation that does not need GPS signals. In fact, Congress has proposed $800 million in funding for the Pentagon's quantum projects over the next five years. Similarly, the Defense Advanced Research Projects Agency (DARPA) is interested in exploring Quantum Computing to improve general computing performance, as well as artificial intelligence and machine learning systems. Other than that, the U.S. government spends about $200 million per year on quantum research.

The UK is stepping up its game

The United Kingdom is also not far behind in the Quantum arms race.
It has a $400 million program underway for quantum-based sensing and timing. The European Union is also planning to invest $1 billion, spread over 10 years, in projects involving scientific research and the development of devices for sensing, communication, simulation, and computing. Other great powers, such as Canada, Australia, and Israel, are also acquainting themselves with the exciting world of Quantum Computing. These governments are confident that whoever makes it first gets an upper hand for life.

The Quantum Computing revolution has begun

Quantum Computing has the power to lead to revolutionary breakthroughs, even though quantum computers that can outperform regular computers are not here yet. This is also quite a complex technology that a lot of people find hard to understand, as the rules of the quantum world differ drastically from those of the physical world. While all progress made is confined to research labs at the moment, the field has a lot of potential, given the intense development and investment going into Quantum Computing from governments and large corporations across the globe. It's likely that we'll be able to see working prototypes of quantum computers emerge soon. With the Quantum revolution here, the possibilities that lie ahead are limitless. Perhaps we could finally crack the big bang theory!? Or maybe quantum computing powered by AI will just speed up the arrival of the singularity.

Google AI releases Cirq and Open Fermion-Cirq to boost Quantum computation
PyCon US 2018 Highlights: Quantum computing, blockchains, and serverless rule!
Will Rust Replace C++?
Aaron Lazar
26 Jul 2018
6 min read
This question has been asked several times, showing that developers like yourself want to know whether Rust will replace the good old, painfully difficult to program, C++. Let's find out, shall we?

Going with the trends

If I compare Rust vs C++ on Google Trends, this is what I get: C++ beats Rust to death. Each of C++'s troughs is like a dagger piercing through Rust, pinning it down to the floor! C++ seems to have its own ups and downs, but it's maintained a pretty steady trend over the past 5 years. Now, if I knock C++ out of the way, this is what I get: that's a pretty interesting trend there! I'd guess it's about a 25-degree slope. Never once has Rust seen a major dip in its gradual rise to fame. But what's making it grow that well?

What Developers Love and Why

Okay, if you're in the mood for funsies, try this out at your workplace: assemble your team members in a room and then tell them there's a huge project coming up. Tell them that the requirements state that it's to be developed in Rust. You might find 78.9% of them beaming! Give it a few moments, then say you're sorry and that you actually meant C++. Watch those smiles go right out the window! ;)

You might wonder why I used the very odd percentage, 78.9%. Well, that's just the percentage of developers who love Rust, as per the 2018 Stack Overflow survey. Now, this isn't something that happened overnight, as Rust topped the charts even in 2017, with 73.1% of respondents loving the language. You want me to talk about C++ too? Okay, if you insist, where is it? Ahhhhh... there it is!!! C++ coming up at 4th place... from the bottom!

So why this great love for Rust and this not-so-great love for C++? C++ is a great language: you get awesome performance, and you can build super fast applications with its rich function library. You can build a wide variety of applications, from GUI apps to 3D graphics, games, and desktop apps, as well as hardcore computer vision applications.
On the other hand, Rust is pretty fast too. It can be used just about anywhere C++ can be used. It has a superb community and, most of all, it's memory safe! Rust's concurrency capabilities have often been hailed as superior to C++'s, and developers all around are eager to get their hands on Rust for this feature! Wondering how I know? I have access to a dashboard that puts a smile on my face every time I check the sales of Hands-On Concurrency with Rust! ;) You should get the book too, you know. Coming back to our discussion, Rust's build and dependency injection tool, Cargo, is a breeze to work with.

Why Rust is a winner

When compared with C++, the main advantage of using Rust is safety. C++ doesn't protect its own abstractions, and so doesn't allow programmers to protect theirs either. Rust, on the other hand, does both. If you make a mistake in C++, your program will technically have no meaning, which can result in arbitrary behavior. Unlike C++, Rust protects you from such dangers, so you can instead concentrate on solving problems.

If you're already a C++ programmer, Rust will allow you to be more effective, while allowing those with little to no low-level programming experience to create things they might not have been capable of before. Mozilla was very wise in creating Rust, and the reason behind it was that they wanted web developers to have a practical and efficient language at hand, should they need to write low-level code. Kudos to Mozilla!

Now back to the question: will Rust replace C++? Should C++ really worry about Rust replacing it someday? Honestly speaking, I think it has a pretty good shot at replacing C++. Rust is much better in several aspects, like memory safety and concurrency, and it lets you think more carefully about memory usage and pointers. Rust will make you a better and more efficient programmer. The transition is already happening in various fields.
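To make the safety contrast above concrete, here is a small illustrative sketch of my own (not from the article): a use-after-move mistake that would compile in C++ and behave unpredictably is a compile-time error under Rust's ownership rules.

```rust
// Ownership of the String moves into the function, so the caller can no
// longer use the value afterwards; the compiler enforces this statically.
fn consume(message: String) -> usize {
    message.len() // `message` is dropped here, safely, exactly once
}

fn main() {
    let greeting = String::from("hello");
    let length = consume(greeting);
    // println!("{}", greeting); // uncommenting this is a compile error:
    //                           // "borrow of moved value: `greeting`"
    println!("length = {}", length);
}
```

In C++, the equivalent read of a moved-from std::string would compile and silently yield an unspecified value; in Rust, the bug simply cannot be written.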
In game development, for example, the AAA game studio Ready At Dawn Studios is switching entirely to Rust, after close to 3 decades of using C++. That's a pretty huge step, considering there might be a lot of considerations and workarounds to figure out. But if you look at the conversations on Twitter, the Rust team is delighted at this move and is willing to offer any kind of support if need be. Don't you just want to give the Rust team a massive bear hug?

IoT is another booming field where Rust is finding rapid adoption. Hardware makers like Tessel already provide support for Rust. In terms of security, Microsoft created an open source repo on GitHub for an IoT Edge Security Daemon, written entirely in Rust. Rust seems to be doing pretty well in the GUI department too, with tools like Piston. In fact, you might also find Rust being used along with the popular GUI framework Qt. All this shows that Rust is seriously growing in adoption.

While I say it might eventually be the next C++, it's probably going to take years for that to happen. This is mainly because entire ecosystems are built on C++, and they will continue to be. Today there are many dead programming languages whose applications still live on and breed newer generations of developers. (I'm looking at you, COBOL!) In this world of polyglotism, if that's even a word, the bigger question we should be asking is how much we will benefit if both C++ and Rust are implemented together. There is definitely a strong case for C++ developers to learn Rust.

The question then really is: do you want to be a programmer working in mature industries and projects, or do you want to be a developer working at the cutting edge of technological progress? I'll flip the original question and pose it to you: will you replace C++ with Rust?

Perform Advanced Programming with Rust
Learn a Framework; forget the language!
Firefox 61 builds on Firefox Quantum, adds Tab Warming, WebExtensions, and TLS 1.3

What’s new in IntelliJ IDEA 2018.2

Sugandha Lahoti
26 Jul 2018
4 min read
JetBrains has released the second version of their popular IDE for this year. IntelliJ IDEA 2018.2 is full of changes, including support for Java 11 and updates to the editor, user interface, JVM debugger, Gradle, and more. Let's have a quick look at all the different features and updates.

Updates to Java

- IntelliJ IDEA 2018.2 brings support for the upcoming Java 11. The IDE now supports local-variable syntax for lambda parameters according to JEP 323.
- Dataflow information can now be viewed in the editor.
- Quick Documentation can now be configured to pop up together with autocompletion.
- Extract Method has a new preview panel to check the results of the refactoring before actual changes are made.
- The @Contract annotation adds new return values: new, this, and paramX.
- The IDE has also updated its inspections and intention actions, including a smarter Join Line action and Stream API support, among many others.

Improvements to the editor

- IntelliJ IDEA now underlines reassigned local variables and reassigned parameters by default.
- While typing, users can use Tab to navigate outside the closing brackets or closing quotes.
- The for or while keyword is highlighted when the caret is placed on the corresponding break or continue keyword.

Changes to the user interface

- IntelliJ IDEA 2018.2 comes with support for the MacBook Touch Bar.
- Users can now use dark window headers on macOS.
- There are new, cleaner and simpler icons on the IDE toolbar and tool windows for better readability.
- The IntelliJ theme on Linux has been updated to look more modern.

Updates to the version control system

- The updated Files Merged with Conflicts dialog displays Git branch names and adds a new Group files by directory option.
- Users can now open several Log tabs in the Version Control tool window.
- The IDE now displays the Favorite branches in the Branch filter on the Log tab.
- While using the Commit and Push action, users can either skip the Push dialog completely or show it only when pushing to protected branches.
- The IDE also adds support for configuring multiple GitHub accounts.

Improvements in the JVM debugger

- IntelliJ IDEA 2018.2 includes several new breakpoint intention actions for debugging Java projects.
- Users now have the ability to filter a breakpoint hit by the caller method.

Changes to Gradle

- Included buildSrc Gradle projects are now discovered automatically.
- Users can now debug Gradle DSL blocks.

Updates to Kotlin

- The Kotlin plugin bundled with the IDE has been updated to v1.2.51.
- Users can now run Kotlin Script scratch files and see the results right inside the editor.
- An intention to convert end-of-line comments into block comments, and vice versa, has been added.
- New coroutine inspections and intentions have been added.

Improvements in the Scala plugin

- The Scala plugin can show implicits right in the editor and can even show places where implicits are not found.
- The Scalafmt formatter has been integrated.
- Semantic highlighting has been updated and auto-completion for pattern matching has been improved.

JavaScript and TypeScript changes

- The new Extract React component refactoring can be used to break a component into two.
- A new intention to convert React class components into functional components has been added.
- New features can be added to an Angular app using the integration with ng add.
- New JavaScript and TypeScript intentions: Implement interface, Create derived class, Implement members of an interface or abstract class, Generate cases for 'switch', and Iterate with 'for..of'.
- A new Code Coverage feature helps find unused code in client-side apps.

These are just a select few updates from the IntelliJ IDEA 2018.2 release. A complete list of all the changes can be found in the release notes. You can also read the JetBrains blog for a concise version.

Build a C++ Binary search tree [Tutorial]

Pavan Ramchandani
25 Jul 2018
19 min read
A binary tree is a hierarchical data structure whose behavior is similar to a tree, as it contains a root and leaves (nodes that have no children). The root of a binary tree is the topmost node. Each node can have at most two children, which are referred to as the left child and the right child. A node that has at least one child is the parent of its children. A node that has no child is a leaf.

In this tutorial, you will learn about the binary tree data structure, its principles, and strategies for applying it to various applications. This C++ tutorial has been taken from C++ Data Structures and Algorithms. Read more here.

Take a look at the following binary tree. From the binary tree diagram, we can conclude the following:

- The root of the tree is the node of element 1, since it's the topmost node
- The children of element 1 are element 2 and element 3
- The parent of elements 2 and 3 is element 1
- There are four leaves in the tree: element 4, element 5, element 6, and element 7, since they have no children

This hierarchical data structure is usually used to store information that forms a hierarchy, such as the file system of a computer.

Building a binary search tree ADT

A binary search tree (BST) is a sorted binary tree, in which we can easily search for any key using the binary search algorithm. For the BST to be sorted, it has to have the following properties:

- The node's left subtree contains only keys that are smaller than the node's key
- The node's right subtree contains only keys that are greater than the node's key
- You cannot duplicate a node's key value

By having the preceding properties, we can easily search for a key value, as well as find the maximum or minimum key value. Suppose we have the following BST. As we can see in the tree diagram, it has been sorted, since all of the keys in the root's left subtree are smaller than the root's key, and all of the keys in the root's right subtree are greater than the root's key.
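Since the diagrams themselves are not reproduced in this text version, here is a minimal, self-contained sketch that encodes the seven-element binary tree from the first diagram and checks which nodes are leaves. The TreeNode name follows the book's earlier chapter; the constructor and helper functions are illustrative additions, not the book's code:

```cpp
#include <cassert>

// Minimal binary tree node (names are illustrative).
class TreeNode
{
public:
    int Key;
    TreeNode* Left;
    TreeNode* Right;
    TreeNode(int key) : Key(key), Left(nullptr), Right(nullptr) {}
};

// Build the seven-element tree from the diagram:
//         1
//       /   \
//      2     3
//     / \   / \
//    4   5 6   7
TreeNode* BuildExampleTree()
{
    TreeNode* root = new TreeNode(1);
    root->Left = new TreeNode(2);
    root->Right = new TreeNode(3);
    root->Left->Left = new TreeNode(4);
    root->Left->Right = new TreeNode(5);
    root->Right->Left = new TreeNode(6);
    root->Right->Right = new TreeNode(7);
    return root;
}

// A leaf is a node with no children.
bool IsLeaf(const TreeNode* node)
{
    return node->Left == nullptr && node->Right == nullptr;
}
```

Note that this tree is a plain binary tree, not yet a BST: the keys are placed by position, not by the ordering rules discussed next.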
The preceding BST is a balanced BST, since both the left and right subtrees have an equal height (we are going to discuss this further in the upcoming section). However, since we have to put a greater new key in the right subtree and a smaller new key in the left subtree, we might end up with an unbalanced BST, called a skewed left or a skewed right BST. Please see the following diagram:

The preceding image is a sample of a skewed left BST, since there's no right subtree. We can also find a BST that has no left subtree, which is called a skewed right BST, as shown in the following diagram:

As we can see in the two skewed BST diagrams, the height of the BST becomes taller, since the height equals N - 1 (where N is the total number of keys in the BST), which is five. Compare this with the balanced BST, whose root's height is only three.

To create a BST in C++, we need to modify the TreeNode class from the preceding binary tree discussion, Building a binary tree ADT. We need to add a Parent property so that we can track the parent of each node. It will make things easier for us when we traverse the tree. The class should be as follows:

```cpp
class BSTNode
{
public:
    int Key;
    BSTNode * Left;
    BSTNode * Right;
    BSTNode * Parent;
};
```

There are several basic operations which a BST usually has, and they are as follows:

- Insert() is used to add a new node to the current BST. If it's the first node we have added, the inserted node becomes the root node.
- PrintTreeInOrder() is used to print all of the keys in the BST, sorted from the smallest key to the greatest key.
- Search() is used to find a given key in the BST. If the key exists, it returns TRUE; otherwise, it returns FALSE.
- FindMin() and FindMax() are used to find the minimum key and the maximum key that exist in the BST.
- Successor() and Predecessor() are used to find the successor and predecessor of a given key.
We are going to discuss these later in the upcoming section. Remove() is used to remove a given key from the BST.

Now, let's discuss these BST operations further.

Inserting a new key into a BST

Inserting a key into the BST actually means adding a new node according to the behavior of the BST. Each time we want to insert a key, we have to compare it with the root node (if there's no root beforehand, the inserted key becomes the root) and check whether it's smaller or greater than the root's key. If the given key is greater than the currently selected node's key, then go to the right subtree. Otherwise, go to the left subtree. Keep checking this until we reach a node with no child in the chosen direction, so that we can add the new node there. The following is the implementation of the Insert() operation in C++:

```cpp
BSTNode * BST::Insert(BSTNode * node, int key)
{
    // If BST doesn't exist,
    // create a new node as root;
    // this branch is also reached when
    // there's no child node,
    // so we can insert a new node here
    if(node == NULL)
    {
        node = new BSTNode;
        node->Key = key;
        node->Left = NULL;
        node->Right = NULL;
        node->Parent = NULL;
    }
    // If the given key is greater than
    // node's key then go to right subtree
    else if(node->Key < key)
    {
        node->Right = Insert(node->Right, key);
        node->Right->Parent = node;
    }
    // If the given key is smaller than
    // node's key then go to left subtree
    else
    {
        node->Left = Insert(node->Left, key);
        node->Left->Parent = node;
    }

    return node;
}
```

As we can see in the preceding code, we need to pass the selected node and the new key to the function.
However, we will always pass the root node as the selected node when performing the Insert() operation, so we can invoke the preceding code with the following Insert() function:

```cpp
void BST::Insert(int key)
{
    // Invoking Insert() function
    // and passing root node and given key
    root = Insert(root, key);
}
```

Based on the implementation of the Insert() operation, we can see that the time complexity of inserting a new key into the BST is O(h), where h is the height of the BST. However, if we insert a new key into an empty BST, the time complexity will be O(1), which is the best-case scenario. And if we insert a new key into a skewed tree, the time complexity will be O(N), where N is the total number of keys in the BST, which is the worst-case scenario.

Traversing a BST in order

We have successfully created a new BST and can insert a new key into it. Now, we need to implement the PrintTreeInOrder() operation, which will traverse the BST in order, from the smallest key to the greatest key. To achieve this, we go to the leftmost node first and end at the rightmost node. The code should be as follows:

```cpp
void BST::PrintTreeInOrder(BSTNode * node)
{
    // Stop printing if no node found
    if(node == NULL)
        return;

    // Get the smallest key first,
    // which is in the left subtree
    PrintTreeInOrder(node->Left);

    // Print the key
    std::cout << node->Key << " ";

    // Continue to the greatest key,
    // which is in the right subtree
    PrintTreeInOrder(node->Right);
}
```

Since we will always traverse from the root node, we can invoke the preceding code as follows:

```cpp
void BST::PrintTreeInOrder()
{
    // Traverse the BST
    // from root node
    // then print all keys
    PrintTreeInOrder(root);
    std::cout << std::endl;
}
```

The time complexity of the PrintTreeInOrder() function is O(N), where N is the total number of keys, for both the best and the worst cases, since it always traverses all keys.

Finding out whether a key exists in a BST

Suppose we have a BST and need to find out whether a key exists in it.
It's quite easy to check whether a given key exists in a BST, since we just need to compare the given key with the current node. If the key is smaller than the current node's key, we go to the left subtree; otherwise, we go to the right subtree. We do this until we find the key or until there are no more nodes to visit. The implementation of the Search() operation should be as follows:

```cpp
BSTNode * BST::Search(BSTNode * node, int key)
{
    // The given key is
    // not found in BST
    if (node == NULL)
        return NULL;
    // The given key is found
    else if(node->Key == key)
        return node;
    // The given key is greater than
    // current node's key
    else if(node->Key < key)
        return Search(node->Right, key);
    // The given key is smaller than
    // current node's key
    else
        return Search(node->Left, key);
}
```

Since we will always search for a key from the root node, we can create another Search() function as follows:

```cpp
bool BST::Search(int key)
{
    // Invoking Search() operation
    // and passing root node
    BSTNode * result = Search(root, key);

    // If key is found, returns TRUE
    // otherwise returns FALSE
    return result == NULL ? false : true;
}
```

The time complexity of finding a key in the BST is O(h), where h is the height of the BST. If the key lies in the root node, the time complexity will be O(1), which is the best case. If we search for a key in a skewed tree, the time complexity will be O(N), where N is the total number of keys in the BST, which is the worst case.

Retrieving the minimum and maximum key values

Finding the minimum and maximum key values in a BST is also quite simple. To get the minimum key value, we just need to go to the leftmost node and get its key value. On the contrary, to get the maximum key value, we just need to go to the rightmost node.
The following is the implementation of the FindMin() operation to retrieve the minimum key value, and the FindMax() operation to retrieve the maximum key value:

```cpp
int BST::FindMin(BSTNode * node)
{
    if(node == NULL)
        return -1;
    else if(node->Left == NULL)
        return node->Key;
    else
        return FindMin(node->Left);
}

int BST::FindMax(BSTNode * node)
{
    if(node == NULL)
        return -1;
    else if(node->Right == NULL)
        return node->Key;
    else
        return FindMax(node->Right);
}
```

We return -1 if we cannot find the minimum or maximum value in the tree, since we assume that the tree only contains positive integers. If we intend to store negative integers as well, we need to modify the functions' implementation, for instance, by returning NULL if no minimum or maximum value is found.

As usual, we will always find the minimum and maximum key values starting from the root node, so we can invoke the preceding operations as follows:

```cpp
int BST::FindMin()
{
    return FindMin(root);
}

int BST::FindMax()
{
    return FindMax(root);
}
```

Similar to the Search() operation, the time complexity of the FindMin() and FindMax() operations is O(h), where h is the height of the BST. However, if we find the maximum key value in a skewed left BST, the time complexity will be O(1), which is the best case, since it doesn't have any right subtree. This also happens if we find the minimum key value in a skewed right BST. The worst case appears if we try to find the minimum key value in a skewed left BST, or the maximum key value in a skewed right BST, since then the time complexity will be O(N).

Finding out the successor of a key in a BST

Other properties that we can find in a BST are the successor and the predecessor of a key. We are going to create two functions named Successor() and Predecessor() in C++. But before we write the code, let's discuss how to find the successor and the predecessor of a key in a BST.
In this section, we are going to learn about the successor first, and then we will discuss the predecessor in the upcoming section. There are three rules for finding the successor of a key in a BST. Suppose we have a key, k, that we have searched for using the previous Search() function. We will also use our preceding BST to find the successor of a specific key. The successor of k can be found as follows:

1. If k has a right subtree, the successor of k will be the minimum integer in the right subtree of k. From our preceding BST, if k = 31, Successor(31) will give us 53, since it's the minimum integer in the right subtree of 31. Please take a look at the following diagram:
2. If k does not have a right subtree, we have to traverse the ancestors of k until we find the first node, n, which is greater than node k. After we find node n, we will see that node k is the maximum element in the left subtree of n. From our preceding BST, if k = 15, Successor(15) will give us 23, since 23 is the first ancestor greater than 15. Please take a look at the following diagram:
3. If k is the maximum integer in the BST, there's no successor of k. From the preceding BST, if we run Successor(88), we will get -1, which means no successor has been found, since 88 is the maximum key of the BST.
Based on our preceding discussion about how to find the successor of a given key in a BST, we can create a Successor() function in C++ with the following implementation:

```cpp
int BST::Successor(BSTNode * node)
{
    // The successor is the minimum key value
    // of the right subtree
    if (node->Right != NULL)
    {
        return FindMin(node->Right);
    }
    // If there is no right subtree
    else
    {
        BSTNode * parentNode = node->Parent;
        BSTNode * currentNode = node;

        // While currentNode is not root and
        // currentNode is its parent's right child,
        // continue moving up
        while ((parentNode != NULL) &&
               (currentNode == parentNode->Right))
        {
            currentNode = parentNode;
            parentNode = currentNode->Parent;
        }

        // If parentNode is not NULL,
        // then the key of parentNode is
        // the successor of node
        return parentNode == NULL ? -1 : parentNode->Key;
    }
}
```

However, since we have to find the given key's node first, we have to run Search() prior to invoking the preceding Successor() function. The complete code for finding the successor of a given key in a BST is as follows:

```cpp
int BST::Successor(int key)
{
    // Search for the key's node first
    BSTNode * keyNode = Search(root, key);

    // Return the successor's key.
    // If the key is not found or
    // the successor is not found,
    // return -1
    return keyNode == NULL ? -1 : Successor(keyNode);
}
```

From our preceding Successor() operation, we can say that the average time complexity of the operation is O(h), where h is the height of the BST. However, if we try to find the successor of the maximum key in a skewed right BST, the time complexity of the operation is O(N), which is the worst-case scenario.

Finding out the predecessor of a key in a BST

If k has a left subtree, the predecessor of k will be the maximum integer in the left subtree of k. From our preceding BST, if k = 12, Predecessor(12) will be 7, since it's the maximum integer in the left subtree of 12.
Please take a look at the following diagram:

If k does not have a left subtree, we have to traverse the ancestors of k until we find the first node, n, which is lower than node k. After we find node n, we will see that node n is the minimum element of the traversed elements. From our preceding BST, if k = 29, Predecessor(29) will give us 23, since 23 is the first ancestor lower than 29. Please take a look at the following diagram:

If k is the minimum integer in the BST, there's no predecessor of k. From the preceding BST, if we run Predecessor(3), we will get -1, which means no predecessor was found, since 3 is the minimum key of the BST.

Now, we can implement the Predecessor() operation in C++ as follows:

```cpp
int BST::Predecessor(BSTNode * node)
{
    // The predecessor is the maximum key value
    // of the left subtree
    if (node->Left != NULL)
    {
        return FindMax(node->Left);
    }
    // If there is no left subtree
    else
    {
        BSTNode * parentNode = node->Parent;
        BSTNode * currentNode = node;

        // While currentNode is not root and
        // currentNode is its parent's left child,
        // continue moving up
        while ((parentNode != NULL) &&
               (currentNode == parentNode->Left))
        {
            currentNode = parentNode;
            parentNode = currentNode->Parent;
        }

        // If parentNode is not NULL,
        // then the key of parentNode is
        // the predecessor of node
        return parentNode == NULL ? -1 : parentNode->Key;
    }
}
```

And, similar to the Successor() operation, we have to search for the node of the given key prior to invoking the preceding Predecessor() function. The complete code for finding the predecessor of a given key in a BST is as follows:

```cpp
int BST::Predecessor(int key)
{
    // Search for the key's node first
    BSTNode * keyNode = Search(root, key);

    // Return the predecessor's key.
    // If the key is not found or
    // the predecessor is not found,
    // return -1
    return keyNode == NULL ? -1 : Predecessor(keyNode);
}
```

Similar to our preceding Successor() operation, the time complexity of the Predecessor() operation is O(h), where h is the height of the BST.
However, if we try to find the predecessor of the minimum key in a skewed left BST, the time complexity of the operation is O(N), which is the worst-case scenario.

Removing a node based on a given key

The last BST operation that we are going to discuss is removing a node based on a given key. We will create a Remove() operation in C++. There are three possible cases for removing a node from a BST, and they are as follows:

1. Removing a leaf (a node that doesn't have any children). In this case, we just need to remove the node. From our preceding BST, we can remove keys 7, 15, 29, and 53, since they are leaves.
2. Removing a node that has only one child (either a left or a right child). In this case, we have to connect the child to the parent of the node. After that, we can remove the target node safely. As an example, if we want to remove node 3, we have to point the Parent pointer of node 7 to node 12 and make the left pointer of node 12 point to node 7. Then, we can safely remove node 3.
3. Removing a node that has two children (left and right). In this case, we have to find the successor (or predecessor) of the node's key. After that, we can replace the target node with the successor (or predecessor) node. Suppose we want to remove node 31 and use 53 as its successor. Then, we can remove node 31 and replace it with node 53. Now, node 53 will have two children: node 29 on the left and node 88 on the right.

Also, similar to the Search() operation, if the target node doesn't exist, we just need to return NULL.
The implementation of the Remove() operation in C++ is as follows:

```cpp
BSTNode * BST::Remove(BSTNode * node, int key)
{
    // The given key is
    // not found in BST
    if (node == NULL)
        return NULL;

    // Target node is found
    if (node->Key == key)
    {
        // If the node is a leaf node,
        // it can be safely removed
        if (node->Left == NULL && node->Right == NULL)
            node = NULL;
        // The node has only one child, at the right
        else if (node->Left == NULL && node->Right != NULL)
        {
            // The only child will be connected
            // directly to the node's parent
            node->Right->Parent = node->Parent;

            // Bypass node
            node = node->Right;
        }
        // The node has only one child, at the left
        else if (node->Left != NULL && node->Right == NULL)
        {
            // The only child will be connected
            // directly to the node's parent
            node->Left->Parent = node->Parent;

            // Bypass node
            node = node->Left;
        }
        // The node has two children (left and right)
        else
        {
            // Find the successor to replace the node
            int successorKey = Successor(key);

            // Replace node's key with successor's key
            node->Key = successorKey;

            // Delete the old successor's key
            node->Right = Remove(node->Right, successorKey);
        }
    }
    // Target node's key is smaller than
    // the given key, so search to the right
    else if (node->Key < key)
        node->Right = Remove(node->Right, key);
    // Target node's key is greater than
    // the given key, so search to the left
    else
        node->Left = Remove(node->Left, key);

    // Return the updated BST
    return node;
}
```

Since we will always remove a node starting from the root node, we can simplify the preceding Remove() operation by creating the following one:

```cpp
void BST::Remove(int key)
{
    root = Remove(root, key);
}
```

As shown in the preceding Remove() code, the time complexity of the operation is O(1) for both case 1 (the node has no child) and case 2 (the node has only one child). For case 3 (the node has two children), the time complexity is O(h), where h is the height of the BST, since we have to find the successor of the node's key.
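To see all three removal cases in action, here is a compact, self-contained sketch. It is not the book's BST class: parent pointers are omitted for brevity, and the in-order traversal collects keys into a vector instead of printing, so the result can be checked programmatically. Case 3 finds the in-order successor as the minimum of the right subtree, just as described above:

```cpp
#include <cassert>
#include <vector>

// Compact BST used only to demonstrate the three removal cases.
struct Node
{
    int key;
    Node *left = nullptr, *right = nullptr;
};

Node* insert(Node* n, int key)
{
    if (!n) { n = new Node; n->key = key; return n; }
    if (n->key < key) n->right = insert(n->right, key);
    else              n->left  = insert(n->left, key);
    return n;
}

int findMin(Node* n)
{
    return n->left ? findMin(n->left) : n->key;
}

Node* removeKey(Node* n, int key)
{
    if (!n) return nullptr;
    if (n->key == key)
    {
        if (!n->left && !n->right)          // case 1: leaf
        {
            delete n; return nullptr;
        }
        else if (!n->left || !n->right)     // case 2: one child
        {
            Node* child = n->left ? n->left : n->right;
            delete n; return child;         // bypass the node
        }
        else                                // case 3: two children
        {
            int succ = findMin(n->right);   // in-order successor
            n->key = succ;                  // replace the key
            n->right = removeKey(n->right, succ);
        }
    }
    else if (n->key < key) n->right = removeKey(n->right, key);
    else                   n->left  = removeKey(n->left, key);
    return n;
}

// In-order traversal, collecting keys in sorted order.
void inOrder(Node* n, std::vector<int>& out)
{
    if (!n) return;
    inOrder(n->left, out);
    out.push_back(n->key);
    inOrder(n->right, out);
}
```

Inserting the keys used in this article (23, 12, 31, 3, 15, 29, 53, 7, 88) and then removing 3 (one child), 7 (leaf), and 31 (two children) leaves the remaining keys in sorted in-order sequence.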
If you found this tutorial useful, do check out the book C++ Data Structures and Algorithms for more material on data structures and algorithms, with real-world implementations in C++.

Implementing C++ libraries in Delphi for HPC [Tutorial]

Pavan Ramchandani
24 Jul 2018
16 min read
Using C object files in Delphi is hard, but possible. Linking to C++ object files is, however, nearly impossible. The problem does not lie within the object files themselves, but in C++. While C is hardly more than an assembler with improved syntax, C++ represents a sophisticated high-level language with runtime support for strings, objects, exceptions, and more. All these features are part of almost any C++ program and are, as such, compiled into (almost) any object file produced by C++.

In this tutorial, we will leverage various C++ libraries that enable high performance with Delphi. It starts with memory management, an important topic for any high-performance application. This article is an excerpt from a book written by Primož Gabrijelčič, titled Delphi High Performance.

The problem here is that Delphi has no idea how to deal with any of that. A C++ object is not equal to a Delphi object. Delphi has no idea how to call functions of a C++ object, how to deal with its inheritance chain, how to create and destroy such objects, and so on. The same holds for strings, exceptions, streams, and other C++ concepts.

If you can compile the C++ source with C++Builder, then you can create a package (.bpl) that can be used from a Delphi program. Most of the time, however, you will not be dealing with a source project. Instead, you'll want to use a commercial library that only gives you a bunch of C++ header files (.h) and one or more static libraries (.lib). Most of the time, the only Windows version of such a library will be compiled with Microsoft's Visual Studio.

A more general approach to this problem is to introduce a proxy DLL created in C++. You will have to create it in the same development environment as was used to create the library you are trying to link into the project. On Windows, that will in most cases be Visual Studio. That will enable us to include the library without any problems.
To allow Delphi to use this DLL (and, as such, the library), the DLL should expose a simple interface in the Windows API style. Instead of exposing C++ objects, the API must expose the methods implemented by the objects as normal (non-object) functions and procedures. As the objects cannot cross the API boundary, we must find some other way to represent them on the Delphi side.

Instead of showing how to write a DLL wrapper for an existing (and probably quite complicated) C++ library, I have decided to write a very simple C++ library that exposes a single class, implementing only two methods. As compiling this library requires Microsoft's Visual Studio, which not all of you have installed, I have also included the compiled version (DllLib1.dll) in the code archive.

The Visual Studio solution is stored in the StaticLib1 folder and contains two projects. StaticLib1 is the project used to create the library, while the Dll1 project implements the proxy DLL. The static library implements the CppClass class, which is defined in the header file CppClass.h. Whenever you are dealing with a C++ library, the distribution will also contain one or more header files. They are needed if you want to use the library in a C++ project, such as in the proxy DLL Dll1.

The header file for the demo library StaticLib1 is shown in the following. We can see that the code declares a single class, CppClass, which implements a constructor (CppClass()), a destructor (~CppClass()), a method accepting an integer parameter (void setData(int)), and a function returning an integer (int getSquare()). The class also contains one private integer field, data:

```cpp
#pragma once

class CppClass
{
    int data;
public:
    CppClass();
    ~CppClass();
    void setData(int);
    int getSquare();
};
```

The implementation of the CppClass class is stored in the CppClass.cpp file. You don't need this file when implementing the proxy DLL.
When we are using a C++ library, we are strictly coding to the interface, and the interface is stored in the header file. In our case, we have the full source, so we can look inside the implementation too. The constructor and destructor don't do anything, so I'm not showing them here. The other two methods are as follows. The setData method stores its parameter in the internal field, and the getSquare function returns the squared value of the internal field:

```cpp
void CppClass::setData(int value)
{
    data = value;
}

int CppClass::getSquare()
{
    return data * data;
}
```

This code doesn't contain anything that we couldn't write in 60 seconds in Delphi. It does, however, serve as a perfect simple example for writing a proxy DLL.

Creating such a DLL in Visual Studio is easy. You just have to select File | New | Project, and select the Dynamic-Link Library (DLL) project type from the Visual C++ | Windows Desktop branch. The Dll1 project from the code archive has only two source files. The file dllmain.cpp was created automatically by Visual Studio and contains the standard DllMain method. You can change this file if you have to run project-specific code when a program and/or a thread attaches to, or detaches from, the DLL. In my example, this file was left just as Visual Studio created it.

The second file, StaticLibWrapper.cpp, fully implements the proxy DLL. It starts with two include lines (shown in the following), which bring in the required RTL header, stdafx.h, and the header definition for our C++ class, CppClass.h:

```cpp
#include "stdafx.h"
#include "CppClass.h"
```

The proxy has to be able to find our header file. There are two ways to do that. We could simply copy it to the folder containing the source files for the DLL project, or we can add it to the project's search path. The second approach can be configured in Project | Properties | Configuration Properties | C/C++ | General | Additional Include Directories. This is also the approach used by the demonstration program.
The DLL project must also be able to find the static library that implements the CppClass object. The path to the library file should be set in the project options, in the Configuration Properties | Linker | General | Additional Library Directories settings. You should put the name of the library (StaticLib1.lib) in the Linker | Input | Additional Dependencies settings.

The next line in the source file defines a macro called EXPORT, which will be used later in the program to mark a function as exported. We have to do that for every DLL function that we want to use from the Delphi code. Later, we'll see how this macro is used:

```cpp
#define EXPORT comment(linker, "/EXPORT:" __FUNCTION__ "=" __FUNCDNAME__)
```

The next part of the StaticLibWrapper.cpp file implements an IndexAllocator class, which is used internally to cache C++ objects. It associates C++ objects with simple integer identifiers, which are then used outside the DLL to represent the objects. I will not show this class in the book, as the implementation is not that important. You only have to know how to use it.

This class is implemented as a simple static array of pointers and contains at most MAXOBJECTS objects. The constant MAXOBJECTS is set to 100 in the current code, which limits the number of C++ objects created by the Delphi code to 100. Feel free to modify the code if you need to create more objects.

The following code fragment shows the three public functions implemented by the IndexAllocator class. The Allocate function takes a pointer, obj, stores it in the cache, and returns its index in the deviceIndex parameter. The result of the function is FALSE if the cache is full and TRUE otherwise. The Release function accepts an index (which was previously returned from Allocate) and marks the cache slot at that index as empty. This function returns FALSE if the index is invalid (does not represent a value returned from Allocate) or if the cache slot for that index is already empty.
The last function, Get, also accepts an index and returns the pointer associated with that index. It returns NULL if the index is invalid or if the cache slot for that index is empty:

bool Allocate(int& deviceIndex, void* obj)
bool Release(int deviceIndex)
void* Get(int deviceIndex)

Let's move now to functions that are exported from the DLL. The first two—Initialize and Finalize—are used to initialize internal structures, namely the GAllocator of type IndexAllocator, and to clean up before the DLL is unloaded. Instead of looking into them, I'd rather show you the more interesting stuff, namely functions that deal with CppClass. The CreateCppClass function creates an instance of CppClass, stores it in the cache, and returns its index. The three important parts of the declaration are: extern "C", WINAPI, and #pragma EXPORT. extern "C" is there to guarantee that the CreateCppClass name will not be changed when it is stored in the library. The C++ compiler tends to mangle (change) function names to support method overloading (the same thing happens in Delphi) and this declaration prevents that. WINAPI changes the calling convention from cdecl, which is standard for C programs, to stdcall, which is commonly used in DLLs. Later, we'll see that we also have to specify the correct calling convention on the Delphi side. The last important part, #pragma EXPORT, uses the previously defined EXPORT macro to mark this function as exported. CreateCppClass returns 0 if the operation was successful and -1 if it failed. The same approach is used in all functions exported from the demo DLL:

extern "C" int WINAPI CreateCppClass(int& index)
{
#pragma EXPORT
  CppClass* instance = new CppClass;
  if (!GAllocator->Allocate(index, (void*)instance)) {
    delete instance;
    return -1;
  }
  else
    return 0;
}

Similarly, the DestroyCppClass function (not shown here) accepts an index parameter, fetches the object from the cache, and destroys it.
The DLL also exports two functions that allow the DLL user to operate on an object. The first one, CppClass_setValue, accepts an index of the object and a value. It fetches the CppClass instance from the cache (given the index) and calls its setData method, passing it the value:

extern "C" int WINAPI CppClass_setValue(int index, int value)
{
#pragma EXPORT
  CppClass* instance = (CppClass*)GAllocator->Get(index);
  if (instance == NULL)
    return -1;
  else {
    instance->setData(value);
    return 0;
  }
}

The second function, CppClass_getSquare, also accepts an object index and uses it to access the CppClass object. After that, it calls the object's getSquare function and stores the result in the output parameter, value:

extern "C" int WINAPI CppClass_getSquare(int index, int& value)
{
#pragma EXPORT
  CppClass* instance = (CppClass*)GAllocator->Get(index);
  if (instance == NULL)
    return -1;
  else {
    value = instance->getSquare();
    return 0;
  }
}

A proxy DLL that uses a mapping table is a bit complicated and requires some work. We could also approach the problem in a much simpler manner—by treating the address of an object as its external identifier. In other words, the CreateCppClass function would create an object and then return its address as an untyped pointer. A CppClass_getSquare, for example, would accept this pointer, cast it to a CppClass instance, and execute an operation on it. An alternative version of these two methods is shown in the following:

extern "C" int WINAPI CreateCppClass2(void*& ptr)
{
#pragma EXPORT
  ptr = new CppClass;
  return 0;
}

extern "C" int WINAPI CppClass_getSquare2(void* index, int& value)
{
#pragma EXPORT
  value = ((CppClass*)index)->getSquare();
  return 0;
}

This approach is simpler but offers far less security in the form of error checking. The table-based approach can check whether an index represents a valid value, while the latter version cannot know if the pointer parameter is valid or not.
If we make a mistake on the Delphi side and pass in an invalid pointer, the code would treat it as an instance of a class, do some operations on it, possibly corrupt some memory, and maybe crash. Finding the source of such errors is very hard. That's why I prefer to write more verbose code that implements some safety checks on the code that returns pointers.

Using a proxy DLL in Delphi

To use any DLL from a Delphi program, we must first import functions from the DLL. There are different ways to do this—we could use static linking, dynamic linking, and static linking with delayed loading. There's plenty of information on the internet about the art of DLL writing in Delphi, so I won't dig into this topic. I'll just stick with the most modern approach—delay loading. The code archive for this book includes two demo programs, which demonstrate how to use the DllLib1.dll library. The simpler one, CppClassImportDemo, uses the DLL functions directly, while CppClassWrapperDemo wraps them in an easy-to-use class. Both projects use the CppClassImport unit to import the DLL functions into the Delphi program. The following code fragment shows the interface part of that unit, which tells the Delphi compiler which functions from the DLL should be imported and what parameters they have. As with the C++ part, there are three important parts to each declaration. Firstly, stdcall specifies that the function call should use the stdcall (or, as it is known in C, WINAPI) calling convention. Secondly, the name after the name specifier should match the exported function name from the C++ source. And thirdly, the delayed keyword specifies that the program should not try to find this function in the DLL when it is started, but only when the code calls the function.
This allows us to check whether the DLL is present at all before we call any of the functions:

const
  CPP_CLASS_LIB = 'DllLib1.dll';

function Initialize: integer; stdcall;
  external CPP_CLASS_LIB name 'Initialize' delayed;
function Finalize: integer; stdcall;
  external CPP_CLASS_LIB name 'Finalize' delayed;
function CreateCppClass(var index: integer): integer; stdcall;
  external CPP_CLASS_LIB name 'CreateCppClass' delayed;
function DestroyCppClass(index: integer): integer; stdcall;
  external CPP_CLASS_LIB name 'DestroyCppClass' delayed;
function CppClass_setValue(index: integer; value: integer): integer; stdcall;
  external CPP_CLASS_LIB name 'CppClass_setValue' delayed;
function CppClass_getSquare(index: integer; var value: integer): integer; stdcall;
  external CPP_CLASS_LIB name 'CppClass_getSquare' delayed;

The implementation part of this unit (not shown here) shows how to catch errors that occur during delayed loading—that is, when the code that calls any of the imported functions tries to find that function in the DLL. If you get an External exception C06D007F when you try to call a delay-loaded function, you have probably mistyped a name—either in C++ or in Delphi. You can use the tdump utility that comes with Delphi to check which names are exported from the DLL. The syntax is tdump -d <dll_name.dll>. If the code crashes when you call a DLL function, check whether both sides correctly define the calling convention. Also check if all the parameters have correct types on both sides and if the var parameters are marked as such on both sides. To use the DLL, the code in the CppClassMain unit first calls the exported Initialize function from the form's OnCreate handler to initialize the DLL. The cleanup function, Finalize, is called from the OnDestroy handler to clean up the DLL.
All parts of the code check whether the DLL functions return the OK status (value 0):

procedure TfrmCppClassDemo.FormCreate(Sender: TObject);
begin
  if Initialize <> 0 then
    ListBox1.Items.Add('Initialize failed');
end;

procedure TfrmCppClassDemo.FormDestroy(Sender: TObject);
begin
  if Finalize <> 0 then
    ListBox1.Items.Add('Finalize failed');
end;

When you click on the Use import library button, the following code executes. It uses the DLL to create a CppClass object by calling the CreateCppClass function. This function puts an integer value into the idxClass variable. This value is used as an identifier that identifies a CppClass object when calling other functions. The code then calls CppClass_setValue to set the internal field of the CppClass object and CppClass_getSquare to call the getSquare method and to return the calculated value. At the end, DestroyCppClass destroys the CppClass object:

procedure TfrmCppClassDemo.btnImportLibClick(Sender: TObject);
var
  idxClass: Integer;
  value: Integer;
begin
  if CreateCppClass(idxClass) <> 0 then
    ListBox1.Items.Add('CreateCppClass failed')
  else if CppClass_setValue(idxClass, SpinEdit1.Value) <> 0 then
    ListBox1.Items.Add('CppClass_setValue failed')
  else if CppClass_getSquare(idxClass, value) <> 0 then
    ListBox1.Items.Add('CppClass_getSquare failed')
  else begin
    ListBox1.Items.Add(Format('square(%d) = %d', [SpinEdit1.Value, value]));
    if DestroyCppClass(idxClass) <> 0 then
      ListBox1.Items.Add('DestroyCppClass failed');
  end;
end;

This approach is relatively simple but long-winded and error-prone. A better way is to write a wrapper Delphi class that implements the same public interface as the corresponding C++ class. The second demo, CppClassWrapperDemo, contains a unit CppClassWrapper which does just that. This unit implements a TCppClass class, which maps to its C++ counterpart.
It only has one internal field, which stores the index of the C++ object as returned from the CreateCppClass function:

type
  TCppClass = class
  strict private
    FIndex: integer;
  public
    class procedure InitializeWrapper;
    class procedure FinalizeWrapper;
    constructor Create;
    destructor Destroy; override;
    procedure SetValue(value: integer);
    function GetSquare: integer;
  end;

I won't show all of the functions here as they are all equally simple. One—or maybe two—will suffice. The constructor just calls the CreateCppClass function, checks the result, and stores the resulting index in the internal field:

constructor TCppClass.Create;
begin
  inherited Create;
  if CreateCppClass(FIndex) <> 0 then
    raise Exception.Create('CreateCppClass failed');
end;

Similarly, GetSquare just forwards its job to the CppClass_getSquare function:

function TCppClass.GetSquare: integer;
begin
  if CppClass_getSquare(FIndex, Result) <> 0 then
    raise Exception.Create('CppClass_getSquare failed');
end;

When we have this wrapper, the code in the main unit becomes very simple—and very Delphi-like. Once the initialization in the OnCreate event handler is done, we can just create an instance of TCppClass and work with it:

procedure TfrmCppClassDemo.FormCreate(Sender: TObject);
begin
  TCppClass.InitializeWrapper;
end;

procedure TfrmCppClassDemo.FormDestroy(Sender: TObject);
begin
  TCppClass.FinalizeWrapper;
end;

procedure TfrmCppClassDemo.btnWrapClick(Sender: TObject);
var
  cpp: TCppClass;
begin
  cpp := TCppClass.Create;
  try
    cpp.SetValue(SpinEdit1.Value);
    ListBox1.Items.Add(Format('square(%d) = %d', [SpinEdit1.Value, cpp.GetSquare]));
  finally
    FreeAndNil(cpp);
  end;
end;

To summarize, we learned how a C++ library can be hidden behind a proxy DLL and used from Delphi as the primary language for high-performance computing. If you found this post useful, do check out the book Delphi High Performance to learn more about the intricacies of high-performance programming with Delphi.
Natasha Mathur
24 Jul 2018
3 min read
No new PEPs will be approved for Python in 2018, with BDFL election pending.

According to an email exchange, the Python team will not approve any proposals or PEPs until January 2019, as the team is planning to choose a new leader, with a deadline set for January 1, 2019. The news comes after Python founder and "Benevolent Dictator for Life" (BDFL) Guido van Rossum announced earlier this month that he was standing down from the decision-making process. Here's a quick look at why Guido may have quit as the Python chief (BDFL). A PEP - or Python Enhancement Proposal - is a design document consisting of information important to the Python community. If a new feature is to be added to the language, it is first detailed in a PEP. It will include technical specifications as well as the rationale for the feature. A passionate discussion regarding PEPs 576, 579 and 580 started last week on the python-dev mailing list, with Jeroen Demeyer, a Cython contributor, stating how he finally came up with real-life benchmarks proving why a faster C calling protocol is required in Python. He implemented Cython compilation of SageMath on Python 2.7.15 (SageMath has not yet been ported to Python 3). But he mentioned that the conclusions of his implementation should be valid for newer Python versions as well. Guido, in his capacity as a core developer, responded to this request. In the email thread, he writes that "Jeroen was asked to provide benchmarks but only provided them for Python 2. The reasoning that not much has changed could affect the benchmarks feels a bit optimistic, that's all. The new BDFL may be less demanding though." Stefan Behnel, a core Cython developer, was disappointed with van Rossum's response. He writes that because Demeyer has already done "the work to clean up the current implementation… it would be very sad if this chance was wasted, and we would have to remain with the current implementation". But it seems that Guido is not convinced.
According to Guido, until the Python core dev team has "elected a new BDFL or come up with some other process for accepting PEPs, no action will be taken on this PEP". The mail says "right now, there is no-one who can approve this PEP, and you will have to wait until 2019 until there is". Many Python users on Reddit are worried about this decision. All that's left to do now is wait for more updates from the python-dev community. Behnel states, "I just hope that Python development won't stall completely. Even if no formal action can be taken on this PEP (or any other), I hope that there will still be informal discussion". For more coverage on this news, check out the official mail posts on python-dev.
Aaron Lazar
21 Jul 2018
6 min read
The developer-tester face-off needs to end. It's putting our projects at risk.

Penny and Leonard work at the same company as a tester and developer respectively. Penny arrives home late to find Leonard on the couch with his legs up on the table, playing his favourite video game.

Leonard: Oh hi sweety, it looks like you had a long day at work.
Penny, throwing him a hostile, sideways glance, heads over to the refrigerator.
Penny: Did you remember to take out the garbage?
Leonard: Of course, sweety. I used 2 bags so Sheldon's Szechuan sauce from Szechuan Palace doesn't seep through.
Penny: Did you buy new shampoo for the bathroom?
Leonard: Yes, I picked up your regular one from the store on the way back.
Penny: And did you slap on a last-minute field on the SPA at work?
Leonard, pausing his video game and answering in a soft, high-pitched voice: Whaaaaaat?

Source: giphy

If you're a developer or a tester, you've probably been in this situation at least once, if not more. Even if your husband or wife might not be on the other side of the source code.

The war goes on...

The funny thing is that this isn't something that's happened only in the recent past. The war between developers and testers is a long-standing, unresolved battle that is usually brought up in bouts of unnecessary humor. The truth is that this battle is the cause of several projects slipping deadlines, teams not respecting each other's views, and so on. Here we'll discuss some of the main reasons for this disconnect and try to address them in the hope of making the office a better place.

#1 You talkin' to me?

One of the main reasons that developers and testers are not on the same page is that neither bothers to communicate effectively with the other. Each individual considers informing the other about the strategy and techniques used a waste of effort. Obviously, there are bound to be issues arising with such a disjointed team. The only way to resolve this problem is to toss egos out of the window, sit down, and resolve problems like professionals.
While tickets might be the most professional and efficient way to resolve things, walking up to the person (if possible) and discussing the best way forward lets you build a relationship and resolve things more effectively. Moreover, the person on the receiving end will not consider the move offensive or demeaning.

#2 Is it 'team' or 'teams'?

You know the answer to this one, but you're still not willing to accept it. IT managers and team leads need to create an environment in which developers and testers are not two separate teams. Rather, consider them all as engineers working in the same team, towards the same goal! There's no better recipe for success. Use modern methods like Mob or Pair Programming, where both developers and testers work together closely. The ideal scenario would be to have both team members work on the same machine, addressing and strategising to achieve the goal with continuous, real-time feedback.

A good pairing station. Source: ministry of testing

#3 On the same page? Which book you got there?

If you're a developer, this one's especially for you, so listen carefully! Most developers aren't aware of what tools the testers in their teams use, which is a sin. Being aware of testing tools, methodologies and processes goes a long way in enabling a smooth and speedy testing process. A developer will be able to understand which parts of their code can probably be a tester's target, what changes would give testers a tough time and, on the other hand, what makes it easy.

#4 One goal, two paths to achieve it

Well, this is true a lot of the time. Developers aim to "build" an application. Testers, on the other hand, aim to "break" the application. Now while this is not wrong, what matters is the vision with which the tester actually approaches breaking things. Testers, you should always keep the customer's or end user's requirements clear while approaching the application.
I may not be an actual tester, and you might wonder how I can empathise with other testers. Honestly, I don't build software, nor do I test any. But I've been in a very similar role earlier on in my publishing career. As a Commissioning Gatekeeper, I was responsible for validating book and video ideas from the Commissioning Editors. Like a tester, my job was to identify why or how something wouldn't work in the market. Now, I could easily approach a particular book idea from the perspective of 'trashing' it. But when I learned to approach it from the customer's point of view, my perspective changed, and I was able to give better constructive feedback to the editor. Don't aim to destroy, aim to improve. If you must kill off an idea or a feature, do it firmly but with kindness.

#5 Trust the developer's testing skills

Yes! Lack of trust is one of the main reasons why there's so much friction between developers and testers. A tester needs to understand the developer and believe that they can also write tests with a clear goal in mind. Test-Driven Development is a great approach to follow. Here, the developer will know better what angles to test from, and this can help the tester write a mutually defined test case for the developer to run. At the same time, the tester can also provide insight into how to address bugs that might creep up while running the tests. With this combined knowledge, the developers will be able to minimize the number of bugs at the first go itself! Toss in a Behaviour-Driven Development approach, and you've got yourself a team that delivers user stories that are more aligned to the business requirement than ever before! In the end, developers and testers both need to set their egos aside and make peace with each other. If you really look at it, it's not that hard at all. It's all about how the two collaborate to create better software, rather than working in silos.
IT managers can play an important role here, and they need to understand the advantages and limitations of their team. They need to ensure the unity of the team by encouraging more engaging ways of working, as well as introducing modern methodologies that would assist a peaceful, collaborative effort.
Amey Varangaonkar
20 Jul 2018
7 min read
Why Guido van Rossum quit as the Python chief (BDFL)

It was the proverbial 'end of an era' for Python as Guido van Rossum stepped down as the Python chief, almost three decades after he created the programming language. It came as a shock to many Python users and left a few bewildered. Many core developers thought this day might come, but they didn't expect it to come so soon. However, looking at the post that Guido shared with the community, does this decision really come as a surprise? In this article, we dive deep into the possibilities and the circumstances that could have played a major role in van Rossum's resignation.

*Disclaimer: The views presented in this article are based purely on our research. They are not to be considered as inputs directly received from the Python community or Guido van Rossum himself.

What can we make of Guido's post?

I'm pretty sure you've already read the mailing list post that Guido shared with the community last week. Aptly titled 'Transfer of Power', the mail straightaway begins on a negative note: "Now that PEP 572 is done, I don't ever want to have to fight so hard for a PEP and find that so many people despise my decisions." Some way to start a mail. The anger, disappointment and tiredness are quite evident. Guido goes on to state that he will be removing himself from all the decision-making processes and will be available only for a while as a core developer and a mentor. From the tone of the mail, the three main reasons for his departure can be figured out quite easily: Guido felt there were questions around his decision-making and overall administration capabilities. The backlash on PEP 572 is a testament to this. van Rossum is 62 now. Maybe the stress of leading this project for close to 30 years has finally taken a toll on his health, as he wryly talked about the piling medical issues.
This is also quite evident from the last sentence of his mail: "I'm tired, and need a very long break". Guido thinks this is the right time for the baton to be passed over to the other core committers. He leaves everything for the core developers to figure out - from finalizing the PEPs (Python Enhancement Proposals) to deciding how new core developers are inducted.

Understanding the backlash behind PEP 572

For a mature language such as Python, you'd think there wouldn't be much left to get excited about. However, a proposal to add a new feature to Python - PEP 572 - has caused a furore in the Python community in the last few months.

What PEP 572 is all about

The idea behind PEP 572 is quite simple - to allow assignment to variables within expressions. To make things simpler, consider the following lines of code in Python:

a = b   - this is a simple assignment statement
a == b  - this is a test for equality

With PEP 572 comes a brand new operator, :=, which is available in some other programming languages and which lets an assignment be used as part of an expression. So the way you would use this operator would be:

while a := b.read(10):
    print(a)

Looks like a simple statement, doesn't it? Keep printing a for as long as b.read(10) returns a non-empty result. So what's all the hue and cry about? In principle, the way := is used signifies that the value of an expression is assigned and returned to whatever code is using it, almost as if no assignment ever happened. This can get really tricky when complex expressions are involved. Ideally, an expression assignment is useful when one needs to retain the result of that expression while it is being used for some other purpose. The use of := is against this best practice, and has therefore led to many disagreements.

The community response to PEP 572

Many Python users thought PEP 572 was a bad idea due to the reasons mentioned above. They did not hide their feelings regarding this, either.
In fact, some of the comments were quite brutal. Even some of the core developers were unhappy with this proposal, saying it did not fit the fundamental Python best practice of preferring simplicity over complexity. This practice is part of PEP 20, titled 'The Zen of Python'. As the Python BDFL, van Rossum personally signed off each PEP. This is in stark contrast to how other programming languages such as PHP finalize their proposals, i.e., by voting on them. On the PEP 572 objections, Guido's response befitted that of a BDFL perfectly. Some developers still disagreed with this proposal, believing that it deviated from the standard best practices and rather reflected van Rossum's preferred style of coding. So much so that van Rossum had to ask the committers to give him time to respond to the queries. Eventually, PEP 572 was accepted by Guido van Rossum, as he settled the matter with the following note:

Thank you all. I will accept the PEP as is. I am happy to accept *clarification* updates to the PEP if people care to submit them as PRs to the peps repo, and that could even (to some extent) include summaries of discussion we've had, or outright rejected ideas. But even without any of those I think the PEP is very clear so I will not wait very long (maybe a week).

Normally, in the case of some other language, such an argument could have gone on forever, with both sides reluctant to give in. The progress of the language would be stuck in limbo as a result of this polarity. With Guido gone now, one cannot help but wonder if this is going to be the case with Python going forward. Could van Rossum have been pressurized less if he had adopted a consensus-based voting system to sign proposals off? And if that was the case, would the proposal still have gone through an opposing majority of core developers?

"Tired of the hatred"

It would be wrong to say that the BDFL quit mainly because of how working on PEP 572 left a bitter taste in his mouth.
However, it is fair to say that the negativity surrounding PEP 572 must've pushed van Rossum off the ledge finally. The fact that he thinks stepping down from his role as Python chief would mean people would not 'despise his decisions' must've played a major role in his announcement. Guido's decision to quit was rather an inevitable outcome of a series of past bad experiences accrued over the years with backlashes over his decisions on Python's direction. Leading one of the most successful and long-running open source projects in the world is no joke, and it brings more than its fair share of burden to carry. In many ways, CEOs of big tech companies have it easier. For starters, they have a lot of funding and they mainly worry about how to make their shareholders happy (make more money). More importantly, they aren't directly exposed to the end users the way open source leaders are, for every decision they make.

What's next for Guido?

Guido van Rossum isn't going away for good. His mail states that he will still be around as a core dev, and as a mentor to other budding developers for some time. He says he just wants to move away from the leadership role, away from all the responsibilities that once made him the BDFL. His tweet corroborates this: https://twitter.com/gvanrossum/status/1017546023227424768?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Etweet

Call him a dictator if you will, his contributions to Python cannot be taken away. From being a beginner's coding language to being used in enterprise applications, Python's rise under van Rossum as one of the most popular and versatile programming languages in the world has been incredible. Perhaps the time was right for the sun to set, and the PEP 572 scenario and the circumstances surrounding it might just have given Guido the platform to ride away into the sunset.