
How-To Tutorials - Languages

135 Articles

How to dockerize an ASP.NET Core application

Aaron Lazar
27 Apr 2018
5 min read
There are many reasons why you might want to dockerize an ASP.NET Core application. But ultimately, it's simply going to make life much easier for you. It's great for isolating components, especially if you're building a microservices or planning to deploy your application on the cloud. So, if you want an easier life (possibly) follow this tutorial to learn how to dockerize an ASP.NET Core application. Get started: Dockerize an ASP.NET Core application Create a new ASP.NET Core Web Application in Visual Studio 2017 and click OK: On the next screen, select Web Application (Model-View-Controller) or any type you like, while ensuring that ASP.NET Core 2.0 is selected from the drop-down list. Then check the Enable Docker Support checkbox. This will enable the OS drop-down list. Select Windows here and then click on the OK button: If you see the following message, you need to switch to Windows containers. This is because you have probably kept the default container setting for Docker as Linux: If you right-click on the Docker icon in the taskbar, you will see that you have an option to enable Windows containers there too. You can switch to Windows containers from the Docker icon in the taskbar by clicking on the Switch to Windows containers option: Switching to Windows containers may take several minutes to complete, depending on your line speed and the hardware configuration of your PC.If, however, you don't click on this option, Visual Studio will ask you to change to Windows containers when selecting the OS platform as Windows.There is a good reason that I am choosing Windows containers as the target OS. This reason will become clear later on in the chapter when working with Docker Hub and automated builds. After your ASP.NET Core application is created, you will see the following project setup in Solution Explorer: The Docker support that is added to Visual Studio comes not only in the form of the Dockerfile, but also in the form of the Docker configuration information. This information is contained in the global docker-compose.yml file at the solution level: 3. Clicking on the Dockerfile in Solution Explorer, you will see that it doesn't look complicated at all. Remember, the Dockerfile is the file that creates your image. The image is a read-only template that outlines how to create a Docker container. The Dockerfile, therefore, contains the steps needed to generate the image and run it. The instructions in the Dockerfile create layers in the image. This means that if anything changes in the Dockerfile, only the layers that have changed will be rebuilt when the image is rebuilt. The Dockerfile looks as follows: FROM microsoft/aspnetcore:2.0-nanoserver-1709 AS base WORKDIR /app EXPOSE 80 FROM microsoft/aspnetcore-build:2.0-nanoserver-1709 AS build WORKDIR /src COPY *.sln ./ COPY DockerApp/DockerApp.csproj DockerApp/ RUN dotnet restore COPY . . WORKDIR /src/DockerApp RUN dotnet build -c Release -o /app FROM build AS publish RUN dotnet publish -c Release -o /app FROM base AS final WORKDIR /app COPY --from=publish /app . ENTRYPOINT ["dotnet", "DockerApp.dll"] When you have a look at the menu in Visual Studio 2017, you will notice that the Run button has been changed to Docker: Clicking on the Docker button to debug your ASP.NET Core application, you will notice that there are a few things popping up in the Output window. Of particular interest is the IP address at the end. 
In my case, it reads Launching http://172.24.12.112 (yours will differ): When the browser is launched, you will see that the ASP.NET Core application is running at the IP address listed previously in the Output window. Your ASP.NET Core application is now running inside of a Windows Docker container: This is great and really easy to get started with. But what do you need to do to Dockerize an ASP.NET Core application that already exists? As it turns out, this isn't as difficult as you may think. How to add Docker support to an existing .NET Core application Imagine that you have an ASP.NET Core application without Docker support. To add Docker support to this existing application, simply add it from the context menu: To add Docker support to an existing ASP.NET Core application, you need to do the following: Right-click on your project in Solution Explorer Click on the Add menu item Click on Docker Support in the fly-out menu: Visual Studio 2017 now asks you what the target OS is going to be. In our case, we are going to target Windows: After clicking on the OK button, Visual Studio 2017 will begin to add the Docker support to your project: It's actually extremely easy to create ASP.NET Core applications that have Docker support baked in, and even easier to add Docker support to existing ASP.NET Core applications. Lastly, if you experience any issues, such as file access issues, ensure that your antivirus software has excluded your Dockerfile from scanning. Also, make sure that you run Visual Studio as Administrator. This tutorial has been taken from C# 7 and .NET Core Blueprints. More Docker tutorials Building Docker images using Dockerfiles How to install Keras on Docker and Cloud ML
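If you prefer to build and run the image outside of Visual Studio, the standard Docker CLI works from the folder that contains the Dockerfile. This is only a sketch: the image name (dockerapp) and container name (dockerapp-demo) are illustrative and are not taken from the tutorial above.

docker build -t dockerapp .
docker run -d --name dockerapp-demo dockerapp
docker inspect dockerapp-demo

The docker inspect output includes the container's network settings; for a Windows container, the IP address reported there is the one to browse to, just like the address Visual Studio prints in the Output window.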


Using R6 classes in R to retrieve live data for markets and wallets

Pravin Dhandre
23 Apr 2018
11 min read
In this tutorial, you will learn to create a simple requester to request external information from an API over the internet. You will also learn to develop exchange and wallet infrastructure using R programming. Creating a simple requester to isolate API calls Now, we will focus on how we actually retrieve live data. This functionality will also be implemented using R6 classes, as the interactions can be complex. First of all, we create a simple Requester class that contains the logic to retrieve data from JSON APIs found elsewhere in the internet and that will be used to get our live cryptocurrency data for wallets and markets. We don't want logic that interacts with external APIs spread all over our classes, so we centralize it here to manage it as more specialized needs come into play later. As you can see, all this object does is offer the public request() method, and all it does is use the formJSON() function from the jsonlite package to call a URL that is being passed to it and send the data it got back to the user. Specifically, it sends it as a dataframe when the data received from the external API can be coerced into dataframe-form. library(jsonlite) Requester <- R6Class( "Requester", public = list( request = function(URL) { return(fromJSON(URL)) } ) ) Developing our exchanges infrastructure Our exchanges have multiple markets inside, and that's the abstraction we will define now. A Market has various private attributes, as we saw before when we defined what data is expected from each file, and that's the same data we see in our constructor. It also offers a data() method to send back a list with the data that should be saved to a database. Finally, it provides setters and getters as required. Note that the setter for the price depends on what units are requested, which can be either usd or btc, to get a market's asset price in terms of US Dollars or Bitcoin, respectively: Market <- R6Class( "Market", public = list( initialize = function(timestamp, name, symbol, rank, price_btc, price_usd) { private$timestamp <- timestamp private$name <- name private$symbol <- symbol private$rank <- rank private$price_btc <- price_btc private$price_usd <- price_usd }, data = function() { return(list( timestamp = private$timestamp, name = private$name, symbol = private$symbol, rank = private$rank, price_btc = private$price_btc, price_usd = private$price_usd )) }, set_timestamp = function(timestamp) { private$timestamp <- timestamp }, get_symbol = function() { return(private$symbol) }, get_rank = function() { return(private$rank) }, get_price = function(base) { if (base == 'btc') { return(private$price_btc) } else if (base == 'usd') { return(private$price_usd) } } ), private = list( timestamp = NULL, name = "", symbol = "", rank = NA, price_btc = NA, price_usd = NA ) ) Now that we have our Market definition, we proceed to create our Exchange definition. This class will receive an exchange name as name and will use the exchange_requester_factory() function to get an instance of the corresponding ExchangeRequester. It also offers an update_markets() method that will be used to retrieve market data with the private markets() method and store it to disk using the timestamp and storage objects being passed to it. Note that instead of passing the timestamp through the arguments for the private markets() method, it's saved as a class attribute and used within the private insert_metadata() method. 
This technique provides cleaner code, since the timestamp does not need to be passed through each function and can be retrieved when necessary. The private markets() method calls the public markets() method in the ExchangeRequester instance saved in the private requester attribute (which was assigned to by the factory) and applies the private insert_metadata() method to update the timestamp for such objects with the one sent to the public update_markets() method call before sending them to be written to the database: source("./requesters/exchange-requester-factory.R", chdir = TRUE) Exchange <- R6Class( "Exchange", public = list( initialize = function(name) { private$requester <- exchange_requester_factory(name) }, update_markets = function(timestamp, storage) { private$timestamp <- unclass(timestamp) storage$write_markets(private$markets()) } ), private = list( requester = NULL, timestamp = NULL, markets = function() { return(lapply(private$requester$markets(), private$insert_metadata)) }, insert_metadata = function(market) { market$set_timestamp(private$timestamp) return(market) } ) ) Now, we need to provide a definition for our ExchangeRequester implementations. As in the case of the Database, this ExchangeRequester will act as an interface definition that will be implemented by the CoinMarketCapRequester. We see that the ExchangeRequester specifies that all exchange requester instances should provide a public markets() method, and that a list is expected from such a method. From context, we know that this list should contain Market instances. Also, each ExchangeRequester implementation will contain a Requester object by default, since it's being created and assigned to the requester private attribute upon class instantiation. Finally, each implementation will also have to provide a create_market() private method and will be able to use the request() private method to communicate to the Requester method request() we defined previously: source("../../../utilities/requester.R") KNOWN_ASSETS = list( "BTC" = "Bitcoin", "LTC" = "Litecoin" ) ExchangeRequester <- R6Class( "ExchangeRequester", public = list( markets = function() list() ), private = list( requester = Requester$new(), create_market = function(resp) NULL, request = function(URL) { return(private$requester$request(URL)) } ) ) Now we proceed to provide an implementation for CoinMarketCapRequester. As you can see, it inherits from ExchangeRequester, and it provides the required method implementations. Specifically, the markets() public method calls the private request() method from ExchangeRequester, which in turn calls the request() method from Requester, as we have seen, to retrieve data from the private URL specified. If you request data from CoinMarketCap's API by opening a web browser and navigating to the URL shown (https:/​/​api.​coinmarketcap.​com/​v1/​ticker), you will get a list of market data. That is the data that will be received in our CoinMarketCapRequester instance in the form of a dataframe, thanks to the Requester object, and will be transformed into numeric data where appropriate using the private clean() method, so that it's later used to create Market instances with the apply() function call, which in turn calls the create_market() private method. Note that the timestamp is set to NULL for all markets created this way because, as you may remember from our Exchange class, it's set before writing it to the database. 
There's no need to send the timestamp information all the way down to the CoinMarketCapRequester, since we can simply write at the Exchange level right before we send the data to the database: source("./exchange-requester.R") source("../market.R") CoinMarketCapRequester <- R6Class( "CoinMarketCapRequester", inherit = ExchangeRequester, public = list( markets = function() { data <- private$clean(private$request(private$URL)) return(apply(data, 1, private$create_market)) } ), private = list( URL = "https://api.coinmarketcap.com/v1/ticker", create_market = function(row) { timestamp <- NULL return(Market$new( timestamp, row[["name"]], row[["symbol"]], row[["rank"]], row[["price_btc"]], row[["price_usd"]] )) }, clean = function(data) { data$price_usd <- as.numeric(data$price_usd) data$price_btc <- as.numeric(data$price_btc) data$rank <- as.numeric(data$rank) return(data) } ) ) Finally, here's the code for our exchange_requester_factory(). As you can see, it's basically the same idea we have used for our other factories, and its purpose is to easily let us add more implementations for our ExchangeRequeseter by simply adding else-if statements in it: source("./coinmarketcap-requester.R") exchange_requester_factory <- function(name) { if (name == "CoinMarketCap") { return(CoinMarketCapRequester$new()) } else { stop("Unknown exchange name") } } Developing our wallets infrastructure Now that we are able to retrieve live price data from exchanges, we turn to our Wallet definition. As you can see, it specifies the type of private attributes we expect for the data that it needs to handle, as well as the public data() method to create the list of data that needs to be saved to a database at some point. It also provides getters for email, symbol, and address, and the public pudate_assets() method, which will be used to get and save assets into the database, just as we did in the case of Exchange. As a matter of fact, the techniques followed are exactly the same, so we won't explain them again: source("./requesters/wallet-requester-factory.R", chdir = TRUE) Wallet <- R6Class( "Wallet", public = list( initialize = function(email, symbol, address, note) { private$requester <- wallet_requester_factory(symbol, address) private$email <- email private$symbol <- symbol private$address <- address private$note <- note }, data = function() { return(list( email = private$email, symbol = private$symbol, address = private$address, note = private$note )) }, get_email = function() { return(as.character(private$email)) }, get_symbol = function() { return(as.character(private$symbol)) }, get_address = function() { return(as.character(private$address)) }, update_assets = function(timestamp, storage) { private$timestamp <- timestamp storage$write_assets(private$assets()) } ), private = list( timestamp = NULL, requester = NULL, email = NULL, symbol = NULL, address = NULL, note = NULL, assets = function() { return (lapply ( private$requester$assets(), private$insert_metadata)) }, insert_metadata = function(asset) { timestamp(asset) <- unclass(private$timestamp) email(asset) <- private$email return(asset) } ) ) Implementing our wallet requesters The WalletRequester will be conceptually similar to the ExchangeRequester. It will be an interface, and will be implemented in our BTCRequester and LTCRequester interfaces. As you can see, it requires a public method called assets() to be implemented and to return a list of Asset instances. 
It also requires a private create_asset() method to be implemented, which should return individual Asset instances, and a private url method that will build the URL required for the API call. It offers a request() private method that will be used by implementations to retrieve data from external APIs: source("../../../utilities/requester.R") WalletRequester <- R6Class( "WalletRequester", public = list( assets = function() list() ), private = list( requester = Requester$new(), create_asset = function() NULL, url = function(address) "", request = function(URL) { return(private$requester$request(URL)) } ) ) The BTCRequester and LTCRequester implementations are shown below for completeness, but will not be explained. If you have followed everything so far, they should be easy to understand: source("./wallet-requester.R") source("../../asset.R") BTCRequester <- R6Class( "BTCRequester", inherit = WalletRequester, public = list( initialize = function(address) { private$address <- address }, assets = function() { total <- as.numeric(private$request(private$url())) if (total > 0) { return(list(private$create_asset(total))) } return(list()) } ), private = list( address = "", url = function(address) { return(paste( "https://chainz.cryptoid.info/btc/api.dws", "?q=getbalance", "&a=", private$address, sep = "" )) }, create_asset = function(total) { return(new( "Asset", email = "", timestamp = "", name = "Bitcoin", symbol = "BTC", total = total, address = private$address )) } ) ) source("./wallet-requester.R") source("../../asset.R") LTCRequester <- R6Class( "LTCRequester", inherit = WalletRequester, public = list( initialize = function(address) { private$address <- address }, assets = function() { total <- as.numeric(private$request(private$url())) if (total > 0) { return(list(private$create_asset(total))) } return(list()) } ), private = list( address = "", url = function(address) { return(paste( "https://chainz.cryptoid.info/ltc/api.dws", "?q=getbalance", "&a=", private$address, sep = "" )) }, create_asset = function(total) { return(new( "Asset", email = "", timestamp = "", name = "Litecoin", symbol = "LTC", total = total, address = private$address )) } ) ) The wallet_requester_factory() works just as the other factories; the only difference is that in this case, we have two possible implementations that can be returned, which can be seen in the if statement. If we decided to add a WalletRequester for another cryptocurrency, such as Ether, we could simply add the corresponding branch here, and it should work fine: source("./btc-requester.R") source("./ltc-requester.R") wallet_requester_factory <- function(symbol, address) { if (symbol == "BTC") { return(BTCRequester$new(address)) } else if (symbol == "LTC") { return(LTCRequester$new(address)) } else { stop("Unknown symbol") } } Hope you enjoyed this interesting tutorial and were able to retrieve live data for your application. To know more, do check out the R Programming By Example and start handling data efficiently with modular, maintainable and expressive codes. Read More Introduction to R Programming Language and Statistical Environment 20 ways to describe programming in 5 words  
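To make the flow above concrete, here is a minimal usage sketch in R. It assumes the files defining Requester, Exchange, and their dependencies have been sourced as shown earlier, and that storage is an object exposing the write_markets() method expected by Exchange (the storage implementation is not part of this excerpt); the CoinMarketCap endpoint used in the tutorial may no longer be live.

# Ad-hoc request through the generic Requester
requester <- Requester$new()
ticker <- requester$request("https://api.coinmarketcap.com/v1/ticker")
head(ticker[, c("name", "symbol", "rank", "price_usd", "price_btc")])

# Full pipeline: retrieve markets and persist them through a storage object
exchange <- Exchange$new("CoinMarketCap")
exchange$update_markets(Sys.time(), storage)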


How to boost R codes using C++ and Fortran

Pravin Dhandre
18 Apr 2018
16 min read
Sometimes, R code just isn't fast enough. Maybe you've used profiling to figure out where your bottlenecks are, and you've done everything you can think of within R, but your code still isn't fast enough. In those cases, a useful alternative can be to delegate some parts of the implementation to Fortran or C++. This is an advanced technique that can often prove to be quite useful if know how to program in such languages. In today’s tutorial, we will try to explore techniques to boost R codes and calculations using efficient languages like Fortran and C++. Delegating code to other languages can address bottlenecks such as the following: Loops that can't be easily vectorized due to iteration dependencies Processes that involve calling functions millions of times Inefficient but necessary data structures that are slow in R Delegating code to other languages can provide great performance benefits, but it also incurs the cost of being more explicit and careful with the types of objects that are being moved around. In R, you can get away with simple things such as being imprecise about a number being an integer or a real. In these other languages, you can't; every object must have a precise type, and it remains fixed for the entire execution. Boost R codes using an old-school approach with Fortran We will start with an old-school approach using Fortran first. If you are not familiar with it, Fortran is the oldest programming language still under use today. It was designed to perform lots of calculations very efficiently and with very few resources. There are a lot of numerical libraries developed with it, and many high-performance systems nowadays still use it, either directly or indirectly. Here's our implementation, named sma_fortran(). The syntax may throw you off if you're not used to working with Fortran code, but it's simple enough to understand. First, note that to define a function technically known as a subroutine in Fortran, we use the subroutine keyword before the name of the function. As our previous implementations do, it receives the period and data (we use the dataa name with an extra a at the end because Fortran has a reserved keyword data, which we shouldn't use in this case), and we will assume that the data is already filtered for the correct symbol at this point. Next, note that we are sending new arguments that we did not send before, namely smas and n. Fortran is a peculiar language in the sense that it does not return values, it uses side effects instead. This means that instead of expecting something back from a call to a Fortran subroutine, we should expect that subroutine to change one of the objects that was passed to it, and we should treat that as our return value. In this case, smas fulfills that role; initially, it will be sent as an array of undefined real values, and the objective is to modify its contents with the appropriate SMA values. Finally, the n represents the number of elements in the data we send. Classic Fortran doesn't have a way to determine the size of an array being passed to it, and it needs us to specify the size manually; that's why we need to send n. In reality, there are ways to work around this, but since this is not a tutorial about Fortran, we will keep the code as simple as possible. Next, note that we need to declare the type of objects we're dealing with as well as their size in case they are arrays. 
We proceed to declare pos (which takes the place of position in our previous implementation, because Fortran imposes a limit on the length of each line, which we don't want to violate), n, endd (again, end is a keyword in Fortran, so we use the name endd instead), and period as integers. We also declare dataa(n), smas(n), and sma as reals because they will contain decimal parts. Note that we specify the size of the array with the (n) part in the first two objects. Once we have declared everything we will use, we proceed with our logic. We first create a for loop, which is done with the do keyword in Fortran, followed by a unique identifier (which are normally named with multiples of tens or hundreds), the variable name that will be used to iterate, and the values that it will take, endd and 1 to n in this case, respectively. Within the for loop, we assign pos to be equal to endd and sma to be equal to 0, just as we did in some of our previous implementations. Next, we create a while loop with the do…while keyword combination, and we provide the condition that should be checked to decide when to break out of it. Note that Fortran uses a very different syntax for the comparison operators. Specifically, the .lt. operator stand for less-than, while the .ge. operator stands for greater-than-or-equal-to. If any of the two conditions specified is not met, then we will exit the while loop. Having said that, the rest of the code should be self-explanatory. The only other uncommon syntax property is that the code is indented to the sixth position. This indentation has meaning within Fortran, and it should be kept as it is. Also, the number IDs provided in the first columns in the code should match the corresponding looping mechanisms, and they should be kept toward the left of the logic-code. For a good introduction to Fortran, you may take a look at Stanford's Fortran 77 Tutorial (https:/​/​web.​stanford.​edu/​class/​me200c/​tutorial_​77/​). You should know that there are various Fortran versions, and the 77 version is one of the oldest ones. However, it's also one of the better supported ones: subroutine sma_fortran(period, dataa, smas, n) integer pos, n, endd, period real dataa(n), smas(n), sma do 10 endd = 1, n pos = endd sma = 0.0 do 20 while ((endd - pos .lt. period) .and. (pos .ge. 1)) sma = sma + dataa(pos) pos = pos - 1 20 end do if (endd - pos .eq. period) then sma = sma / period else sma = 0 end if smas(endd) = sma 10 continue end Once your code is finished, you need to compile it before it can be executed within R. Compilation is the process of translating code into machine-level instructions. You have two options when compiling Fortran code: you can either do it manually outside of R or you can do it within R. The second one is recommended since you can take advantage of R's tools for doing so. However, we show both of them. The first one can be achieved with the following code: $ gfortran -c sma-delegated-fortran.f -o sma-delegated-fortran.so This code should be executed in a Bash terminal (which can be found in Linux or Mac operating systems). We must ensure that we have the gfortran compiler installed, which was probably installed when R was. Then, we call it, telling it to compile (using the -c option) the sma-delegated-fortran.f file (which contains the Fortran code we showed before) and provide an output file (with the -o option) named sma-delegatedfortran.so. Our objective is to get this .so file, which is what we need within R to execute the Fortran code. 
The way to compile within R, which is the recommended way, is to use the following line: system("R CMD SHLIB sma-delegated-fortran.f") It basically tells R to execute the command that produces a shared library derived from the sma-delegated-fortran.f file. Note that the system() function simply sends the string it receives to a terminal in the operating system, which means that you could have used that same command in the Bash terminal used to compile the code manually. To load the shared library into R's memory, we use the dyn.load() function, providing the location of the .so file we want to use, and to actually call the shared library that contains the Fortran implementation, we use the .Fortran() function. This function requires type checking and coercion to be explicitly performed by the user before calling it. To provide a similar signature as the one provided by the previous functions, we will create a function named sma_delegated_fortran(), which receives the period, symbol, and data parameters as we did before, also filters the data as we did earlier, calculates the length of the data and puts it in n, and uses the .Fortran() function to call the sma_fortran() subroutine, providing the appropriate parameters. Note that we're wrapping the parameters around functions that coerce the types of these objects as required by our Fortran code. The results list created by the .Fortran() function contains the period, dataa, smas, and n objects, corresponding to the parameters sent to the subroutine, with the contents left in them after the subroutine was executed. As we mentioned earlier, we are interested in the contents of the sma object since they contain the values we're looking for. That's why we send only that part back after converting it to a numeric type within R. The transformations you see before sending objects to Fortran and after getting them back is something that you need to be very careful with. For example, if instead of using single(n) and as.single(data), we use double(n) and as.double(data), our Fortran implementation will not work. This is something that can be ignored within R, but it can't be ignored in the case of Fortran: system("R CMD SHLIB sma-delegated-fortran.f") dyn.load("sma-delegated-fortran.so") sma_delegated_fortran <- function(period, symbol, data) { data <- data[which(data$symbol == symbol), "price_usd"] n <- length(data) results <- .Fortran( "sma_fortran", period = as.integer(period), dataa = as.single(data), smas = single(n), n = as.integer(n) ) return(as.numeric(results$smas)) } Just as we did earlier, we benchmark and test for correctness: performance <- microbenchmark( sma_12 <- sma_delegated_fortran(period, symboo, data), unit = "us" ) all(sma_1$sma - sma_12 <= 0.001, na.rm = TRUE) #> TRUE summary(performance)$me In this case, our median time is of 148.0335 microseconds, making this the fastest implementation up to this point. Note that it's barely over half of the time from the most efficient implementation we were able to come up with using only R. Take a look at the following table: Boost R codes using a modern approach with C++ Now, we will show you how to use a more modern approach using C++. The aim of this section is to provide just enough information for you to start experimenting using C++ within R on your own. We will only look at a tiny piece of what can be done by interfacing R with C++ through the Rcpp package (which is installed by default in R), but it should be enough to get you started. 
If you have never heard of C++, it's a language used mostly when resource restrictions play an important role and performance optimization is of paramount importance. Some good resources to learn more about C++ are Meyers' books on the topic, a popular one being Effective C++ (Addison-Wesley, 2005), and specifically for the Rcpp package, Eddelbuettel's Seamless R and C++ Integration with Rcpp (Springer, 2013) is great. Before we continue, you need to ensure that you have a C++ compiler on your system. On Linux, you should be able to use gcc. On Mac, you should install Xcode from the App Store. On Windows, you should install Rtools. Once you test your compiler and know that it's working, you should be able to follow this section. We'll cover more on how to do this in Appendix, Required Packages. C++ is more readable than Fortran code because it follows more syntax conventions we're used to nowadays. However, just because the example we will use is readable, don't think that C++ in general is an easy language to use; it's not. It's a very low-level language and using it correctly requires a good amount of knowledge. Having said that, let's begin. The #include <Rcpp.h> line is used to bring variable and function definitions from R into this file when it's compiled. Literally, the contents of the Rcpp.h file are pasted right where the include statement is. Files ending with the .h extension are called header files, and they are used to provide some common definitions between a code's user and its developers. The using namespace Rcpp line allows you to use shorter names for your functions. Instead of having to specify Rcpp::NumericVector, we can simply use NumericVector to define the type of the data object. Doing so in this example may not be too beneficial, but when you start developing more complex C++ code, it will really come in handy. Next, you will notice the // [[Rcpp::export(sma_delegated_cpp)]] code. This is a tag that marks the function right below it so that R knows that it should import it and make it available within R code. The argument sent to export() is the name of the function that will be accessible within R, and it does not necessarily have to match the name of the function in C++. In this case, sma_delegated_cpp() will be the function we call within R, and it will call the smaDelegated() function within C++: #include <Rcpp.h> using namespace Rcpp; // [[Rcpp::export(sma_delegated_cpp)]] NumericVector smaDelegated(int period, NumericVector data) { int position, n = data.size(); NumericVector result(n); double sma; for (int end = 0; end < n; end++) { position = end; sma = 0; while(end - position < period && position >= 0) { sma = sma + data[position]; position = position - 1; } if (end - position == period) { sma = sma / period; } else { sma = NA_REAL; } result[end] = sma; } return result; } Next, we will explain the actual smaDelegated() function. Since you have a good idea of what it's doing at this point, we won't explain its logic, only the syntax that is not so obvious. The first thing to note is that the function name has a keyword before it, which is the type of the return value for the function. In this case, it's NumericVector, which is provided in the Rcpp.h file. This is an object designed to interface vectors between R and C++. Other types of vectors provided by Rcpp are IntegerVector, LogicalVector, and CharacterVector. You also have IntegerMatrix, NumericMatrix, LogicalMatrix, and CharacterMatrix available. 
Next, you should note that the parameters received by the function also have types associated with them. Specifically, period is an integer (int), and data is NumericVector, just like the output of the function. In this case, we did not have to pass the output or length objects as we did with Fortran. Since functions in C++ do have output values, it also has an easy enough way of computing the length of objects. The first line in the function declare a variables position and n, and assigns the length of the data to the latter one. You may use commas, as we do, to declare various objects of the same type one after another instead of splitting the declarations and assignments into its own lines. We also declare the vector result with length n; note that this notation is similar to Fortran's. Finally, instead of using the real keyword as we do in Fortran, we use the float or double keyword here to denote such numbers. Technically, there's a difference regarding the precision allowed by such keywords, and they are not interchangeable, but we won't worry about that here. The rest of the function should be clear, except for maybe the sma = NA_REAL assignment. This NA_REAL object is also provided by Rcpp as a way to denote what should be sent to R as an NA. Everything else should result familiar. Now that our function is ready, we save it in a file called sma-delegated-cpp.cpp and use R's sourceCpp() function to bring compile it for us and bring it into R. The .cpp extension denotes contents written in the C++ language. Keep in mind that functions brought into R from C++ files cannot be saved in a .Rdata file for a later session. The nature of C++ is to be very dependent on the hardware under which it's compiled, and doing so will probably produce various errors for you. Every time you want to use a C++ function, you should compile it and load it with the sourceCpp() function at the moment of usage. library(Rcpp) sourceCpp("./sma-delegated-cpp.cpp") sma_delegated_cpp <- function(period, symbol, data) { data <- as.numeric(data[which(data$symbol == symbol), "price_usd"]) return(sma_cpp(period, data)) } If everything worked fine, our function should be usable within R, so we benchmark and test for correctness. I promise this is the last one: performance <- microbenchmark( sma_13 <- sma_delegated_cpp(period, symboo, data), unit = "us" ) all(sma_1$sma - sma_13 <= 0.001, na.rm = TRUE) #> TRUE summary(performance)$median #> [1] 80.6415 This time, our median time was 80.6415 microseconds, which is three orders of magnitude faster than our first implementation. Think about it this way: if you provide an input for sma_delegated_cpp() so that it took around one hour for it to execute, sma_slow_1() would take around 1,000 hours, which is roughly 41 days. Isn't that a surprising difference? When you are in situations that take that much execution time, it's definitely worth it to try and make your implementations as optimized as possible. You may use the cppFunction() function to write your C++ code directly inside an .R file, but you should not do so. Keep that just for testing small pieces of code. Separating your C++ implementation into its own files allows you to use the power of your editor of choice (or IDE) to guide you through the development as well as perform deeper syntax checks for you. You read an excerpt from R Programming By Example authored by Omar Trejo Navarro. This book provides step-by-step guide to build simple-to-advanced applications through examples in R using modern tools. 
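For comparison, the pure-R implementations mentioned in the benchmarks (sma_slow_1() and the faster R-only versions) are not reproduced in this excerpt. A minimal sketch of the same SMA logic written directly in R, assuming the same data layout used by the delegated versions, could look like the following; it is meant only to make the comparison concrete and is not the book's exact code.

sma_r <- function(period, symbol, data) {
    # Keep only the prices for the requested symbol, as the delegated versions do
    prices <- as.numeric(data[which(data$symbol == symbol), "price_usd"])
    n <- length(prices)
    smas <- rep(NA_real_, n)
    for (end in seq_len(n)) {
        # A value is produced only once a full window of 'period' observations exists
        if (end >= period) {
            smas[end] <- mean(prices[(end - period + 1):end])
        }
    }
    return(smas)
}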
Getting Inside a C++ Multithreaded Application Understanding the Dependencies of a C++ Application    


26 new Java 9 enhancements you will love

Aarthi Kumaraswamy
09 Apr 2018
11 min read
Java 9 represents a major release and consists of a large number of internal changes to the Java platform. Collectively, these internal changes represent a tremendous set of new possibilities for Java developers, some stemming from developer requests, others from Oracle-inspired enhancements. In this post, we will review 26 of the most important changes. Each change is related to a JDK Enhancement Proposal (JEP). JEPs are indexed and housed at openjdk.java.net/jeps/0. You can visit this site for additional information on each JEP. [box type="note" align="" class="" width=""]The JEP program is part of Oracle's support for open source, open innovation, and open standards. While other open source Java projects can be found, OpenJDK is the only one supported by Oracle. [/box] These changes have several impressive implications, including: Heap space efficiencies Memory allocation Compilation process improvements Type testing Annotations Automated runtime compiler tests Improved garbage collection 26 Java 9 enhancements you should know Improved Contended Locking [JEP 143] The general goal of JEP 143 was to increase the overall performance of how the JVM manages contention over locked Java object monitors. The improvements to contended locking were all internal to the JVM and do not require any developer actions to benefit from them. The overall improvement goals were related to faster operations. These include faster monitor enter, faster monitor exit, and faster notifications. 2. Segmented code cache [JEP 197] The segmented code cache JEP (197) upgrade was completed and results in faster, more efficient execution time. At the core of this change was the segmentation of the code cache into three distinct segments--non-method, profiled, and non-profiled code. 3. Smart Java compilation, phase two [JEP 199] The JDK Enhancement Proposal 199 is aimed at improving the code compilation process. All Java developers will be familiar with the javac tool for compiling source code to bytecode, which is used by the JVM to run Java programs. Smart Java Compilation, also referred to as Smart Javac and sjavac, adds a smart wrapper around the javac process. Perhaps the core improvement sjavac adds is that only the necessary code is recompiled. [box type="shadow" align="" class="" width=""]Check out this tutorial to know how you can recognize patterns with neural networks in Java.[/box] 4. Resolving Lint and Doclint warnings [JEP 212] Both Lint and Doclint report errors and warnings during the compile process. Resolution of these warnings was the focus of JEP 212. When using core libraries, there should not be any warnings. This mindset led to JEP 212, which has been resolved and implemented in Java 9. 5. Tiered attribution for javac [JEP 215] JEP 215 represents an impressive undertaking to streamline javac's type checking schema. In Java 8, type checking of poly expressions is handled by a speculative attribution tool. The goal with JEP 215 was to change the type checking schema to create faster results. The new approach, released with Java 9, uses a tiered attribution tool. This tool implements a tiered approach for type checking argument expressions for all method calls. Permissions are also made for method overriding. 6. Annotations pipeline 2.0 [JEP 217] Java 8 related changes impacted Java annotations but did not usher in a change to how javac processed them. There were some hardcoded solutions that allowed javac to handle the new annotations, but they were not efficient. 
Moreover, this type of coding (hardcoding workarounds) is difficult to maintain. So, JEP 217 focused on refactoring the javac annotation pipeline. This refactoring was all internal to javac, so it should not be evident to developers. 7. New version-string scheme [JEP 223] Prior to Java 9, the release numbers did not follow industry standard versioning--semantic versioning. Oracle has embraced semantic versioning for Java 9 and beyond. For Java, a major-minor-security schema will be used for the first three elements of Java version numbers: Major: A major release consisting of a significant new set of features Minor: Revisions and bug fixes that are backward compatible Security: Fixes deemed critical to improving security 8. Generating run-time compiler tests automatically [JEP 233] The purpose of JEP 233 was to create a tool that could automate the runtime compiler tests. The tool that was created starts by generating a random set of Java source code and/or byte code. The generated code will have three key characteristics: Be syntactically correct Be semantically correct Use a random seed that permits reusing the same randomly-generated code 9. Testing class-file attributes generated by Javac [JEP 235] Prior to Java 9, there was no method of testing a class-file's attributes. Running a class and testing the code for anticipated or expected results was the most commonly used method of testing javac generated class-files. This technique falls short of testing to validate the file's attributes. The lack of, or insufficient, capability to create tests for class-file attributes was the impetus behind JEP 235. The goal is to ensure javac creates a class-file's attributes completely and correctly. 10. Storing interned strings in CDS archives [JEP 250] CDS archives now allocate specific space on the heap for strings: The string space is mapped using a shared-string table, hash tables, and deduplication. 11. Preparing JavaFX UI controls and CSS APIs for modularization [JEP 253] Prior to Java 9, JavaFX controls as well as CSS functionality were only available to developers by interfacing with internal APIs. Java 9's modularization has made the internal APIs inaccessible. Therefore, JEP 253 was created to define public, instead of internal, APIs. This was a larger undertaking than it might seem. Here are a few actions that were taken as part of this JEP: Moving javaFX control skins from the internal to public API (javafx.scene.skin) Ensuring API consistencies Generation of a thorough javadoc 12. Compact strings [JEP 254] The string data type is an important part of nearly every Java app. While JEP 254's aim was to make strings more space-efficient, it was approached with caution so that existing performance and compatibilities would not be negatively impacted. Starting with Java 9, strings are now internally represented using a byte array along with a flag field for encoding references. 13. Merging selected Xerces 2.11.0 updates into JAXP [JEP 255] Xerces is a library used for parsing XML in Java. It was updated to 2.11.0 in late 2010, so JEP 255's aim was to update JAXP to incorporate changes in Xerces 2.11.0. 14. Updating JavaFX/Media to a newer version of GStreamer [JEP 257] The purpose of JEP 257 was to ensure JavaFX/Media was updated to include the latest release of GStreamer for stability, performance, and security assurances. 
GStreamer is a multimedia processing framework that can be used to build systems that take in media from several different formats and, after processing, export them in selected formats. 15. HarfBuzz Font-Layout Engine [JEP 258] Prior to Java 9, the layout engine used to handle font complexities; specifically, fonts that have rendering behaviors beyond what the common Latin fonts have. Java used the uniform client interface, also referred to as ICU, as the defacto text rendering tool. The ICU layout engine has been depreciated and, in Java 9, has been replaced with the HarfBuzz font layout engine. HarfBuzz is an OpenType text rendering engine. This type of layout engine has the characteristic of providing script-aware code to help ensure text is laid out as desired. 16. HiDPI graphics on Windows and Linux [JEP 263] JEP 263 was focused on ensuring the crispness of on-screen components, relative to the pixel density of the display. The following terms are relevant to this JEP and are provided along with the below listed descriptive information: DPI-aware application: An application that is able to detect and scale images for the display's specific pixel density DPI-unaware application: An application that makes no attempt to detect and scale images for the display's specific pixel density HiDPI graphics: High dots-per-inch graphics Retina display: This term was created by Apple to refer to displays with a pixel density of at least 300 pixels per inch Prior to Java 9, automatic scaling and sizing were already implemented in Java for the Mac OS X operating system. This capability was added in Java 9 for Windows and Linux operating systems. 17. Marlin graphics renderer [JEP 265] JEP 265 replaced the Pisces graphics rasterizer with the Marlin graphics renderer in the Java 2D API. This API is used to draw 2D graphics and animations. The goal was to replace Pisces with a rasterizer/renderer that was much more efficient and without any quality loss. This goal was realized in Java 9. An intended collateral benefit was to include a developer-accessible API. Previously, the means of interfacing with the AWT and Java 2D was internal. 18. Unicode 8.0.0 [JEP 267] Unicode 8.0.0 was released on June 17, 2015. JEP 267 focused on updating the relevant APIs to support Unicode 8.0.0. In order to fully comply with the new Unicode standard, several Java classes were updated. The following listed classes were updated for Java 9 to comply with the new Unicode standard: java.awt.font.NumericShaper java.lang.Character java.lang.String java.text.Bidi java.text.BreakIterator java.text.Normalizer 19. Reserved stack areas for critical sections [JEP 270] The goal of JEP 270 was to mitigate problems stemming from stack overflows during the execution of critical sections. This mitigation took the form of reserving additional thread stack space. [box type="shadow" align="" class="" width=""]Are you looking out for running parallel data operations using Java streams, check out this post for more details.[/box] 20. Dynamic linking of language-defined object models [JEP 276] Java interoperability was enhanced with JEP 276. The necessary JDK changes were made to permit runtime linkers from multiple languages to coexist in a single JVM instance. This change applies to high-level operations, as you would expect. An example of a relevant high-level operation is the reading or writing of a property with elements such as accessors and mutators. The high-level operations apply to objects of unknown types. 
They can be invoked with INVOKEDYNAMIC instructions. Here is an example of calling an object's property when the object's type is unknown at compile time:   INVOKEDYNAMIC "dyn:getProp:age" 21. Additional tests for humongous objects in G1 [JEP 278] One of the long-favored features of the Java platform is the behind the scenes garbage collection. JEP 278's focus was to create additional WhiteBox tests for humongous objects as a feature of the G1 garbage collector. 22. Improving test-failure troubleshooting [JEP 279] For developers that do a lot of testing, JEP 279 is worth reading about. Additional functionality has been added in Java 9 to automatically collect information to support troubleshooting test failures as well as timeouts. Collecting readily available diagnostic information during tests stands to provide developers and engineers with greater fidelity in their logs and other output. 23. Optimizing string concatenation [JEP 280] JEP 280 is an interesting enhancement for the Java platform. Prior to Java 9, string concatenation was translated by javac into StringBuilder : : append chains. This was a sub-optimal translation methodology often requiring StringBuilder presizing. The enhancement changed the string concatenation bytecode sequence, generated by javac, so that it uses INVOKEDYNAMIC calls. The purpose of the enhancement was to increase optimization and to support future optimizations without the need to reformat the javac's bytecode. 24. HotSpot C++ unit-test framework [JEP 281] HotSpot is the name of the JVM. This Java enhancement was intended to support the development of C++ unit tests for the JVM. Here is a partial, non-prioritized, list of goals for this enhancement: Command-line testing Create appropriate documentation Debug compile targets Framework elasticity IDE support Individual and isolated unit testing Individualized test results Integrate with existing infrastructure Internal test support Positive and negative testing Short execution time testing Support all JDK 9 build platforms Test compile targets Test exclusion Test grouping Testing that requires the JVM to be initialized Tests co-located with source code Tests for platform-dependent code Write and execute unit testing (for classes and methods) This enhancement is evidence of the increasing extensibility. 25. Enabling GTK 3 on Linux [JEP 283] GTK+, formally known as the GIMP toolbox, is a cross-platform tool used for creating Graphical User Interfaces (GUI). The tool consists of widgets accessible through its API. JEP 283's focus was to ensure GTK 2 and GTK 3 were supported on Linux when developing Java applications with graphical components. The implementation supports Java apps that employ JavaFX, AWT, and Swing. 26. New HotSpot build system [JEP 284] The Java platform used, prior to Java 9, was a build system riddled with duplicate code, redundancies, and other inefficiencies. The build system has been reworked for Java 9 based on the build-infra framework. In this context, infra is short for infrastructure. The overarching goal for JEP 284 was to upgrade the build system to one that was simplified. Specific goals included: Leverage existing build system Maintainable code Minimize duplicate code Simplification Support future enhancements Summary We explored some impressive new features of the Java platform, with a specific focus on javac, JDK libraries, and various test suites. 
Memory management improvements, including heap space efficiencies, memory allocation, and improved garbage collection represent a powerful new set of Java platform enhancements. Changes regarding the compilation process resulting in greater efficiencies were part of this discussion We also covered important improvements, such as with the compilation process, type testing, annotations, and automated runtime compiler tests. You just enjoyed an excerpt from the book, Mastering Java 9 written by By Dr. Edward Lavieri and Peter Verhas.  
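As a quick illustration of the new version-string scheme from JEP 223, Java 9 also exposes the parsed version through the Runtime.version() API. The following is a small sketch; the version numbers shown in the comments are only examples.

public class VersionDemo {
    public static void main(String[] args) {
        // Java 9 parses the major-minor-security scheme introduced by JEP 223
        Runtime.Version version = Runtime.version();
        System.out.println("Full version: " + version);        // e.g. 9.0.4
        System.out.println("Major: " + version.major());       // e.g. 9
        System.out.println("Minor: " + version.minor());       // e.g. 0
        System.out.println("Security: " + version.security()); // e.g. 4
    }
}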


Consuming Diagnostic Analyzers in .NET projects

Packt
20 Feb 2018
6 min read
We know how to write diagnostic analyzers to analyze and report issues about .NET source code and contribute them to the .NET developer community. In this article by the author Manish Vasani, of the book Roslyn Cookbook, we will show you how to search, install, view and configure the analyzers that have already been published by various analyzer authors on NuGet and VS Extension gallery. We will cover the following recipes: (For more resources related to this topic, see here.) Searching and installing analyzers through the NuGet package manager. Searching and installing VSIX analyzers through the VS extension gallery. Viewing and configuring analyzers in solution explorer in Visual Studio. Using ruleset file and ruleset editor to configure analyzers. Diagnostic analyzers are extensions to the Roslyn C# compiler and Visual Studio IDE to analyze user code and report diagnostics. User will see these diagnostics in the error list after building the project from Visual Studio and even when building the project on the command line. They will also see the diagnostics live while editing the source code in the Visual Studio IDE. Analyzers can report diagnostics to enforce specific code styles, improve code quality and maintenance, recommend design guidelines or even report very domain specific issues which cannot be covered by the core compiler. Analyzers can be installed to a .NET project either as a NuGet package or as a VSIX. To get a better understanding of these packaging schemes and learn about the differences in the analyzer experience when installed as a NuGet package versus a VSIX. Analyzers are supported on various different flavors of .NET standard, .NET core and .NET framework projects, for example, class library, console app, etc. Searching and installing analyzers through the NuGet package manager In this recipe we will show you how to search and install analyzer NuGet packages in the NuGet package manager in Visual Studio and see how the analyzer diagnostics from an installed NuGet package light up in project build and as live diagnostics during code editing in Visual Studio. Getting ready You will need to have Visual Studio 2017 installed on your machine to this recipe. You can install a free community version of Visual Studio 2017 from https://www.visualstudio.com/thank-you-downloading-visual-studio/?sku=Community&rel=15.  How to do it… Create a C# class library project, say ClassLibrary, in Visual Studio 2017. In solution explorer, right click on the solution or project node and execute Manage NuGet Packages command.  This brings up the NuGet Package Manager, which can be used to search and install NuGet packages to the solution or project. In the search bar type the following text to find NuGet packages tagged as analyzers: Tags:"analyzers" Note that some of the well known packages are tagged as analyzer, so you may also want to search:Tags:"analyzer" Check or uncheck the Include prerelease checkbox to the right of the search bar to search or hide the prerelease analyzer packages respectively. The packages are listed based on the number of downloads, with the highest downloaded package at the top. Select a package to install, say System.Runtime.Analyzers, and pick a specific version, say 1.1.0, and click Install. Click on I Accept button on the License Acceptance dialog to install the NuGet package. Verify the installed analyzer(s) show up under the Analyzers node in the solution explorer. 
Verify the project file has a new ItemGroup with the following analyzer references from the installed analyzer package: <ItemGroup> <Analyzer Include="..packagesSystem.Runtime.Analyzers.1.1.0analyzersdotnetcsSystem.Runtime.Analyzers.dll" /> <Analyzer Include="..packagesSystem.Runtime.Analyzers.1.1.0analyzersdotnetcsSystem.Runtime.CSharp.Analyzers.dll" /> </ItemGroup> Add the following code to your C# project: namespace ClassLibrary { public class MyAttribute : System.Attribute { } } Verify the analyzer diagnostic from the installed analyzer is shown in the error list: Open a Visual Studio 2017 Developer Command Prompt and build the project to verify that the analyzer is executed on the command line build and the analyzer diagnostic is reported: Create a new C# project in VS2017 and add the same code to it as step 9 and verify no analyzer diagnostic shows up in error list or command line, confirming that the analyzer package was only installed to the selected project in steps 1-6. Note that CA1018 (Custom attribute should have AttributeUsage defined) has been moved to a separate analyzer assembly in future versions of FxCop/System.Runtime.Analyzers package. It is recommended that you install Microsoft.CodeAnalysis.FxCopAnalyzers NuGet package to get the latest group of Microsoft recommended analyzers. Searching and installing VSIX analyzers through the VS extension gallery In this recipe we will show you how to search and install analyzer VSIX packages in the Visual Studio Extension manager and see how the analyzer diagnostics from an installed VSIX light up as live diagnostics during code editing in Visual Studio. Getting ready You will need to have Visual Studio 2017 installed on your machine to this recipe. You can install a free community version of Visual Studio 2017 from https://www.visualstudio.com/thank-you-downloading-visual-studio/?sku=Community&rel=15. How to do it… Create a C# class library project, say ClassLibrary, in Visual Studio 2017. From the top level menu, execute Tools | Extensions and Updates Navigate to Online | Visual Studio Marketplace on the left tab of the dialog to view the available VSIXes in the Visual Studio extension gallery/marketplace. Search analyzers in the search text box in the upper right corner of the dialog and download an analyzer VSIX, say Refactoring Essentials for Visual Studio. Once the download completes, you will get a message at the bottom of the dialog that the install will be scheduled to execute once Visual Studio and related windows are closed. Close the dialog and then close the Visual Studio instance to start the install. In the VSIX Installer dialog, click Modify to start installation. The subsequent message prompts you to kill all the active Visual Studio and satellite processes. Save all your relevant work in all the open Visual Studio instances, and click End Tasks to kill these processes and install the VSIX. After installation, restart VS, click Tools | Extensions And Updates, and verify Refactoring Essentials VSIX is installed. Create a new C# project with the following source code and verify analyzer diagnostic RECS0085 (Redundant array creation expression) in the error list: namespace ClassLibrary { public class Class1 { void Method() { int[] values = new int[] { 1, 2, 3 }; } } } Build the project from Visual Studio 2017 or command line and confirm no analyzer diagnostic shows up in the Output Window or the command line respectively, confirming that the VSIX analyzer did not execute as part of the build. 
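For completeness, the CA1018 diagnostic reported in the first recipe is addressed by adding an AttributeUsage attribute to the custom attribute shown earlier. A minimal sketch of the corrected code follows; the specific AttributeTargets value is only an example, not something prescribed by the analyzer.

using System;

namespace ClassLibrary
{
    // Declaring AttributeUsage resolves CA1018 (custom attribute should have AttributeUsage defined)
    [AttributeUsage(AttributeTargets.Class, AllowMultiple = false)]
    public class MyAttribute : Attribute
    {
    }
}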

Games and Exercises

Packt
09 Aug 2017
3 min read
In this article by Shishira Bhat and Ravi Wray, authors of the book Learn Java in 7 Days, we will study the following concepts: making an object the return type for a method, and making an object the parameter for a method.

Let's start this article by revisiting reference variables and custom data types. In the preceding program, p is a variable of the data type Pen. Yes! Pen is a class, but it is also a data type, a custom data type. The p variable stores the address of the Pen object, which is in heap memory. The p variable is a reference that refers to a Pen object. Now, let's get more comfortable by understanding and working with examples.

How to return an object from a method?

In this section, let's understand return types. In the following code, the methods return inbuilt data types (int and String), and the reason is explained after each method, as follows: int add () { int res = (20+50); return res; } The add method returns the res (70) variable, which is of the int type. Hence, the return type must be int: String sendsms () { String msg = "hello"; return msg; } The sendsms method returns a variable by the name of msg, which is of the String type. Hence, the return type is String. The data type of the returned value and the return type must be the same. In the following code snippet, the return type of the givePen method is not an inbuilt data type; rather, the return type is a class (Pen). Let's understand the following code: the givePen () method returns a variable (a reference variable) by the name of p, which is of the Pen type. Hence, the return type is Pen. In the preceding program, tk is a variable of the Ticket type. The method returns tk; hence, the return type of the method is Ticket.

A method accepting an object (parameter)

After seeing how a method can return an object/reference, let's understand how a method can take an object/reference as input, that is, as a parameter. We already understood that if a method takes parameter(s), then we need to pass argument(s). In the preceding program, the method takes two parameters, i and k. So, while calling/invoking the method, we need to pass two arguments, which are 20.5 and 15. The parameter type and the argument type must be the same. Remember that when a class is the data type, an object of that class is the data. Consider the following example with respect to a non-primitive/class data type and an object as its data: in the preceding code, the Kid class has the eat method, which takes ch as a parameter of the Chocolate type; that is, the data type of ch is Chocolate, which is a class. When a class is the data type, an object of that class is the actual data or argument. Hence, new Chocolate() is passed as an argument to the eat method. Let's see one more example: the drink method takes wtr as a parameter of the type Water, which is a class/non-primitive type; hence, the argument must be an object of the Water class.

Summary

In this article we learned what to return when a class is the return type of a method, and what to pass as an argument when a class is the parameter type of a method.
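To tie both ideas together, here is a small, self-contained sketch. The Pen, Kid, and Chocolate classes and the givePen and eat methods follow the discussion above; the Demo wrapper class and the print statements are assumptions added so that the example compiles and runs:

class Pen { }

class Chocolate { }

class Kid {
    // A method whose parameter type is a class: the argument must be an object of that class
    void eat(Chocolate ch) {
        System.out.println("Kid ate a " + ch.getClass().getSimpleName());
    }
}

public class Demo {

    // A method whose return type is a class: it returns a reference to a Pen object
    static Pen givePen() {
        Pen p = new Pen();   // p refers to a Pen object on the heap
        return p;
    }

    public static void main(String[] args) {
        Pen p = givePen();              // an object returned from a method
        System.out.println("Received: " + p.getClass().getSimpleName());

        Kid kid = new Kid();
        kid.eat(new Chocolate());       // an object passed as an argument
    }
}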

Ruby Strings

Packt
06 Jul 2017
9 min read
In this article by Jordan Hudgens, the author of the book Comprehensive Ruby Programming, you'll learn about the Ruby String data type and walk through how to integrate string data into a Ruby program. Working with words, sentences, and paragraphs are common requirements in many applications. Additionally you learn how to: Employ string manipulation techniques using core Ruby methods Demonstrate how to work with the string data type in Ruby (For more resources related to this topic, see here.) Using strings in Ruby A string is a data type in Ruby and contains set of characters, typically normal English text (or whatever natural language you're building your program for), that you would write. A key point for the syntax of strings is that they have to be enclosed in single or double quotes if you want to use them in a program. The program will throw an error if they are not wrapped inside quotation marks. Let's walk through three scenarios. Missing quotation marks In this code I tried to simply declare a string without wrapping it in quotation marks. As you can see, this results in an error. This error is because Ruby thinks that the values are classes and methods. Printing strings In this code snippet we're printing out a string that we have properly wrapped in quotation marks. Please note that both single and double quotation marks work properly. It's also important that you do not mix the quotation mark types. For example, if you attempted to run the code: puts "Name an animal' You would get an error, because you need to ensure that every quotation mark is matched with a closing (and matching) quotation mark. If you start a string with double quotation marks, the Ruby parser requires that you end the string with the matching double quotation marks. Storing strings in variables Lastly in this code snippet we're storing a string inside of a variable and then printing the value out to the console. We'll talk more about strings and string interpolation in subsequent sections. String interpolation guide for Ruby In this section, we are going to talk about string interpolation in Ruby. What is string interpolation? So what exactly is string interpolation? Good question. String interpolation is the process of being able to seamlessly integrate dynamic values into a string. Let's assume we want to slip dynamic words into a string. We can get input from the console and store that input into variables. From there we can call the variables inside of a pre-existing string. For example, let's give a sentence the ability to change based on a user's input. puts "Name an animal" animal = gets.chomp puts "Name a noun" noun= gets.chomp p "The quick brown #{animal} jumped over the lazy #{noun} " Note the way I insert variables inside the string? They are enclosed in curly brackets and are preceded by a # sign. If I run this code, this is what my output will look: So, this is how you insert values dynamically in your sentences. If you see sites like Twitter, it sometimes displays personalized messages such as: Good morning Jordan or Good evening Tiffany. This type of behavior is made possible by inserting a dynamic value in a fixed part of a string and leverages string interpolation. Now, let's use single quotes instead of double quotes, to see what happens. As you'll see, the string was printed as it is without inserting the values for animal and noun. This is exactly what happens when you try using single quotes—it prints the entire string as it is without any interpolation. 
Therefore it's important to remember the difference. Another interesting aspect is that anything inside the curly brackets can be a Ruby script. So, technically you can type your entire algorithm inside these curly brackets, and Ruby will run it perfectly for you. However, it is not recommended for practical programming purposes. For example, I can insert a math equation, and as you'll see it prints the value out. String manipulation guide In this section we are going to learn about string manipulation along with a number of examples of how to integrate string manipulation methods in a Ruby program. What is string manipulation? So what exactly is string manipulation? It's the process of altering the format or value of a string, usually by leveraging string methods. String manipulation code examples Let's start with an example. Let's say I want my application to always display the word Astros in capital letters. To do that, I simply write: "Astros".upcase Now if I always a string to be in lower case letters I can use the downcase method, like so: "Astros".downcase Those are both methods I use quite often. However there are other string methods available that we also have at our disposal. For the rare times when you want to literally swap the case of the letters you can leverage the swapcase method: "Astros".swapcase And lastly if you want to reverse the order of the letters in the string we can call the reverse method: "Astros".reverse These methods are built into the String data class and we can call them on any string values in Ruby. Method chaining Another neat thing we can do is join different methods together to get custom output. For example, I can run: "Astros".reverse.upcase The preceding code displays the value SORTSA. This practice of combining different methods with a dot is called method chaining. Split, strip, and join guides for strings In this section, we are going to walk through how to use the split and strip methods in Ruby. These methods will help us clean up strings and convert a string to an array so we can access each word as its own value. Using the strip method Let's start off by analyzing the strip method. Imagine that the input you get from the user or from the database is poorly formatted and contains white space before and after the value. To clean the data up we can use the strip method. For example: str = " The quick brown fox jumped over the quick dog " p str.strip When you run this code, the output is just the sentence without the white space before and after the words. Using the split method Now let's walk through the split method. The split method is a powerful tool that allows you to split a sentence into an array of words or characters. For example, when you type the following code: str = "The quick brown fox jumped over the quick dog" p str.split You'll see that it converts the sentence into an array of words. This method can be particularly useful for long paragraphs, especially when you want to know the number of words in the paragraph. Since the split method converts the string into an array, you can use all the array methods like size to see how many words were in the string. We can leverage method chaining to find out how many words are in the string, like so: str = "The quick brown fox jumped over the quick dog" p str.split.size This should return a value of 9, which is the number of words in the sentence. 
To know the number of letters, we can pass an optional argument to the split method and use the format: str = "The quick brown fox jumped over the quick dog" p str.split(//).size And if you want to see all of the individual letters, we can remove the size method call, like this: p str.split(//) And your output should look like this: Notice, that it also included spaces as individual characters which may or may not be what you want a program to return. This method can be quite handy while developing real-world applications. A good practical example of this method is Twitter. Since this social media site restricts users to 140 characters, this method is sure to be a part of the validation code that counts the number of characters in a Tweet. Using the join method We've walked through the split method, which allows you to convert a string into a collection of characters. Thankfully, Ruby also has a method that does the opposite, which is to allow you to convert an array of characters into a single string, and that method is called join. Let's imagine a situation where we're asked to reverse the words in a string. This is a common Ruby coding interview question, so it's an important concept to understand since it tests your knowledge of how string work in Ruby. Let's imagine that we have a string, such as: str = "backwards am I" And we're asked to reverse the words in the string. The pseudocode for the algorithm would be: Split the string into words Reverse the order of the words Merge all of the split words back into a single string We can actually accomplish each of these requirements in a single line of Ruby code. The following code snippet will perform the task: str.split.reverse.join(' ') This code will convert the single string into an array of strings, for the example it will equal ["backwards", "am", "I"]. From there it will reverse the order of the array elements, so the array will equal: ["I", "am", "backwards"]. With the words reversed, now we simply need to merge the words into a single string, which is where the join method comes in. Running the join method will convert all of the words in the array into one string. Summary In this article, we were introduced to the string data type and how it can be utilized in Ruby. We analyzed how to pass strings into Ruby processes by leveraging string interpolation. We also learned the methods of basic string manipulation and how to find and replace string data. We analyzed how to break strings into smaller components, along with how to clean up string based data. We even introduced the Array class in this article. Resources for Article: Further resources on this subject: Ruby and Metasploit Modules [article] Find closest mashup plugin with Ruby on Rails [article] Building tiny Web-applications in Ruby using Sinatra [article]

Command-Line Tools

Packt
06 Jul 2017
9 min read
In this article by Aaron Torres, author of the book Go Cookbook, we will cover the following recipes: Using command-line arguments; Working with Unix pipes; An ANSI coloring application.

Using command-line arguments

This recipe expands on basic flag handling by constructing a command that supports nested subcommands, demonstrating flag sets and positional arguments passed into your application. It requires a main function to run. There are a number of third-party packages for dealing with complex nested arguments and flags, but we'll again investigate doing so using only the standard library.

Getting ready

You need to perform the following steps for the installation: Download and install Go on your operating system from https://golang.org/doc/install and configure your GOPATH. Open a terminal/console application. Navigate to your GOPATH/src and create a project directory, for example, $GOPATH/src/github.com/yourusername/customrepo. All code will be run and modified from this directory. Optionally, install the latest tested version of the code using the go get github.com/agtorre/go-cookbook/ command.

How to do it...

From your terminal/console application, create and navigate to the chapter2/cmdargs directory. Copy tests from https://github.com/agtorre/go-cookbook/tree/master/chapter2/cmdargs or use this as an exercise to write some of your own. Create a file called cmdargs.go with the following content:

package main

import (
    "flag"
    "fmt"
    "os"
)

const version = "1.0.0"

const usage = `Usage: %s [command]

Commands:
    Greet
    Version
`

const greetUsage = `Usage: %s greet name [flag]

Positional Arguments:
    name
        the name to greet

Flags:
`

// MenuConf holds all the levels
// for a nested cmd line argument
type MenuConf struct {
    Goodbye bool
}

// SetupMenu initializes the base flags
func (m *MenuConf) SetupMenu() *flag.FlagSet {
    menu := flag.NewFlagSet("menu", flag.ExitOnError)
    menu.Usage = func() {
        fmt.Printf(usage, os.Args[0])
        menu.PrintDefaults()
    }
    return menu
}

// GetSubMenu returns a flag set for a submenu
func (m *MenuConf) GetSubMenu() *flag.FlagSet {
    submenu := flag.NewFlagSet("submenu", flag.ExitOnError)
    submenu.BoolVar(&m.Goodbye, "goodbye", false, "Say goodbye instead of hello")
    submenu.Usage = func() {
        fmt.Printf(greetUsage, os.Args[0])
        submenu.PrintDefaults()
    }
    return submenu
}

// Greet will be invoked by the greet command
func (m *MenuConf) Greet(name string) {
    if m.Goodbye {
        fmt.Println("Goodbye " + name + "!")
    } else {
        fmt.Println("Hello " + name + "!")
    }
}

// Version prints the current version that is
// stored as a const
func (m *MenuConf) Version() {
    fmt.Println("Version: " + version)
}

Create a file called main.go with the following content:

package main

import (
    "fmt"
    "os"
    "strings"
)

func main() {
    c := MenuConf{}
    menu := c.SetupMenu()
    menu.Parse(os.Args[1:])

    // we use arguments to switch between commands
    // flags are also an argument
    if len(os.Args) > 1 {
        // we don't care about case
        switch strings.ToLower(os.Args[1]) {
        case "version":
            c.Version()
        case "greet":
            f := c.GetSubMenu()
            if len(os.Args) < 3 {
                f.Usage()
                return
            }
            if len(os.Args) > 3 {
                f.Parse(os.Args[3:])
            }
            c.Greet(os.Args[2])
        default:
            fmt.Println("Invalid command")
            menu.Usage()
            return
        }
    } else {
        menu.Usage()
        return
    }
}

Run the go build command.
Run the following commands and try a few other combinations of arguments:

$ ./cmdargs -h
Usage: ./cmdargs [command]

Commands:
    Greet
    Version

$ ./cmdargs version
Version: 1.0.0

$ ./cmdargs greet
Usage: ./cmdargs greet name [flag]

Positional Arguments:
    name
        the name to greet

Flags:
  -goodbye
        Say goodbye instead of hello

$ ./cmdargs greet reader
Hello reader!

$ ./cmdargs greet reader -goodbye
Goodbye reader!

If you copied or wrote your own tests, go up one directory and run go test, and ensure all tests pass.

How it works...

Flag sets can be used to set up independent lists of expected arguments, usage strings, and more. The developer is required to do validation on a number of arguments, to parse the right subset of arguments into commands, and to define usage strings. This can be error prone and requires a lot of iteration to get completely correct. The flag package makes parsing arguments much easier and includes convenience methods to get the number of flags, arguments, and more. This recipe demonstrates basic ways to construct a complex command-line application using arguments, including a package-level config, required positional arguments, multi-leveled command usage, and how to split these things into multiple files or packages if needed.

Working with Unix pipes

Unix pipes are useful when passing the output of one program to the input of another. Consider the following example:

$ echo "test case" | wc -l
1

In a Go application, the left-hand side of the pipe can be read in using os.Stdin, which acts like a file descriptor. To demonstrate this, this recipe will take input on the left-hand side of a pipe and return a list of words and their number of occurrences. The words will be tokenized on white space.

Getting ready

Refer to the Getting ready section of the Using command-line arguments recipe.

How to do it...

From your terminal/console application, create a new directory, chapter2/pipes. Navigate to that directory and copy tests from https://github.com/agtorre/go-cookbook/tree/master/chapter2/pipes or use this as an exercise to write some of your own. Create a file called pipes.go with the following content:

package main

import (
    "bufio"
    "fmt"
    "os"
)

// WordCount takes a file and returns a map
// with each word as a key and its number of
// appearances as a value
func WordCount(f *os.File) map[string]int {
    result := make(map[string]int)

    // make a scanner to work on the file
    // io.Reader interface
    scanner := bufio.NewScanner(f)
    scanner.Split(bufio.ScanWords)

    for scanner.Scan() {
        result[scanner.Text()]++
    }

    if err := scanner.Err(); err != nil {
        fmt.Fprintln(os.Stderr, "reading input:", err)
    }

    return result
}

func main() {
    fmt.Printf("string: number_of_occurrences\n\n")
    for key, value := range WordCount(os.Stdin) {
        fmt.Printf("%s: %d\n", key, value)
    }
}

Run echo "some string" | go run pipes.go. You may also run:

go build
echo "some string" | ./pipes

You should see the following output:

$ echo "test case" | go run pipes.go
string: number_of_occurrences

test: 1
case: 1

$ echo "test case test" | go run pipes.go
string: number_of_occurrences

test: 2
case: 1

If you copied or wrote your own tests, go up one directory and run go test, and ensure that all tests pass.

How it works...

Working with pipes in Go is pretty simple, especially if you're familiar with working with files. This recipe uses a scanner to tokenize the io.Reader interface of the os.Stdin file object. You can see how you must check for errors after completing all of the reads.
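One refinement worth noting, which is an assumption on my part rather than part of the original recipe: since WordCount only needs something it can scan, accepting an io.Reader instead of *os.File lets the same logic work for pipes, files, and in-memory strings in unit tests. A minimal sketch:

package main

import (
    "bufio"
    "fmt"
    "io"
    "os"
    "strings"
)

// wordCount is a variation of WordCount that accepts any io.Reader,
// so it can be fed os.Stdin (a pipe) or a strings.Reader in a test.
func wordCount(r io.Reader) map[string]int {
    result := make(map[string]int)
    scanner := bufio.NewScanner(r)
    scanner.Split(bufio.ScanWords)
    for scanner.Scan() {
        result[scanner.Text()]++
    }
    if err := scanner.Err(); err != nil {
        fmt.Fprintln(os.Stderr, "reading input:", err)
    }
    return result
}

func main() {
    // A strings.Reader stands in for a pipe here; passing os.Stdin works the same way.
    for word, count := range wordCount(strings.NewReader("test case test")) {
        fmt.Printf("%s: %d\n", word, count)
    }
}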
An ANSI coloring application

Coloring text in an ANSI terminal application is handled by escape codes emitted before and after the section of text that you want colored. This recipe will explore a basic coloring mechanism to color the text red or keep it plain. For a more complete application, take a look at https://github.com/agtorre/gocolorize, which supports many more colors and text types, and implements the fmt.Formatter interface for ease of printing.

Getting ready

Refer to the Getting ready section of the Using command-line arguments recipe.

How to do it...

From your terminal/console application, create and navigate to the chapter2/ansicolor directory. Copy tests from https://github.com/agtorre/go-cookbook/tree/master/chapter2/ansicolor or use this as an exercise to write some of your own. Create a file called color.go with the following content:

package ansicolor

import "fmt"

// Color of text
type Color int

const (
    // ColorNone is default
    ColorNone = iota
    // Red colored text
    Red
    // Green colored text
    Green
    // Yellow colored text
    Yellow
    // Blue colored text
    Blue
    // Magenta colored text
    Magenta
    // Cyan colored text
    Cyan
    // White colored text
    White
    // Black colored text
    Black Color = -1
)

// ColorText holds a string and its color
type ColorText struct {
    TextColor Color
    Text      string
}

func (r *ColorText) String() string {
    if r.TextColor == ColorNone {
        return r.Text
    }
    value := 30
    if r.TextColor != Black {
        value += int(r.TextColor)
    }
    return fmt.Sprintf("\033[0;%dm%s\033[0m", value, r.Text)
}

Create a new directory named example. Navigate to example and then create a file named main.go with the following content. Ensure that you modify the ansicolor import to use the path you set up in step 1:

package main

import (
    "fmt"

    "github.com/agtorre/go-cookbook/chapter2/ansicolor"
)

func main() {
    r := ansicolor.ColorText{ansicolor.Red, "I'm red!"}
    fmt.Println(r.String())

    r.TextColor = ansicolor.Green
    r.Text = "Now I'm green!"
    fmt.Println(r.String())

    r.TextColor = ansicolor.ColorNone
    r.Text = "Back to normal..."
    fmt.Println(r.String())
}

Run go run main.go. Alternatively, you may also run the following:

go build
./example

You should see the following, with the text colored if your terminal supports the ANSI coloring format:

$ go run main.go
I'm red!
Now I'm green!
Back to normal...

If you copied or wrote your own tests, go up one directory and run go test, and ensure that all the tests pass.

How it works...

This application uses a struct to maintain the state of the colored text. In this case, it stores the color of the text and the value of the text. The final string is rendered when you call the String() method, which will either return colored text or plain text depending on the values stored in the struct. By default, the text will be plain.

Summary

In this article, we demonstrated basic ways to construct a complex command-line application using arguments, including a package-level config, required positional arguments, multi-leveled command usage, and how to split these things into multiple files or packages if needed. We saw how to work with Unix pipes and explored a basic coloring mechanism to color text red or keep it plain.

Basics of Python for Absolute Beginners

Packt
19 Jun 2017
5 min read
In this article by Bhaskar Das and Mohit Raj, authors of the book, Learn Python in 7 days, we will learn basics of Python. The Python language had a humble beginning in the late 1980s when a Dutchman, Guido Von Rossum, started working on a fun project that would be a successor to the ABC language with better exception handling and capability to interface with OS Amoeba at Centrum Wiskunde and Informatica. It first appeared in 1991. Python 2.0 was released in the year 2000 and Python 3.0 was released in the year 2008. The language was named Python after the famous British television comedy show Monty Python's Flying Circus, which was one of the favorite television programmes of Guido. Here, we will see why Python has suddenly influenced our lives, various applications that use Python, and Python's implementations. In this article, you will be learning the basic installation steps required to perform on different platforms (that is Windows, Linux and Mac), about environment variables, setting up environment variables, file formats, Python interactive shell, basic syntaxes, and, finally, printing out the formatted output. (For more resources related to this topic, see here.) Why Python? Now you might be suddenly bogged with the question, why Python? According to the Institute of Electrical and Electronics Engineers (IEEE) 2016 ranking, Python ranked third after C and Java. As per Indeed.com's data of 2016, Python job market search ranked fifth. Clearly, all the data points to the ever-rising demand in the job market for Python. It's a cool language if you want to learn it just for fun. Also, you will adore the language if you want to build your career around Python. At the school level, many schools have started including Python programming for kids. With new technologies taking the market by surprise, Python has been playing a dominant role. Whether it's cloud platform, mobile app development, BigData, IoT with Raspberry Pi, or the new Blockchain technology, Python is being seen as a niche language platform to develop and deliver scalable and robust applications. Some key features of the language are: Python programs can run on any platform, you can carry code created in a Windows machine and run it on Mac or Linux Python has a large inbuilt library with prebuilt and portable functionality, known as the standard library Python is an expressive language Python is free and open source Python code is about one third of the size of equivalent C++ and Java code. Python can be both dynamically and strongly typed In dynamically typed, the type of a variable is interpreted at runtime, which means that there is no need to define the type (int, float) of a variable in Python Python applications One of the most famous platform where Python is extensively used is YouTube. Other places where you will find Python being extensively used are special effects in Hollywood movies, drug evolution and discovery, traffic control systems, ERP systems, cloud hosting, e-commerce platform, CRM systems, and whichever field you can think of. Versions At the time of writing this book, the two main versions of the Python programming language available in the market were Python 2.x and Python 3.x. The stable releases at the time of writing this book were Python 2.7.13 and Python 3.6.0. Implementations of Python Major implementations include CPython, Jython, IronPython, MicroPython and PyPy. 
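Before moving on to installation, here is a tiny sketch of the dynamic and strong typing mentioned in the key features above; the variable names are illustrative, and the printed type names assume Python 3 (Python 2 prints <type 'int'> instead of <class 'int'>):

# The same name can be rebound to values of different types;
# the type is checked at runtime rather than declared up front.
value = 42
print(type(value))        # <class 'int'>

value = "forty-two"
print(type(value))        # <class 'str'>

# Strong typing: incompatible types are not silently converted.
try:
    value + 1
except TypeError as err:
    print("TypeError:", err)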
Installation

Here, we will walk through the installation of Python on three different OS platforms, namely Windows, Linux, and Mac OS. Let's begin with the Windows platform.

Installation on the Windows platform

Python 2.x can be downloaded from https://www.python.org/downloads. The installer is simple and easy to use. Follow these steps to install the setup: Once you click on the setup installer, you will get a small window on your desktop screen as shown. Click on Next. Provide a suitable installation folder to install Python. If you don't provide an installation folder, the installer will automatically create one for you, as shown in the screenshot. Click on Next. After the completion of Step 2, you will get a window to customize Python, as shown in the following screenshot. Note that the Add python.exe to Path option is marked with an x, meaning it is not selected by default. Select this option to add Python to the system Path variable. Click on Next. Finally, click Finish to complete the installation.

Summary

So far, we walked through the beginnings and brief history of Python and looked at its various implementations and flavors. You also learned about installing Python on Windows. We hope this article has sparked enough interest in Python and serves as your first step into the kingdom of Python, with its enormous possibilities!
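Once the installer finishes, a quick way to confirm the setup is to open a new Command Prompt and ask Python for its version; the output below assumes the 2.7.13 installer used above and that the Add python.exe to Path option was selected:

C:\> python --version
Python 2.7.13

C:\> python -c "print('Hello from Python')"
Hello from Python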

Exploring Functions

Packt
16 Jun 2017
12 min read
In this article by Marius Bancila, author of the book Modern C++ Programming Cookbook covers the following recipes: Defaulted and deleted functions Using lambdas with standard algorithms (For more resources related to this topic, see here.) Defaulted and deleted functions In C++, classes have special members (constructors, destructor and operators) that may be either implemented by default by the compiler or supplied by the developer. However, the rules for what can be default implemented are a bit complicated and can lead to problems. On the other hand, developers sometimes want to prevent objects to be copied, moved or constructed in a particular way. That is possible by implementing different tricks using these special members. The C++11 standard has simplified many of these by allowing functions to be deleted or defaulted in the manner we will see below. Getting started For this recipe, you need to know what special member functions are, and what copyable and moveable means. How to do it... Use the following syntax to specify how functions should be handled: To default a function use =default instead of the function body. Only special class member functions that have defaults can be defaulted. struct foo { foo() = default; }; To delete a function use =delete instead of the function body. Any function, including non-member functions, can be deleted. struct foo { foo(foo const &) = delete; }; void func(int) = delete; Use defaulted and deleted functions to achieve various design goals such as the following examples: To implement a class that is not copyable, and implicitly not movable, declare the copy operations as deleted. class foo_not_copiable { public: foo_not_copiable() = default; foo_not_copiable(foo_not_copiable const &) = delete; foo_not_copiable& operator=(foo_not_copiable const&) = delete; }; To implement a class that is not copyable, but it is movable, declare the copy operations as deleted and explicitly implement the move operations (and provide any additional constructors that are needed). class data_wrapper { Data* data; public: data_wrapper(Data* d = nullptr) : data(d) {} ~data_wrapper() { delete data; } data_wrapper(data_wrapper const&) = delete; data_wrapper& operator=(data_wrapper const &) = delete; data_wrapper(data_wrapper&& o) :data(std::move(o.data)) { o.data = nullptr; } data_wrapper& operator=(data_wrapper&& o) { if (this != &o) { delete data; data = std::move(o.data); o.data = nullptr; } return *this; } }; To ensure a function is called only with objects of a specific type, and perhaps prevent type promotion, provide deleted overloads for the function (the example below with free functions can also be applied to any class member functions). template <typename T> void run(T val) = delete; void run(long val) {} // can only be called with long integers How it works... A class has several special members that can be implemented by default by the compiler. These are the default constructor, copy constructor, move constructor, copy assignment, move assignment and destructor. If you don't implement them, then the compiler does it, so that instances of a class can be created, moved, copied and destructed. However, if you explicitly provide one or more, then the compiler will not generate the others according to the following rules: If a user defined constructor exists, the default constructor is not generated by default. If a user defined virtual destructor exists, the default constructor is not generated by default. 
If a user-defined move constructor or move assignment operator exist, then the copy constructor and copy assignment operator are not generated by default. If a user defined copy constructor, move constructor, copy assignment operator, move assignment operator or destructor exist, then the move constructor and move assignment operator are not generated by default. If a user defined copy constructor or destructor exists, then the copy assignment operator is generated by default. If a user-defined copy assignment operator or destructor exists, then the copy constructor is generated by default. Note that the last two are deprecated rules and may no longer be supported by your compiler. Sometimes developers need to provide empty implementations of these special members or hide them in order to prevent the instances of the class to be constructed in a specific manner. A typical example is a class that is not supposed to be copyable. The classical pattern for this is to provide a default constructor and hide the copy constructor and copy assignment operators. While this works, the explicitly defined default constructor makes the class to no longer be considered trivial and therefore a POD type (that can be constructed with reinterpret_cast). The modern alternative to this is using deleted function as shown in the previous section. When the compiler encounters the =default in the definition of a function it will provide the default implementation. The rules for special member functions mentioned earlier still apply. Functions can be declared =default outside the body of a class if and only if they are inlined. class foo     {      public:      foo() = default;      inline foo& operator=(foo const &);     };     inline foo& foo::operator=(foo const &) = default;     When the compiler encounters the =delete in the definition of a function it will prevent the calling of the function. However, the function is still considered during overload resolution and only if the deleted function is the best match the compiler generates an error. For example, giving the previously defined overloads for function run() only calls with long integers are possible. Calls with arguments of any other type, including int, for which an automatic type promotion to long exists, would determine a deleted overload to be considered the best match and therefore the compiler will generate an error: run(42); // error, matches a deleted overload     run(42L); // OK, long integer arguments are allowed     Note that previously declared functions cannot be deleted, as the =delete definition must be the first declaration in a translation unit: void forward_declared_function();     // ...     void forward_declared_function() = delete; // error     The rule of thumb (also known as The Rule of Five) for class special member functions is: if you explicitly define any of copy constructor, move constructor, copy assignment, move assignment or destructor then you must either explicitly define or default all of them. Using lambdas with standard algorithms One of the most important modern features of C++ is lambda expressions, also referred as lambda functions or simply lambdas. Lambda expressions enable us to define anonymous function objects that can capture variables in the scope and be invoked or passed as arguments to functions. Lambdas are useful for many purposes and in this recipe, we will see how to use them with standard algorithms. 
Getting ready In this recipe, we discuss standard algorithms that take an argument that is a function or predicate that is applied to the elements it iterates through. You need to know what unary and binary functions are, and what are predicates and comparison functions. You also need to be familiar with function objects because lambda expressions are syntactic sugar for function objects. How to do it... Prefer to use lambda expressions to pass callbacks to standard algorithms instead of functions or function objects: Define anonymous lambda expressions in the place of the call if you only need to use the lambda in a single place. auto numbers = std::vector<int>{ 0, 2, -3, 5, -1, 6, 8, -4, 9 }; auto positives = std::count_if( std::begin(numbers), std::end(numbers), [](int const n) {return n > 0; }); Define a named lambda, that is, assigned to a variable (usually with the auto specifier for the type), if you need to call the lambda in multiple places. auto ispositive = [](int const n) {return n > 0; }; auto positives = std::count_if( std::begin(numbers), std::end(numbers), ispositive); Use generic lambda expressions if you need lambdas that only differ in their argument types (available since C++14). auto positives = std::count_if( std::begin(numbers), std::end(numbers), [](auto const n) {return n > 0; }); How it works... The non-generic lambda expression shown above takes a constant integer and returns true if it is greater than 0, or false otherwise. The compiler defines an unnamed function object with the call operator having the signature of the lambda expression. struct __lambda_name__     {     bool operator()(int const n) const { return n > 0; }     };     The way the unnamed function object is defined by the compiler depends on the way we define the lambda expression, that can capture variables, use the mutable specifier or exception specifications or may have a trailing return type. The __lambda_name__ function object shown earlier is actually a simplification of what the compiler generates because it also defines a default copy and move constructor, a default destructor, and a deleted assignment operator. It must be well understood that the lambda expression is actually a class. In order to call it, the compiler needs to instantiate an object of the class. The object instantiated from a lambda expression is called a lambda closure. In the next example, we want to count the number of elements in a range that are greater or equal to 5 and less or equal than 10. The lambda expression, in this case, will look like this: auto numbers = std::vector<int>{ 0, 2, -3, 5, -1, 6, 8, -4, 9 };     auto start{ 5 };     auto end{ 10 };     auto inrange = std::count_if(      std::begin(numbers), std::end(numbers),      [start,end](int const n) {return start <= n && n <= end;});     This lambda captures two variables, start and end, by copy (that is, value). The result unnamed function object created by the compiler looks very much like the one we defined above. 
With the default and deleted special members mentioned earlier, the class looks like this: class __lambda_name_2__     {    int start_; int end_; public: explicit __lambda_name_2__(int const start, int const end) : start_(start), end_(end) {}    __lambda_name_2__(const __lambda_name_2__&) = default;    __lambda_name_2__(__lambda_name_2__&&) = default;    __lambda_name_2__& operator=(const __lambda_name_2__&)     = delete;    ~__lambda_name_2__() = default;      bool operator() (int const n) const    { return start_ <= n && n <= end_; }     };     The lambda expression can capture variables by copy (or value) or by reference, and different combinations of the two are possible. However, it is not possible to capture a variable multiple times and it is only possible to have & or = at the beginning of the capture list. A lambda can only capture variables from an enclosing function scope. It cannot capture variables with static storage duration (that means variables declared in namespace scope or with the static or external specifier). The following table shows various combinations for the lambda captures semantics. Lambda Description [](){} Does not capture anything [&](){} Captures everything by reference [=](){} Captures everything by copy [&x](){} Capture only x by reference [x](){} Capture only x by copy [&x...](){} Capture pack extension x by reference [x...](){} Capture pack extension x by copy [&, x](){} Captures everything by reference except for x that is captured by copy [=, &x](){} Captures everything by copy except for x that is captured by reference [&, this](){} Captures everything by reference except for pointer this that is captured by copy (this is always captured by copy) [x, x](){} Error, x is captured twice [&, &x](){} Error, everything is captured by reference, cannot specify again to capture x by reference [=, =x](){} Error, everything is captured by copy, cannot specify again to capture x by copy [&this](){} Error, pointer this is always captured by copy [&, =](){} Error, cannot capture everything both by copy and by reference The general form of a lambda expression, as of C++17, looks like this:  [capture-list](params) mutable constexpr exception attr -> ret { body }    All parts shown in this syntax are actually optional except for the capture list, that can, however, be empty, and the body, that can also be empty. The parameter list can actually be omitted if no parameters are needed. The return type does not need to be specified as the compiler can infer it from the type of the returned expression. The mutable specifier (that tells the compiler the lambda can actually modify variables captured by copy), the constexpr specifier (that tells the compiler to generate a constexpr call operator) and the exception specifiers and attributes are all optional. The simplest possible lambda expression is []{}, though it is often written as [](){}. There's more... There are cases when lambda expressions only differ in the type of their arguments. In this case, the lambdas can be written in a generic way, just like templates, but using the auto specifier for the type parameters (no template syntax is involved). Summary Functions are a fundamental concept in programming; regardless the topic we discussed we end up writing functions. This article contains recipes related to functions. This article, however, covers modern language features related to functions and callable objects. 
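As a quick recap, here is a small compilable sketch, assembled for this summary rather than taken verbatim from the recipes above, that combines a deleted overload with a generic lambda passed to a standard algorithm:

#include <algorithm>
#include <iostream>
#include <vector>

// Deleted overloads reject every argument type...
template <typename T>
void run(T) = delete;

// ...except long integers, which match this overload exactly.
void run(long val)
{
    std::cout << "run(" << val << ")\n";
}

int main()
{
    run(42L);        // OK
    // run(42);      // error: run(int) would select the deleted template overload

    std::vector<int> numbers{ 0, 2, -3, 5, -1, 6, 8, -4, 9 };

    // A generic lambda (C++14) works for any element type supporting operator>.
    auto positives = std::count_if(std::begin(numbers), std::end(numbers),
                                   [](auto const n) { return n > 0; });

    std::cout << positives << " positive values\n";
}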

Getting started with C++ Features

Packt
05 Apr 2017
7 min read
In this article by Jacek Galowicz author of the book C++ STL Cookbook, we will learn new C++ features and how to use structured bindings to return multiple values at once. (For more resources related to this topic, see here.) Introduction C++ got a lot of additions in C++11, C++14, and most recently C++17. By now, it is a completely different language than it was just a decade ago. The C++ standard does not only standardize the language, as it needs to be understood by the compilers, but also the C++ standard template library (STL). We will see how to access individual members of pairs, tuples, and structures comfortably with structured bindings, and how to limit variable scopes with the new if and switch variable initialization capabilities. The syntactical ambiguities, which were introduced by C++11 with the new bracket initialization syntax, which looks the same for initializer lists, were fixed by new bracket initializer rules. The exact type of template class instances can now be deduced from the actual constructor arguments, and if different specializations of a template class shall result in completely different code, this is now easily expressible with constexpr-if. The handling of variadic parameter packs in template functions became much easier in many cases with the new fold expressions. At last, it became more comfortable to define static globally accessible objects in header-only libraries with the new ability to declare inline variables, which was only possible for functions before. Using structured bindings to return multiple values at once C++17 comes with a new feature which combines syntactic sugar and automatic type deduction: Structured bindings. These help assigning values from pairs, tuples, and structs into individual variables. How to do it... Applying a structured binding in order to assign multiple variables from one bundled structure is always one step: Accessing std::pair: Imagine we have a mathematical function divide_remainder, which accepts a dividend and a divisor parameter, and returns the fraction of both as well as the remainder. It returns those values using an std::pair bundle: std::pair<int, int> divide_remainder(int dividend, int divisor); Instead of accessing the individual values of the resulting pair like this: const auto result (divide_remainder(16, 3)); std::cout << "16 / 3 is " << result.first << " with a remainder of " << result.second << "n"; We can now assign them to individual variables with expressive names, which is much better to read: auto [fraction, remainder] = divide_remainder(16, 3); std::cout << "16 / 3 is " << fraction << " with a remainder of " << remainder << "n"; Structured bindings also work with std::tuple: Let's take the following example function, which gets us online stock information: std::tuple<std::string, std::chrono::time_point, double> stock_info(const std::string &name); Assigning its result to individual variables looks just like in the example before: const auto [name, valid_time, price] = stock_info("INTC"); Structured bindings also work with custom structures: Let's assume a structure like the following: struct employee { unsigned id; std::string name; std::string role; unsigned salary; }; Now we can access these members using structured bindings. 
We will even do that in a loop, assuming we have a whole vector of those: int main() { std::vector<employee> employees {/* Initialized from somewhere */}; for (const auto &[id, name, role, salary] : employees) { std::cout << "Name: " << name << "Role: " << role << "Salary: " << salary << "n"; } } How it works... Structured bindings are always applied with the same pattern: auto [var1, var2, ...] = <pair, tuple, struct, or array expression>; The list of variables var1, var2, ... must exactly match the number of variables which are contained by the expression being assigned from. The <pair, tuple, struct, or array expression> must be one of the following: An std::pair. An std::tuple. A struct. All members must be non-static and be defined in the same base class. An array of fixed size. The type can be auto, const auto, const auto& and even auto&&. Not only for the sake of performance, always make sure to minimize needless copies by using references when appropriate. If we write too many or not enough variables between the square brackets, the compiler will error out, telling us about our mistake: std::tuple<int, float, long> tup {1, 2.0, 3}; auto [a, b] = tup; This example obviously tries to stuff a tuple variable with three members into only two variables. The compiler immediately chokes on this and tells us about our mistake: error: type 'std::tuple<int, float, long>' decomposes into 3 elements, but only 2 names were provided auto [a, b] = tup; There's more... A lot of fundamental data structures from the STL are immediately accessible using structured bindings without us having to change anything. Consider for example a loop, which prints all items of an std::map: std::map<std::string, size_t> animal_population { {"humans", 7000000000}, {"chickens", 17863376000}, {"camels", 24246291}, {"sheep", 1086881528}, /* … */ }; for (const auto &[species, count] : animal_population) { std::cout << "There are " << count << " " << species << " on this planet.n"; } This particular example works, because when we iterate over a std::map container, we get std::pair<key_type, value_type> items on every iteration step. And exactly those are unpacked using the structured bindings feature (Assuming that the species string is the key, and the population count the value being associated with the key), in order to access them individually in the loop body. Before C++17, it was possible to achieve a similar effect using std::tie: int remainder; std::tie(std::ignore, remainder) = divide_remainder(16, 5); std::cout << "16 % 5 is " << remainder << "n"; This example shows how to unpack the result pair into two variables. std::tie is less powerful than structured bindings in the sense that we have to define all variables we want to bind to before. On the other hand, this example shows a strength of std::tie which structured bindings do not have: The value std::ignore acts as a dummy variable. The fraction part of the result is assigned to it, which leads to that value being dropped because we do not need it in that example. 
Back in the past, the divide_remainder function would have been implemented the following way, using output parameters: bool divide_remainder(int dividend, int divisor, int &fraction, int &remainder); Accessing it would have looked like the following: int fraction, remainder; const bool success {divide_remainder(16, 3, fraction, remainder)}; if (success) { std::cout << "16 / 3 is " << fraction << " with a remainder of " << remainder << "n"; } A lot of people will still prefer this over returning complex structures like pairs, tuples, and structs, arguing that this way the code would be faster, due to avoided intermediate copies of those values. This is not true any longer for modern compilers, which optimize intermediate copies away. Apart from the missing language features in C, returning complex structures via return value was considered slow for a long time, because the object had to be initialized in the returning function, and then copied into the variable which shall contain the return value on the caller side. Modern compilers support return value optimization (RVO), which enables for omitting intermediate copies. Summary Thus we successfully studied how to use structured bindings to return multiple values at once in C++ 17 using code examples. Resources for Article: Further resources on this subject: Creating an F# Project [article] Hello, C#! Welcome, .NET Core! [article] Exploring Structure from Motion Using OpenCV [article]

Understanding the Dependencies of a C++ Application

Packt
05 Apr 2017
9 min read
This article by Richard Grimes, author of the book, Beginning C++ Programming explains the dependencies of a C++ application. A C++ project will produce an executable or library, and this will be built by the linker from object files. The executable or library is dependent upon these object files. An object file will be compiled from a C++ source file (and potentially one or more header files). The object file is dependent upon these C++ source and header files. Understanding dependencies is important because it helps you understand the order to compile the files in your project, and it allows you to make your project builds quicker by only compiling those files that have changed. (For more resources related to this topic, see here.) Libraries When you include a file within your source file the code within that header file will be accessible to your code. Your include file may contain whole function or class definitions (these will be covered in later chapters) but this will result in a problem: multiple definitions of a function or class. Instead, you can declare a class or function prototype, which indicates how calling code will call the function without actually defining it. Clearly the code will have to be defined elsewhere, and this could be a source file or a library, but the compiler will be happy because it only sees one definition. A library is code that has already been defined, it has been fully debugged and tested, and therefore users should not need to have access to the source code. The C++ Standard Library is mostly shared through header files, which helps you when you debug your code, but you must resist any temptation to edit these files. Other libraries will be provided as compiled libraries. There are essentially two types of compiled libraries: static libraries and dynamic link libraries. If you use a static library then the compiler will copy the compiled code that you use from the static library and place it in your executable. If you use a dynamic link (or shared) library then the linker will add information used during runtime (it may be when the executable is loaded, or it may even be delayed until the function is called) to load the shared library into memory and access the function. Windows uses the extension lib for static libraries and dll for dynamic link libraries. GNU gcc uses the extension a for static libraries and so for shared libraries. If you use library code in a static or dynamic link library the compiler will need to know that you are calling a function correctly—to make sure your code calls a function with the correct number of parameters and correct types. This is the purpose of a function prototype—it gives the compiler the information it needs to know about calling the function without providing the actual body of the function, the function definition. In general, the C++ Standard Library will be included into your code through the standard header files. The C Runtime Library (which provides some code for the C++ Standard Library) will be static linked, but if the compiler provides a dynamic linked version you will have a compiler option to use this. Pre-compiled Headers When you include a file into your source file the preprocessor will include the contents of that file (after taking into account any conditional compilation directives) and recursively any files included by that file. As illustrated earlier, this could result in thousands of lines of code. 
As you develop your code you will often compile the project so that you can test the code. Every time you compile your code the code defined in the header files will also be compiled even though the code in library header files will not have changed. With a large project this can make the compilation take a long time. To get around this problem compilers often offer an option to pre-compile headers that will not change. Creating and using precompiled headers is compiler specific. For example, with gcc you compile a header as if it is a C++ source file (with the /x switch) and the compiler creates a file with an extension of gch. When gcc compiles source files that use the header it will search for the gch file and if it finds the precompiled header it will use that, otherwise it will use the header file. In Visual C++ the process is a bit more complicated because you have to specifically tell the compiler to look for a precompiled header when it compiles a source file. The convention in Visual C++ projects is to have a source file called stdafx.cpp which has a single line that includes the file stdafx.h. You put all your stable header file includes in stdafx.h. Next, you create a precompiled header by compiling stdafx.cpp using the /Yc compiler option to specify that stdafx.h contains the stable headers to compile. This will create a pch file (typically, Visual C++ will name it after your project) containing the code compiled up to the point of the inclusion of the stdafx.h header file. Your other source files must include the stdafx.h header file as the first header file, but it may also include other files. When you compile your source files you use the /Yu switch to specify the stable header file (stdafx.h) and the compiler will use the precompiled header pch file instead of the header. When you examine large projects you will often find precompiled headers are used, and as you can see, it alters the file structure of the project. The example later in this chapter will show how to create and use precompiled headers. Project Structure It is important to organize your code into modules to enable you to maintain it effectively. Even if you are writing C-like procedural code (that is, your code involves calls to functions in a linear way) you will also benefit from organizing it into modules. For example, you may have functions that manipulate strings and other functions that access files, so you may decide to put the definition of the string functions in one source file, string.cpp, and the definition of the file functions in another file, file.cpp. So that other modules in the project can use these files you must declare the prototypes of the functions in a header file and include that header in the module that uses the functions. There is no absolute rule in the language about the relationship between the header files and the source files that contain the definition of the functions. You may have a header file called string.h for the functions in string.cpp and a header file called file.h for the functions in file.cpp. Or you may have just one file called utilities.h that contains the declarations for all the functions in both files. The only rule that you have to abide by is that at compile time the compiler must have access to a declaration of the function in the current source file, either through a header file, or the function definition itself. 
The compiler will not look forward in a source file, so if a function calls another function in the same source file, that called function must have already been defined before the calling function, or there must be a prototype declaration. This leads to a typical convention of having a header file associated with each source file that contains the prototypes of the functions in the source file, and the source file includes this header. This convention becomes more important when you write classes.

Managing Dependencies

When a project is built with a build tool, checks are performed to see whether the outputs of the build exist and, if not, the appropriate actions are performed to build them. Common terminology is that the output of a build step is called a target, and the inputs of the build step (for example, source files) are the dependencies of that target. A target's dependencies are the files used to make it, and these dependencies may themselves be targets of other build actions with their own dependencies. For example, consider a project with the following dependencies:

In this project there are three source files (main.cpp, file1.cpp, and file2.cpp), each of which includes the same header, utils.h, which is precompiled (hence the fourth source file, utils.cpp, whose only purpose is to include utils.h). All of the source files depend on utils.pch, which in turn depends upon utils.h. The source file main.cpp has the main function and calls functions in the other two source files (file1.cpp and file2.cpp), and it accesses those functions through the associated header files, file1.h and file2.h.

On the first compilation the build tool will see that the executable depends on the four object files, and so it will look for the rule to build each one. In the case of the three C++ source files this means compiling the cpp files, but since utils.obj is used to support the precompiled header, the build rule will be different from that of the other files. When the build tool has made these object files it will then link them together, along with any library code (not shown here).

Subsequently, if you change file2.cpp and build the project, the build tool will see that only file2.cpp has changed, and since only file2.obj depends on file2.cpp, all the build tool needs to do is compile file2.cpp and then link the new file2.obj with the existing object files to create the executable. If you change the header file file2.h, the build tool will see that two files depend on this header file, file2.cpp and main.cpp, so it will compile these two source files and link the two new object files, file2.obj and main.obj, with the existing object files to form the executable. If, however, the precompiled header file, utils.h, changes, all of the source files will have to be recompiled.

Summary

For a small project, dependencies are easy to manage, and as you have seen, for a single source file project you do not even have to worry about calling the linker because the compiler will do that automatically. As a C++ project gets bigger, managing dependencies gets more complex, and this is where development environments like Visual C++ become vital.

Resources for Article:
Further resources on this subject:
- Introduction to C# and .NET [article]
- Preparing to Build Your Own GIS Application [article]
- Writing a Fully Native Application [article]

Packt
15 Mar 2017
13 min read
Save for later

About Java Virtual Machine – JVM Languages

In this article by Vincent van der Leun, the author of the book Introduction to JVM Languages, you will learn the history of the JVM and five important languages that run on the JVM.

(For more resources related to this topic, see here.)

While many other programming languages have come in and gone out of the spotlight, Java has always managed to return to impressive positions, either near, and lately even at, the top of the list of the most used languages in the world. It didn't take language designers long to realize that their languages, too, could run on the JVM (the virtual machine that powers Java applications) and take advantage of its performance, features, and extensive class library. In this article, we will take a look at common JVM use cases and various JVM languages.

The JVM was designed from the ground up to run anywhere. Its initial goal was to run on set-top boxes, but when Sun Microsystems found out the market was not ready in the mid '90s, they decided to bring the platform to desktop computers as well. To make all those use cases possible, Sun invented their own binary executable format and called it Java bytecode. To run programs compiled to Java bytecode, a Java Virtual Machine implementation must be installed on the system. The most popular JVM implementations nowadays are Oracle's free but partially proprietary implementation and the fully open source OpenJDK project (Oracle's Java runtime is largely based on OpenJDK).

This article covers the following subjects:
- Popular JVM use cases
- The Java language
- The Scala language
- The Clojure language
- The Kotlin language
- The Groovy language

The Java platform as published by Google on Android phones and tablets is not covered in this article. One of the reasons is that the Java version used on Android is still based on the Java 6 SE platform from 2006. However, some of the languages covered in this article can be used with Android. Kotlin, in particular, is a very popular choice for modern Android development.

Popular use cases

Since the JVM platform was designed with a lot of different use cases in mind, it will be no surprise that the JVM is a very viable choice for very different scenarios. We will briefly look at the following use cases:
- Web applications
- Big data
- Internet of Things (IoT)

Web applications

With its focus on performance, the JVM is a very popular choice for web applications. When built correctly, applications can scale really well, if needed, across many different servers. The JVM is a well-understood platform, meaning that it is predictable, and many tools are available to debug and profile problematic applications. Because of its open nature, monitoring JVM internals is also very feasible. For web applications that have to serve thousands of users concurrently, this is an important advantage. The JVM already plays a huge role in the cloud. Popular examples of companies that use the JVM for core parts of their cloud-based services include Twitter (famously using Scala), Amazon, Spotify, and Netflix. But the actual list is much larger.

Big data

Big data is a hot topic. When data is regarded as too big for traditional databases to analyze, one can set up multiple clusters of servers to process it. Analyzing the data in this context can mean, for example, searching for something specific, looking for patterns, and calculating statistics.
This data could have been collected from web servers (that, for example, logged visitors' clicks), output obtained from external sensors at a manufacturing plant, legacy servers that have been producing log files over many years, and so forth. Data sizes can vary wildly as well, but often they will take up multiple terabytes in total.

Two popular technologies in the big data arena are:
- Apache Hadoop (provides storage of data and takes care of data distribution to other servers)
- Apache Spark (uses Hadoop to stream data and makes it possible to analyze the incoming data)

Hadoop is for the most part written in Java, while Spark is largely written in Scala; both run on the JVM. While both offer interfaces for a lot of programming languages and platforms, it will not be a surprise that the JVM is among them. The functional programming paradigm focuses on creating code that can run safely on multiple CPU cores, so languages that are fully specialized in this style, such as Scala or Clojure, are very appropriate candidates to be used with either Spark or Hadoop.

Internet of Things - IoT

Portable devices that feature internet connectivity are very common these days. Since Java was created with the idea of running on embedded devices from the beginning, the JVM is, yet again, at an advantage here. For memory-constrained systems, Oracle offers the Java Micro Edition Embedded platform. It is meant for commercial IoT devices that do not require a standard graphical or console-based user interface. For devices that can spare more memory, the Java SE Embedded edition is available. The Java SE Embedded version is very close to the Java Standard Edition discussed in this article. When running a full Linux environment, it can be used to provide desktop GUIs for full user interaction. Java SE Embedded is installed by default on Raspbian, the standard Linux distribution of the popular Raspberry Pi low-cost, credit card-sized computers. Both Java ME Embedded and Java SE Embedded can access the General Purpose Input/Output (GPIO) pins on the Raspberry Pi, which means that sensors and other peripherals connected to these ports can be accessed by Java code.

Java

Java is the language that started it all. Source code written in Java is generally easy to read and comprehend. It started out as a relatively simple language to learn. As more and more features were added to the language over the years, its complexity increased somewhat. The good news is that beginners don't have to worry about the more advanced topics too much until they are ready to learn them. Programmers who want to choose a JVM language other than Java can still benefit from learning the Java syntax, especially once they start using libraries or frameworks that provide Javadocs as API documentation. Javadoc is a tool that generates HTML documentation based on special comments in the source code. Many libraries and frameworks provide the HTML documents generated by Javadoc as part of their documentation.

While Java is not considered a pure Object Oriented Programming (OOP) language because of its support for primitive types, it is still a serious OOP language. Java is known for its verbosity; it has strict requirements for its syntax.
A typical Java class looks like this:

package com.example;

import java.util.Date;

public class JavaDemo {
    private Date dueDate = new Date();

    public void setDueDate(Date dueDate) {
        this.dueDate = dueDate;
    }

    public Date getDueDate() {
        return this.dueDate;
    }
}

A real-world example would implement some other important additional methods that were omitted for readability. Note that when declaring the dueDate variable, the Date class name has to be specified twice: first when declaring the variable type, and a second time when instantiating an object of this class.

Scala

Scala is a rather unique language. It has strong support for functional programming, while also being a pure object oriented programming language at the same time. While a lot more can be said about functional programming, in a nutshell, functional programming is about writing code in such a way that existing variables are not modified while the program is running. Values are specified as function parameters, and output is generated based on those parameters. Functions are required to return the same output when called with the same parameters each time. A class is not supposed to hold internal state that can change over time. When data changes, a new copy of the object must be returned, and all existing copies of the data must be left alone. When following the rules of functional programming, which requires a specific mindset of programmers, the code is safe to be executed on multiple threads on different CPU cores simultaneously.

The Scala installation offers two ways of running Scala code. It offers an interactive shell where code can be entered directly and run right away. This program can also be used to run Scala source code directly without first compiling it manually. Also offered is scalac, a traditional compiler that compiles Scala source code to Java bytecode, producing files with the .class extension. Scala comes with its own Scala Standard Library. It complements the Java Class Library that is bundled with the Java Runtime Environment (JRE) and installed as part of the Java Development Kit (JDK). It contains classes that are optimized to work with Scala's language features. Among many other things, it implements its own collection classes, while still offering compatibility with Java's collections.

Scala's equivalent of the code shown in the Java section would be something like the following:

package com.example

import java.util.Date

class ScalaDemo(var dueDate: Date) {
}

Scala will generate the getter and setter methods automatically. Note that this class does not follow the rules of functional programming, as the dueDate variable is mutable (it can be changed at any time). It would be better to define the class like this:

class ScalaDemo(val dueDate: Date) {
}

By defining dueDate with the val keyword instead of the var keyword, the variable has become immutable. Now Scala only generates a getter method, and dueDate can only be set when creating an instance of the ScalaDemo class. It will never change during the lifetime of the object.

Clojure

Clojure is a language that is rather different from the other languages covered in this article. It is a language largely inspired by the Lisp programming language, which originally dates from the late 1950s. Lisp stayed relevant by keeping up to date with technology and the times. Today, Common Lisp and Scheme are arguably the two most popular Lisp dialects in use, and Clojure is influenced by both. Unlike Java and Scala, Clojure is a dynamic language.
Variables do not have fixed types, and no type checking is performed by the compiler when compiling. When a variable that is not compatible with the code in a function is passed to that function, an exception will be thrown at run time. Also noteworthy is that Clojure is not an object oriented language, unlike all the other languages in this article. Clojure still offers interoperability with Java and the JVM, as it can create instances of objects and can also generate class files that other languages on the JVM can use to run bytecode compiled by Clojure.

Instead of demonstrating how to generate a class in Clojure, let's write a function in Clojure that consumes a JavaDemo instance and prints its dueDate:

(defn consume-javademo-instance [d] (println (.getDueDate d)))

This looks rather different from the other source code in this article. Code in Clojure is written by adding code to a list. Each open parenthesis and the corresponding closing parenthesis in the preceding code starts and ends a new list. The first entry in the list is the function that will be called, while the other entries of that list are its parameters. By nesting the lists, complex evaluations can be written. The defn macro defines a new function that will be called consume-javademo-instance. It takes one parameter, called d. This parameter should be the JavaDemo instance. The list that follows is the body of the function, which prints the value returned by the getDueDate method of the JavaDemo instance passed in the variable d.

Kotlin

Like Java and Scala, Kotlin is a statically typed language. Kotlin is mainly focused on object oriented programming, but supports procedural programming as well, so the usage of classes and objects is not required. Kotlin's syntax is not compatible with Java; the code in Kotlin is much less verbose. It still offers very strong compatibility with Java and the JVM platform. The Kotlin equivalent of the Java code would be as follows:

import java.util.Date

data class KotlinDemo(var dueDate: Date)

One of the more noticeable features of Kotlin is its type system, especially its handling of null references. In many programming languages, a reference type variable can hold a null reference, which means that the reference literally points to nothing. When accessing members of such a null reference on the JVM, the dreaded NullPointerException is thrown. When declaring variables in the normal way, Kotlin does not allow references to be assigned null. If you want a variable that can be null, you'll have to add a question mark (?) to its definition:

var thisDateCanBeNull: Date? = Date()

When you now access the variable, you'll have to let the compiler know that you are aware that the variable can be null:

if (thisDateCanBeNull != null) println("${thisDateCanBeNull.toString()}")

Without the if check, the code would refuse to compile.

Groovy

Groovy was an early alternative language for the JVM. It offers, to a large degree, Java syntax compatibility, but the code in Groovy can be much more compact because many source code elements that are required in Java are optional in Groovy. Like Clojure and mainstream languages such as Python, Groovy is a dynamic language (with a twist, as we will discuss next). Unusually, while Groovy is a dynamic language (types do not have to be specified when defining variables), it still offers optional static compilation of classes.
Since statically compiled code usually performs better than dynamic code, this can be used when performance is important for a particular class. You'll give up some convenience when switching to static compilation, though. Another difference from Java is that Groovy supports operator overloading. Because Groovy is a dynamic language, it offers some tricks that would be very hard to implement in Java. It comes with a huge library of support classes, including many wrapper classes that make working with the Java Class Library a much more enjoyable experience.

A JavaDemo equivalent in Groovy would look as follows:

@Canonical
class GroovyDemo {
    Date dueDate
}

The @Canonical annotation is not necessary but is recommended, because it will automatically generate some support methods that are used often and required in many use cases. Even without it, Groovy will automatically generate the getter and setter methods that we had to define manually in Java.

Summary

We started by looking at the history of the Java Virtual Machine and studied some important use cases of the Java Virtual Machine: web applications, big data, and IoT (Internet of Things). We then looked at five important languages that run on the JVM: Java (a very readable, but also very verbose, statically typed language), Scala (both a strong functional and OOP programming language), Clojure (a non-OOP functional programming language inspired by Lisp and Haskell), Kotlin (a statically typed language that protects the programmer from the very common NullPointerException errors), and Groovy (a dynamic language with static compiler support that offers a ton of features).

Resources for Article:
Further resources on this subject:
- Using Spring JMX within Java Applications [article]
- Tuning Solr JVM and Container [article]
- So, what is Play? [article]
article-image-hello-c-welcome-net-core
Packt
11 Jan 2017
10 min read
Save for later

Hello, C#! Welcome, .NET Core!

In this article by Mark J. Price, author of the book C# 7 and .NET Core: Modern Cross-Platform Development - Second Edition, we will discuss setting up your development environment and understanding the similarities and differences between .NET Core, .NET Framework, .NET Standard Library, and .NET Native. Most people learn complex topics by imitation and repetition rather than by reading a detailed explanation of theory, so I will not explain every keyword and step.

This article covers the following topics:
- Setting up your development environment
- Understanding .NET

(For more resources related to this topic, see here.)

Setting up your development environment

Before you start programming, you will need to set up your Integrated Development Environment (IDE), which includes a code editor for C#. The best IDE to choose is Microsoft Visual Studio 2017, but it only runs on the Windows operating system. To develop on alternative operating systems such as macOS and Linux, a good choice of IDE is Microsoft Visual Studio Code.

Using alternative C# IDEs

There are alternative IDEs for C#, for example, MonoDevelop and JetBrains Project Rider. They each have versions available for Windows, Linux, and macOS, allowing you to write code on one operating system and deploy to the same or a different one.
- For the MonoDevelop IDE, visit http://www.monodevelop.com/
- For JetBrains Project Rider, visit https://www.jetbrains.com/rider/

Cloud9 is a web browser-based IDE, so it's even more cross-platform than the others. Here is the link: https://c9.io/web/sign-up/free

Linux and Docker are popular server host platforms because they are relatively lightweight and more cost-effectively scalable when compared to operating system platforms that are aimed more at end users, such as Windows and macOS.

Using Visual Studio 2017 on Windows 10

You can use Windows 7 or later to run code, but you will have a better experience if you use Windows 10. If you don't have Windows 10, then you can create a virtual machine (VM) to use for development. You can choose any cloud provider, but Microsoft Azure has preconfigured VMs that include properly licensed Windows 10 and Visual Studio 2017. You only pay for the minutes your VM is running, so it is a way for users of Linux, macOS, and older Windows versions to have all the benefits of using Visual Studio 2017.

Since October 2014, Microsoft has made a professional-quality edition of Visual Studio available to everyone for free. It is called the Community Edition. Microsoft has combined all its free developer offerings in a program called Visual Studio Dev Essentials. This includes the Community Edition, the free level of Visual Studio Team Services, Azure credits for test and development, and free training from Pluralsight, Wintellect, and Xamarin.

Installing Microsoft Visual Studio 2017

Download and install Microsoft Visual Studio Community 2017 or later: https://www.visualstudio.com/vs/visual-studio-2017/

Choosing workloads

On the Workloads tab, choose the following:
- Universal Windows Platform development
- .NET desktop development
- Web development
- Azure development
- .NET Core and Docker development

On the Individual components tab, choose the following:
- Git for Windows
- GitHub extension for Visual Studio

Click Install. You can choose to install everything if you want support for languages such as C++, Python, and R.

Completing the installation

Wait for the software to download and install. When the installation is complete, click Launch.
While you wait for Visual Studio 2017 to install, you can jump to the Understanding .NET section in this article.

Signing in to Visual Studio

The first time that you run Visual Studio 2017, you will be prompted to sign in. If you have a Microsoft account, for example, a Hotmail, MSN, Live, or Outlook e-mail address, you can use that account. If you don't, then register for a new one at the following link: https://signup.live.com/

You will see the Visual Studio user interface with the Start Page open in the central area. Like most Windows desktop applications, Visual Studio has a menu bar, a toolbar for common commands, and a status bar at the bottom. On the right is the Solution Explorer window that will list your open projects. To have quick access to Visual Studio in the future, right-click on its entry in the Windows taskbar and select Pin this program to taskbar.

Using older versions of Visual Studio

The free Community Edition has been available since Visual Studio 2013 with Update 4. If you want to use a free version of Visual Studio older than 2013, then you can use one of the more limited Express editions. A lot of the code in this book will work with older versions if you bear in mind when the following features were introduced:

Year   C#   Features
2005   2    Generics with <T>
2008   3    Lambda expressions with => and manipulating sequences with LINQ (from, in, where, orderby, ascending, descending, select, group, into)
2010   4    Dynamic typing with dynamic and multithreading with Task
2012   5    Simplifying multithreading with async and await
2015   6    string interpolation with $"" and importing static types with using static
2017   7    Tuples (with deconstruction), patterns, out variables, literal improvements

Understanding .NET

.NET Framework, .NET Core, .NET Standard Library, and .NET Native are related and overlapping platforms for developers to build applications and services upon.

Understanding the .NET Framework platform

Microsoft's .NET Framework is a development platform that includes a Common Language Runtime (CLR) that manages the execution of code and a rich library of classes for building applications. Microsoft designed the .NET Framework to have the possibility of being cross-platform, but Microsoft put their implementation effort into making it work best with Windows. Practically speaking, the .NET Framework is Windows-only.

Understanding the Mono and Xamarin projects

Third parties developed a cross-platform .NET implementation named the Mono project (http://www.mono-project.com/). Mono is cross-platform, but it fell well behind the official implementation of the .NET Framework. It has found a niche as the foundation of the Xamarin mobile platform. Microsoft purchased Xamarin and now includes what used to be an expensive product for free with Visual Studio 2017. Microsoft has renamed the Xamarin Studio development tool to Visual Studio for the Mac. Xamarin is targeted at mobile development and building cloud services to support mobile apps.

Understanding the .NET Core platform

Today, we live in a truly cross-platform world. Modern mobile and cloud development have made Windows a much less important operating system. So, Microsoft has been working on an effort to decouple the .NET Framework from its close ties with Windows. While rewriting .NET to be truly cross-platform, Microsoft has taken the opportunity to refactor .NET to remove major parts that are no longer considered core.
This new product is branded as the .NET Core, which includes a cross-platform implementation of the CLR known as CoreCLR and a streamlined library of classes known as CoreFX.

Streamlining .NET

.NET Core is much smaller than the current version of the .NET Framework because a lot has been removed. For example, Windows Forms and Windows Presentation Foundation (WPF) can be used to build graphical user interface (GUI) applications, but they are tightly bound to Windows, so they have been removed from the .NET Core. The latest technology for building Windows apps is the Universal Windows Platform (UWP). ASP.NET Web Forms and Windows Communication Foundation (WCF) are old web application and service technologies that fewer developers choose to use today, so they have also been removed from the .NET Core. Instead, developers prefer to use ASP.NET MVC and ASP.NET Web API. These two technologies have been refactored and combined into a new product that runs on the .NET Core, named ASP.NET Core.

The Entity Framework (EF) 6.x is an object-relational mapping technology for working with data stored in relational databases such as Oracle and Microsoft SQL Server. It has gained baggage over the years, so the cross-platform version has been slimmed down and named Entity Framework Core.

Some data types in .NET that are included with both the .NET Framework and the .NET Core have been simplified by removing some members. For example, in the .NET Framework, the File class has both a Close and a Dispose method, and either can be used to release the file resources. In .NET Core, there is only the Dispose method. This reduces the memory footprint of the assembly and simplifies the API you have to learn.

The .NET Framework 4.6 is about 200 MB. The .NET Core is about 11 MB. Eventually, the .NET Core may grow to a similarly large size. Microsoft's goal is not to make the .NET Core smaller than the .NET Framework. The goal is to componentize .NET Core to support modern technologies and to have fewer dependencies, so that deployment requires only those components that your application really needs.

Understanding the .NET Standard

The situation with .NET today is that there are three forked .NET platforms, all controlled by Microsoft: .NET Framework, Xamarin, and .NET Core. Each has different strengths and weaknesses. This has led to the problem that a developer must learn three platforms, each with annoying quirks and limitations. So, Microsoft is working on defining the .NET Standard 2.0: a set of APIs that all .NET platforms must implement. Today, in 2016, there is the .NET Standard 1.6, but only .NET Core 1.0 supports it; .NET Framework and Xamarin do not! .NET Standard 2.0 will be implemented by .NET Framework, .NET Core, and Xamarin. For .NET Core, this will add many of the missing APIs that developers need to port old code written for .NET Framework to the new cross-platform .NET Core. .NET Standard 2.0 will probably be released towards the end of 2017, so I hope to write a third edition of this book for when that's finally released.

The future of .NET

The .NET Standard 2.0 is the near future of .NET, and it will make it much easier for developers to share code between any flavor of .NET, but we are not there yet. For cross-platform development, .NET Core is a great start, but it will take another version or two to become as mature as the current version of the .NET Framework.
This book will focus on the .NET Core, but will use the .NET Framework when important or useful features have not (yet) been implemented in the .NET Core.

Understanding the .NET Native compiler

Another .NET initiative is the .NET Native compiler. This compiles C# code to native CPU instructions ahead-of-time (AoT), rather than using the CLR to compile IL just-in-time (JIT) to native code later. The .NET Native compiler improves execution speed and reduces the memory footprint for applications. It supports the following:
- UWP apps for Windows 10, Windows 10 Mobile, Xbox One, HoloLens, and Internet of Things (IoT) devices such as Raspberry Pi
- Server-side web development with ASP.NET Core
- Console applications for use on the command line

Comparing .NET technologies

The following table summarizes and compares the .NET technologies:

Platform         Feature set                              C# compiles to             Host OSes
.NET Framework   Mature and extensive                     IL executed by a runtime   Windows only
Xamarin          Mature and limited to mobile features    IL executed by a runtime   iOS, Android, Windows Mobile
.NET Core        Brand-new and somewhat limited           IL executed by a runtime   Windows, Linux, macOS, Docker
.NET Native      Brand-new and very limited               Native code                Windows, Linux, macOS, Docker

Summary

In this article, we have learned how to set up the development environment and discussed the .NET technologies in detail.

Resources for Article:
Further resources on this subject:
- Introduction to C# and .NET [article]
- Reactive Programming with C# [article]
- Functional Programming in C# [article]

article-image-r-statistical-package-interfacing-python
Janu Verma
17 Nov 2016
8 min read
Save for later

The R Statistical Package Interfacing with Python

One of my coding hobbies is to explore different Python packages and libraries. In this post, I'll talk about the package rpy2, which is used to call R inside Python. Being an avid user of R and a huge supporter of R graphical packages, I had always desired to call R inside my Python code to be able to produce beautiful visualizations. The R framework offers machinery for a variety of statistical and data mining tasks. Let's review the basics of R before we delve into R-Python interfacing.

R is a statistical language which is free, is open source, and has comprehensive support for various statistical, data mining, and visualization tasks. Quick-R describes it as: "R is an elegant and comprehensive statistical and graphical programming language." R is one of the fastest growing languages, mainly due to the surge of interest in statistical learning and data science. The Data Science Specialization on Coursera has all of its courses taught in R. There are R packages for machine learning, graphics, text mining, bioinformatics, topic modeling, interactive visualizations, markdown, and many others. In this post, I'll give a quick introduction to R. The motivation is to acquire some knowledge of R to be able to follow the discussion on R-Python interfacing.

Installing R

R can be downloaded from one of the Comprehensive R Archive Network (CRAN) mirror sites.

Running R

To run R interactively on the command line, type R. Alternatively, launch the standard GUI (which should have been included in the download) and type R code in it. RStudio is the most popular IDE for R. It is recommended, though not required, to install RStudio and run R on it. To write a file with R code, create a file with the .r extension (for example, myFirstCode.r) and run the code by typing the following on the terminal:

Rscript file.r

Basics of R

The most fundamental data structure in R is a vector; actually, everything in R is a vector (even numbers are 1-dimensional vectors). This is one of the strangest things about R. Vectors contain elements of the same type. A vector is created by using the c() function:

a = c(1,2,5,9,11)
a
[1]  1  2  5  9 11

strings = c("aa", "apple", "beta", "down")
strings
[1] "aa"    "apple" "beta"  "down"

The elements in a vector are indexed, but the indexing starts at 1 instead of 0, as in most major languages (for example, Python).

strings[1]
[1] "aa"

The fact that everything in R is a vector and that the indexing starts at 1 are the main reasons for people's initial frustration with R (I forget this all the time).

Data Frames

A lot of R packages expect data as a data frame, which is essentially a matrix whose columns can be accessed by name. The columns can be of different types. Data frames are useful outside of R also. The Python package Pandas was written primarily to implement data frames and to do analysis on them. In R, data frames are created (from vectors) as follows:

students = c("Anne", "Bret", "Carl", "Daron", "Emily")
scores = c(7,3,4,9,8)
grades = c('B', 'D', 'C', 'A', 'A')
results = data.frame(students, scores, grades)
results
  students scores grades
1     Anne      7      B
2     Bret      3      D
3     Carl      4      C
4    Daron      9      A
5    Emily      8      A

The elements of a data frame can be accessed as:

results$students
[1] Anne  Bret  Carl  Daron Emily
Levels: Anne Bret Carl Daron Emily

This gives a vector, the elements of which can be called by indexing.

results$students[1]
[1] Anne
Levels: Anne Bret Carl Daron Emily

Reading Files

Most of the time the data is given as a comma-separated values (csv) file or a tab-separated values (tsv) file.
We will see how to read a csv/tsv file in R and create a data frame from it. (Aside: The datasets in most Kaggle competitions are given as csv files, and we are required to do machine learning on them. In Python, one creates a pandas data frame or a numpy array from this csv file.)

In R, we use the read.csv or read.table command to load a csv file into memory, for example, for the Titanic competition on Kaggle:

train_all <- read.csv("train.csv", header=TRUE)
train <- data.frame(survived=train_all$Survived, age=train_all$Age, fare=train_all$Fare, pclass=train_all$Pclass)

Similarly, a tsv file can be loaded as:

data <- read.csv("file.tsv", header=TRUE, sep="\t")

Thus, given a csv/tsv file with or without headers, we can read it using the read.csv function and create a data frame using data.frame(vector_1, vector_2, ... vector_n). This should be enough to start exploring R packages. Another command that is very useful in R is head(), which is similar to the less command on Unix.

rpy2

First things first, we need to have both Python and R installed. Then install rpy2 from the Python Package Index (PyPI). To do this, simply type the following on the command line:

pip install rpy2

We will use the high-level interface to R, the robjects subpackage of rpy2.

import rpy2.robjects as ro

We can pass commands to the R session by putting the R commands in the ro.r() method as strings. Recall that everything in R is a vector. Let's create a vector using robjects:

ro.r('x=c(2,4,6,8)')
print(ro.r('x'))
[1] 2 4 6 8

Keep in mind that though x is an R object (vector), ro.r('x') is a Python object (an rpy2 object). This can be checked as follows:

type(ro.r('x'))
<class 'rpy2.robjects.vectors.FloatVector'>

The most important data types in R are data frames, which are essentially matrices. We can create a data frame using rpy2:

ro.r('x=c(2,4,6,8)')
ro.r('y=c(4,8,12,16)')
ro.r('rdf=data.frame(x,y)')

This creates an R data frame, rdf. If we want to manipulate this data frame using Python, we need to convert it to a Python object. We will convert the R data frame to a pandas data frame. The Python package pandas contains efficient implementations of data frame objects in Python.

import pandas.rpy.common as com
df = com.load_data('rdf')
print type(df)
<class 'pandas.core.frame.DataFrame'>
df.x = 2*df.x

Here we have doubled each of the elements of the x vector in the data frame df. But df is a Python object, which we can convert back to an R data frame using pandas as:

rdf = com.convert_to_r_dataframe(df)
print type(rdf)
<class 'rpy2.robjects.vectors.DataFrame'>

Let's use the plotting machinery of R, which is the main purpose of studying rpy2:

ro.r('plot(x,y)')

rpy2 not only gives us access to R data types, it lets us import R packages as well (given that these packages are installed in R) and use them for analysis. Here we will build a linear model on x and y using the R package stats:

from rpy2.robjects.packages import importr
stats = importr('stats')
base = importr('base')

fit = stats.lm('y ~ x', data=rdf)
print(base.summary(fit))

We get the following results:

Residuals:
1 2 3 4
0 0 0 0

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)        0          0      NA       NA
x                  2          0     Inf   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0 on 2 degrees of freedom
Multiple R-squared: 1, Adjusted R-squared: 1
F-statistic: Inf on 1 and 2 DF, p-value: < 2.2e-16

R programmers will immediately recognize the output as coming from applying the linear model function lm() to the data.
I'll end this discussion with an example using my favorite R package, ggplot2. I have written a lot of posts on data visualization using ggplot2. The following example is borrowed from the official documentation of rpy2.

import math, datetime
import rpy2.robjects.lib.ggplot2 as ggplot2
import rpy2.robjects as ro
from rpy2.robjects.packages import importr

base = importr('base')
datasets = importr('datasets')
mtcars = datasets.data.fetch('mtcars')['mtcars']

pp = (ggplot2.ggplot(mtcars) +
      ggplot2.aes_string(x='wt', y='mpg', col='factor(cyl)') +
      ggplot2.geom_point() +
      ggplot2.geom_smooth(ggplot2.aes_string(group='cyl'), method='lm'))
pp.plot()

Author: Janu Verma is a researcher in the IBM T.J. Watson Research Center, New York. His research interests are in mathematics, machine learning, information visualization, computational biology, and healthcare analytics. He has held research positions at Cornell University, Kansas State University, Tata Institute of Fundamental Research, Indian Institute of Science, and the Indian Statistical Institute. He has written papers for IEEE Vis, KDD, International Conference on HealthCare Informatics, Computer Graphics and Applications, Nature Genetics, IEEE Sensors Journals and so on. His current focus is on the development of visual analytics systems for prediction and understanding. He advises start-ups and other companies on data science and machine learning in the Delhi-NCR area. He can be found at Here.