
How-To Tutorials - Server-Side Web Development

406 Articles

Building Efficient Web APIs with .NET 8 and Visual Studio 2022

Jonathan R. Danylko
30 Oct 2024
15 min read
This article is an excerpt from the book ASP.NET 8 Best Practices, by Jonathan R. Danylko. With .NET 8.0 now in LTS (Long-Term Support), best practices are becoming harder to find as the technology continues to evolve. This book will guide you through coding practices and various aspects of software development.

Introduction

In the ever-evolving landscape of web development, .NET 8 has emerged as a game-changer, especially in the realm of Web APIs. With new features and enhancements, .NET 8 prioritizes the ease and efficiency of building Web APIs, supported by robust tools in Visual Studio 2022. This chapter explores the innovations in .NET 8, focusing on creating and testing Web APIs seamlessly. From leveraging minimal APIs to utilizing Visual Studio's new features, developers can now build powerful REST-based services with simplicity and speed. We'll guide you through the process, demonstrating how to create a minimal API and highlighting the benefits of this approach.

Technical requirements

In .NET 8, Web APIs take a front seat. Visual Studio has added new features to make Web APIs easier to build and test. For this chapter, we recommend using Visual Studio 2022, but the only requirement to view the GitHub repository is a simple text editor. The code for Chapter 09 is located in Packt Publishing's GitHub repository, found at https://github.com/PacktPublishing/ASP.NET-Core-8-Best-Practices.

Creating APIs quickly

With .NET 8, APIs are integrated into the framework, making them easier to create, test, and document. In this section, we'll learn a quick and easy way to create a minimal API using Visual Studio 2022 and walk through the code it generates. We'll also learn why minimal APIs are the best approach to building REST-based services.

Using Visual Studio

One of the features of .NET 8 is the ability to create minimal REST APIs extremely fast. One way is to use the dotnet command-line tool and the other way is to use Visual Studio. To do so, follow these steps:

1. Open Visual Studio 2022 and create an ASP.NET Core Web API project.
2. After selecting the directory for the project, click Next.
3. Under the project options, make the following changes:
   - Uncheck the Use Controllers option to use minimal APIs
   - Check Enable OpenAPI support to include support for API documentation using Swagger:

   Figure 9.1 – Options for a web minimal API project

4. Click Create.

That's it – we have a simple API! It may not be much of one, but it's still a complete API with Swagger documentation. Swagger is a tool for creating documentation for APIs and implementing the OpenAPI specification, whereas Swashbuckle is a NuGet package that uses Swagger for implementing Microsoft APIs.

If we look at the project, there's a single file called Program.cs. Opening Program.cs will show the entire application. This is one of the strong points of .NET – the ability to create a scaffolded REST API relatively quickly:

    var builder = WebApplication.CreateBuilder(args);

    // Add services to the container.
    // Learn more about configuring Swagger/OpenAPI at https://aka.ms/aspnetcore/swashbuckle
    builder.Services.AddEndpointsApiExplorer();
    builder.Services.AddSwaggerGen();

    var app = builder.Build();

    // Configure the HTTP request pipeline.
    if (app.Environment.IsDevelopment())
    {
        app.UseSwagger();
        app.UseSwaggerUI();
    }

    app.UseHttpsRedirection();

    var summaries = new[]
    {
        "Freezing", "Bracing", "Chilly", "Cool", "Mild",
        "Warm", "Balmy", "Hot", "Sweltering", "Scorching"
    };

    app.MapGet("/weatherforecast", () =>
    {
        var forecast = Enumerable.Range(1, 5).Select(index =>
            new WeatherForecast
            (
                DateOnly.FromDateTime(DateTime.Now.AddDays(index)),
                Random.Shared.Next(-20, 55),
                summaries[Random.Shared.Next(summaries.Length)]
            ))
            .ToArray();
        return forecast;
    })
    .WithName("GetWeatherForecast")
    .WithOpenApi();

    app.Run();

    internal record WeatherForecast(DateOnly Date, int TemperatureC, string? Summary)
    {
        public int TemperatureF => 32 + (int)(TemperatureC / 0.5556);
    }

In the preceding code, we created our "application" through the .CreateBuilder() method. We also added the EndpointsApiExplorer and SwaggerGen services. EndpointsApiExplorer enables the developer to view all endpoints in Visual Studio, which we'll cover later. The SwaggerGen service, on the other hand, creates the documentation for the API when accessed through the browser. The next line creates our application instance using the .Build() method. Once we have our app instance and we are in development mode, we can add Swagger and the Swagger UI. .UseHttpsRedirection() is meant to redirect to HTTPS when the protocol of a web page is HTTP, to make the API secure. The next line creates our GET weatherforecast route using .MapGet(). We added the .WithName() and .WithOpenApi() methods to identify the primary method to call and let .NET know it uses the OpenAPI standard, respectively. Finally, we called app.Run().

If we run the application, we will see the documented API, showing how to use our API and what's available. Running the application produces the following output:

Figure 9.2 – Screenshot of our documented Web API

If we call the /weatherforecast API, we see that we receive JSON back with a 200 HTTP status.

Figure 9.3 – Results of our /weatherforecast API

Think of this small API as middleware with API controllers combined into one compact file (Program.cs).

Why minimal APIs?

I consider minimal APIs to be a feature in .NET 8, even though it's a language concept. If the application is extremely large, adding minimal APIs should be an appealing feature in four ways:

- Self-contained: Simple API functionality inside one file is easy to follow for other developers
- Performance: Since we aren't using controllers, the MVC overhead isn't necessary when using these APIs
- Cross-platform: With .NET, APIs can now be deployed on any platform
- Self-documenting: While we can add Swashbuckle to other APIs, it also builds the documentation for minimal APIs

Moving forward, we'll take these minimal APIs and start looking at Visual Studio's testing capabilities.

Conclusion

In conclusion, .NET 8 has revolutionized the process of building Web APIs by integrating them more deeply into the framework, making it easier than ever to create, test, and document APIs. By harnessing the power of Visual Studio 2022, developers can quickly set up minimal APIs, offering a streamlined and efficient approach to building REST-based services. The advantages of minimal APIs—being self-contained, performant, cross-platform, and self-documenting—make them an invaluable tool in a developer's arsenal. As we continue to explore the capabilities of .NET 8, the potential for creating robust and scalable web applications is limitless, paving the way for innovative and efficient software solutions.
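Because the generated service is plain HTTP/JSON, any client can consume the /weatherforecast endpoint the article describes. The following is a minimal, hedged sketch of such a call from Java's built-in HTTP client; the localhost URL and port are hypothetical (Visual Studio assigns the port when the project is created), and the JVM must trust the ASP.NET Core development certificate for the HTTPS call to succeed.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class WeatherForecastClient {
        public static void main(String[] args) throws Exception {
            // Hypothetical local URL; replace the port with the one assigned to your project.
            HttpRequest request = HttpRequest.newBuilder(
                    URI.create("https://localhost:7100/weatherforecast"))
                    .header("Accept", "application/json")
                    .GET()
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());

            // On a successful call, the minimal API returns 200 with the forecast as JSON.
            System.out.println(response.statusCode());
            System.out.println(response.body());
        }
    }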
Author Bio

Jonathan "JD" Danylko is an award-winning, full-stack ASP.NET architect. He's used ASP.NET as his primary way to build websites since 2002 and, before that, Classic ASP. Jonathan contributes to his blog (DanylkoWeb.com) on a weekly basis, has built a custom CMS, is a founder of Tuxboard (an open-source ASP.NET dashboard library), has been on various podcasts, and has guest posted on the C# Advent Calendar for 6 years. Jonathan has worked in various industries for small, medium, and Fortune 100 companies, but currently works as an Architect at Insight Enterprise. The best way to contact Jonathan is through GitHub, LinkedIn, Twitter, email, or through the website.


How to Install Ruby on Rails: A Comprehensive Guide for macOS, Windows, and Linux

Bernard Pineda
25 Oct 2024
10 min read
This article is an excerpt from the book, From PHP to Ruby on Rails, by Bernard Pineda. This book will help you adopt the Ruby mindset and get to grips with Ruby-related concepts. You'll learn about setting up your local environment, Ruby syntax, popular frameworks, and more. A language-agnostic approach will help you avoid common pitfalls and start integrating Ruby into your projects.

Introduction

Just like the libraries we've seen so far, Rails is an open source gem. It behaves a little differently than the gems we've seen so far as it uses many dependencies and can generate code examples, but at the end of the day, it's still a gem. This means that we can either install it by itself, or we can include it in a Gemfile. For this section, we will have to divide the process into three separate sections – macOS installation, Windows installation, and Linux installation – as each operating system behaves differently.

Installing Ruby on Rails on macOS

The first step of setting up our local environment is to install rbenv. For most Mac installations, brew will simplify this process. Let's get started with the steps:

1. Let's open a shell and run the following command:

   brew install rbenv

2. This should install the rbenv program. Now, you'll need to add the following line to your bash profile:

   eval "$(rbenv init -)"

3. Once you've added this line to your profile, you should activate the change by either opening a new shell or running the following command:

   source ~/.bash_profile

   Note that this command may differ if you're using another shell, such as zsh or fish.

4. With rbenv installed, we need to install Ruby 2.6.10 with the following command:

   rbenv install 2.6.10

5. Once Ruby 2.6.10 has been installed, we must set the default Ruby version with the following command:

   rbenv global 2.6.10

6. Now, we need to install the program to manage gems, called bundler. Let's install it with the following command:

   gem install bundler

With that, our environment is ready for the next steps in this chapter. If you wish to see more details about this installation, please refer to the following web page: https://www.digitalocean.com/community/tutorials/how-to-install-ruby-on-rails-with-rbenv-on-macos.

Installing Ruby on Rails on Windows

Follow these steps to install Ruby on Rails on Windows:

1. To set up our local environment, first, we must install Git for Windows. We can download the package from https://gitforwindows.org/. Once downloaded, we can run the installer; it should open the installer application:

   Figure 7.1 – Git installer

   You can safely accept the default options unless you want to change any of the specific behavior from Git. At the end of the installation process, you may just deselect all the options of the wizard and move on to the next step:

   Figure 7.2 – Git finalized installation

2. We will also need the Git SDK installed for some dependencies that Ruby on Rails requires. We can get the installer from https://github.com/git-for-windows/build-extra/releases/tag/git-sdk-1.0.8. Be careful and select the correct option for your platform (32 or 64 bits). In my case, I had to choose 64 bits, so I downloaded the git-sdk-installer-1.0.8.0-64.7z.exe binary:

   Figure 7.3 – Git SDK download

   Once this package has been downloaded, run it; we will be asked where we want the Git SDK to be installed. The default option is fine (C:\git-sdk-64):

   Figure 7.4 – Git SDK installation location

   This package might take a while to complete as it has to download other additional packages, but it will do so on its own. Please be patient.

3. Once this package has finished installing the SDK, it will open a Git Bash console, which looks similar to Windows PowerShell. We can close this Git Bash console window and open another Windows PowerShell. Once we have the new window open, we must type the following command:

   new-item -type file -path $profile -force

   This command will help us create a Windows PowerShell profile, which will allow us to execute commands every time we open a Windows PowerShell console. Once we've run the previous command, we may also close the Windows PowerShell window and move on to the next step.

4. At this point, we will install rbenv, which allows us to install multiple versions of Ruby. However, this program wasn't created for Windows, so its installation is a little different than in other operating systems. Let's open a browser and go to the rbenv for Windows web page: https://github.com/ccmywish/rbenv-for-windows. On that page, we will find instructions on how to install rbenv, which we will do now.

5. Let's open a new Windows PowerShell and type the following command:

   $env:RBENV_ROOT = "C:\Ruby-on-Windows"

   This command will set a special environment variable that will be used for the rbenv installation.

6. Once we've run this command, we must download the rest of the required files with the following command:

   iwr -useb "https://github.com/ccmywish/rbenv-for-windows/raw/main/tools/install.ps1" | iex

7. Once this command has finished downloading the files from GitHub, modify the user's profile with the following command from within the Windows PowerShell:

   notepad $profile

   This will open the Notepad application and the profile we previously set. On the rbenv-for-windows web page, we can see what the content of the file should be. Let's add it with Notepad so that the profile file now looks like this:

   $env:RBENV_ROOT = "C:\Ruby-on-Windows"
   & "$env:RBENV_ROOT\rbenv\bin\rbenv.ps1" init

8. Save and close Notepad, and close all Windows PowerShell windows that we may have open. We should open a new Windows PowerShell to make these changes take effect. As this is the first time rbenv is running, our console will automatically install a default Ruby version. This might take a while and will put our patience to the test. Once the process has finished, we should see an output similar to this one:

   Figure 7.5 – rbenv post-installation script

9. Now, we are ready to install other versions of Ruby. For Ruby on Rails 5, we will install Ruby 2.6.10. Let's install it by running the following command on the same Windows PowerShell window that we just opened:

   rbenv install 2.6.10

   The program will ask us whether we want to install the Lite version or the Full version. Choose the Full version. Once again, this might take a while, so please be patient.

10. Once this command has finished running, we must set this Ruby version for our whole system. We can do this by running the following command:

    rbenv global 2.6.10

11. To confirm that this version of Ruby has been installed and enabled, use the following command:

    ruby --version

    This should give us the following output:

    ruby 2.6.10-1 (set by C:\Ruby-on-Windows\global.txt)

12. Ruby needs a program called bundler to manage all the dependencies on our system. So, let's install this program with the following command:

    gem install bundler

13. Once this gem has been installed, we must update the RubyGems system with the following command:

    gem update --system 3.2.3

    This command will also take a while to compute, but once it's finished, we will be ready to use Ruby on Rails on Windows.

Next, let's see the steps for installing Ruby on Rails on Linux.

Installing Ruby on Rails on Linux

For Ubuntu and Debian Linux distributions, we must also install rbenv and the dependencies necessary for Ruby on Rails to run correctly:

1. Let's start by opening a terminal and running the following command:

   sudo apt update

2. Once this command has finished updating apt, we must install our dependencies for Ruby, Ruby on Rails, and some gems that require compiling. We'll do so by running the following command:

   sudo apt install git curl libssl-dev libreadline-dev zlib1g-dev autoconf bison build-essential libyaml-dev libreadline-dev libncurses5-dev libffi-dev libgdbm-dev pkg-config sqlite3 nodejs

   This command might take a while.

3. Once it has finished running, we can install rbenv with the following command:

   curl -fsSL https://github.com/rbenv/rbenv-installer/raw/HEAD/bin/rbenv-installer | bash

4. We should add rbenv to our $PATH. Let's do so by running the following command:

   echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> ~/.bashrc

5. Now, let's add the initialize command for rbenv to our bash profile with the following command:

   echo 'eval "$(rbenv init -)"' >> ~/.bashrc

6. Next, run the bash profile with the following command:

   source ~/.bashrc

7. This command will (among other things) make the rbenv executable available to us. Now, we can install Ruby 2.6.10 on our system with the following command:

   rbenv install 2.6.10

   This command might take a little while as it installs openssl, and that process will take some time.

8. Once this command has finished installing Ruby 2.6.10, we need to set it as the default Ruby version for the whole machine. We can do so by running the following command:

   rbenv global 2.6.10

9. We can confirm that this version of Ruby has been installed by running the following command:

   ruby --version

   This will result in the following output:

   ruby 2.6.10p210 (2022-04-12 revision 67958) [x86_64-linux]

10. Ruby needs a program called bundler to manage all the dependencies on our system. So, let's install this program with the following command:

    gem install bundler

11. Once this gem has been installed, we can update the RubyGems system with the following command:

    gem update --system 3.2.3

    This command will also take a while to compute, but once it's finished, we will be ready to use Ruby on Rails on Linux.

For other Linux distributions and other operating systems, please refer to the official Ruby-lang page: https://www.ruby-lang.org/en/documentation/installation/.

Conclusion

In conclusion, installing Ruby on Rails varies across operating systems, but the general process involves setting up a version manager like rbenv, installing Ruby, and then using Bundler to manage gems. Whether you're on macOS, Windows, or Linux, each system has specific steps to ensure Rails and its dependencies run smoothly. By following the detailed instructions for your platform, you'll have your development environment ready for Rails in no time. For further guidance and platform-specific nuances, refer to the official documentation and resources linked throughout this guide.

Author Bio

Bernard Pineda is a seasoned developer with 20 years of web development experience.
Proficient in PHP, Ruby, Python, and other backend technologies, he has taught PHP and PHP-based frameworks through video courses on platforms like LinkedIn Learning. His extensive work with Ruby and Ruby on Rails, along with his curiosity about frontend development and game development, brings a diverse perspective to this book. Currently working as a Site Reliability Engineer in Silicon Valley, Bernard is always seeking new adventures.


Wasmer's first Postgres extension to run WebAssembly is here!

Vincy Davis
30 Aug 2019
2 min read
Wasmer, the WebAssembly runtime, has been successfully embedded in many languages like Rust, Python, Ruby, PHP, and Go. Yesterday, Ivan Enderlin, a PhD Computer Scientist at Wasmer, announced a new Postgres extension, version 0.1, for WebAssembly. Since the project is still under heavy development, the extension only supports integers (on 32 and 64 bits) and works on Postgres 10 only. It does not yet support strings, records, views, or any other Postgres types.

https://twitter.com/WasmWeekly/status/1167330724787171334

The official post states, "The goal is to gather a community and to design a pragmatic API together, discover the expectations, how developers would use this new technology inside a database engine."

The Postgres extension provides two foreign data wrappers, wasm.instances and wasm.exported_functions, in the wasm foreign schema. wasm.instances is a table with id and wasm_file columns. wasm.exported_functions is a table with instance_id, name, inputs, and outputs columns. Enderlin says that this information is enough for the wasm Postgres extension "to generate the SQL function to call the WebAssembly exported functions."

Read Also: Wasmer introduces WebAssembly Interfaces for validating the imports and exports of a Wasm module

The Wasmer team ran a basic benchmark, computing the Fibonacci sequence, to compare the execution time between WebAssembly and PL/pgSQL. The benchmark was run on a MacBook Pro 15" from 2016, 2.9GHz Core i7 with 16GB of memory.

Image Source: Wasmer

The result was that "Postgres WebAssembly extension is faster to run numeric computations. The WebAssembly approach scales pretty well compared to the PL/pgSQL approach, in this situation." The Wasmer team believes that though it is too soon to consider WebAssembly as an alternative to PL/pgSQL, the above result makes them hopeful that it can be explored more. To know more about the Postgres extension, check out its GitHub page.

5 barriers to learning and technology training for small software development teams
Mozilla CEO Chris Beard to step down by the end of 2019 after five years in the role
JavaScript will soon support optional chaining operator as its ECMAScript proposal reaches stage 3
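Returning to the extension itself: the wasm.instances and wasm.exported_functions tables described above are ordinary relations, so they can be inspected from any client. Below is a minimal, hedged sketch in Java (JDBC) that lists the exported WebAssembly functions; the connection URL and credentials are hypothetical placeholders, and it assumes a local Postgres 10 instance with the wasm extension installed and the PostgreSQL JDBC driver on the classpath.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class WasmCatalogQuery {
        public static void main(String[] args) throws SQLException {
            // Hypothetical connection details for a local Postgres 10 with the wasm extension.
            String url = "jdbc:postgresql://localhost:5432/postgres";
            try (Connection conn = DriverManager.getConnection(url, "postgres", "postgres");
                 Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery(
                         "SELECT instance_id, name, inputs, outputs FROM wasm.exported_functions")) {
                // Each row describes one function exported by a registered WebAssembly instance.
                while (rs.next()) {
                    System.out.printf("instance=%s function=%s inputs=%s outputs=%s%n",
                            rs.getString("instance_id"), rs.getString("name"),
                            rs.getString("inputs"), rs.getString("outputs"));
                }
            }
        }
    }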


Apple announces ‘WebKit Tracking Prevention Policy’ that considers web tracking as a security vulnerability

Bhagyashree R
19 Aug 2019
5 min read
Inspired by Mozilla's anti-tracking policy, Apple has announced its intention to implement the WebKit Tracking Prevention Policy in Safari, the details of which it shared last week. This policy outlines the types of tracking techniques that will be prevented in WebKit to ensure user privacy. The anti-tracking mitigations listed in this policy will be applied "universally to all websites, or based on algorithmic, on-device classification."

https://twitter.com/webkit/status/1161782001839607809

Web tracking is the collection of user data over multiple web pages and websites, which can be linked to individual users via a unique user identifier. All your previous interactions with any website could be recorded and recalled with the help of a tracking system like cookies. The data tracked includes the things you have searched for, the websites you have visited, the things you have clicked on, the movements of your mouse around a web page, and more. Organizations and companies rely heavily on web tracking to gain insight into their users' behavior and preferences. One of the main purposes of these insights is user profiling and targeted marketing. While this user tracking helps businesses, it can be pervasive and used for other sinister purposes. In the recent past, we have seen many companies, including big tech like Facebook and Google, involved in several scandals related to violating users' online privacy, for instance, Facebook's Cambridge Analytica scandal and Google's cookie case.

Apple aims to create "a healthy web ecosystem, with privacy by design"

The WebKit Tracking Prevention Policy will prevent several tracking techniques, including cross-site tracking, stateful tracking, covert stateful tracking, navigational tracking, fingerprinting, covert tracking, and other unknown techniques that do not fall under these categories. WebKit will limit the capability of using a tracking technique in case it is not possible to prevent it without any undue harm to the user. If this also does not help, users will be asked for their consent. Apple will treat any attempt to subvert the anti-tracking methods as a security vulnerability. "We treat circumvention of shipping anti-tracking measures with the same seriousness as an exploitation of security vulnerabilities," Apple wrote. It warns that it may add more restrictions, without prior notice, against parties who attempt to circumvent the tracking prevention methods. Apple further mentioned that there won't be any exception even if you have a valid use for a technique that is also used for tracking. The announcement reads, "But WebKit often has no technical means to distinguish valid uses from tracking, and doesn't know what the parties involved will do with the collected data, either now or in the future."

WebKit Tracking Prevention Policy's unintended impact

With the implementation of this policy, Apple warns of certain unintended repercussions as well. Among the possibly affected features are funding websites using targeted or personalized advertising, federated login using a third-party login provider, fraud prevention, and more. In cases of tradeoffs, WebKit will prioritize user benefits over current website practices. Apple promises to limit this unintended impact and might update the tracking prevention methods to permit certain use cases. In the future, it will also come up with new web technologies that will allow these practices without compromising user privacy, such as the Storage Access API and Privacy-Preserving Ad Click Attribution.
What users are saying about Apple's anti-tracking policy

At a time when there is increasing concern regarding online privacy, this policy comes as a blessing. Many users are appreciating this move, while some fear that it will affect some user-friendly features. In an ongoing discussion on Hacker News, a user commented, "The fact that this makes behavioral targeting even harder makes me very happy." Some others also believe that focusing on online tracking protection methods will give browsers an edge over Google's Chrome. A user said, "One advantage of Google's dominance and their business model being so reliant on tracking, is that it's become the moat for its competitors: investing energy into tracking protection is a good way for them to gain a competitive advantage over Google, since it's a feature that Google will not be able to copy. So as long as Google's competitors remain in business, we'll probably at least have some alternatives that take privacy seriously."

When asked about the added restrictions that will be applied if a party is found circumventing tracking prevention, a member of the WebKit team commented, "We're willing to do specifically targeted mitigations, but only if we have to. So far, nearly everything we've done has been universal or algorithmic. The one exception I know of was to delete tracking data that had already been planted by known circumventors, at the same time as the mitigation to stop anyone else from using that particular hole (HTTPS supercookies)."

Some users had questions about the features that will be impacted by the introduction of this policy. A user wrote, "While I like the sentiment, I hate that Safari drops cookies after a short period of non-use. I wind up having to re-login to sites constantly while Chrome does it automatically." Another user added, "So what is going to happen when Apple succeeds in making it impossible to make any money off advertisements shown to iOS users on the web? I'm currently imagining a future where publishers start to just redirect iOS traffic to install their app, where they can actually make money. Good news for the walled garden, I guess?"

Read Apple's official announcement to know more about the WebKit Tracking Prevention Policy.

Firefox Nightly now supports Encrypted Server Name Indication (ESNI) to prevent 3rd parties from tracking your browsing history
All about Browser Fingerprinting, the privacy nightmare that keeps web developers awake at night
Apple proposes a "privacy-focused" ad click attribution model for counting conversions without tracking users


Laracon US 2019 highlights: Laravel 6 release update, Laravel Vapor, and more

Bhagyashree R
26 Jul 2019
5 min read
Laracon US 2019, probably the biggest Laravel conference, wrapped up yesterday. Its creator, Taylor Otwell, kick-started the event by talking about the next major release, Laravel 6. He also showcased a project that he has been working on called Laravel Vapor, a full-featured serverless management and deployment dashboard for PHP/Laravel.

https://twitter.com/taylorotwell/status/1154168986180997125

This was a two-day event from July 24-25 hosted at Times Square, NYC. The event brings together people passionate about creating applications with Laravel, the open-source PHP web framework. Many exciting talks were hosted at this game-themed event. Evan You, the creator of Vue, was there presenting what's coming in Vue.js 3.0. Caleb Porzio, a developer at Tighten Co., showcased a Laravel framework named Livewire that enables you to build dynamic user interfaces with vanilla PHP. Keith Damiani, a Principal Programmer at Tighten, talked about graph database use cases. You can watch this highlights video compiled by Romega Digital to get a quick overview of the event:

https://www.youtube.com/watch?v=si8fHDPYFCo&feature=youtu.be

Laravel 6 coming next month

Since its birth, Laravel has followed a custom versioning system. It has been on 5.x release versions for the past four years now. The team has now decided to switch to a semantic versioning system. The framework currently stands at version 5.8, and instead of calling the new release 5.9, the team has decided to go with Laravel 6, which is scheduled for the next month. Otwell emphasized that they have decided to make this change to bring more consistency to the ecosystem, as all optional Laravel components such as Cashier, Dusk, Valet, and Socialite use semantic versioning. This does not mean that there will be any "paradigm shift" and developers will have to rewrite their stuff. "This does mean that any breaking change would necessitate a new version which means that the next version will be 7.0," he added.

With the new version comes new branding

Laravel gets a fresh look with every major release, ranging from updated logos to a redesigned website. Initially, this was a volunteer effort with different people pitching in to help give Laravel a fresh look. Now that Laravel has got some substantial backing, Otwell has worked with Focus Lab, one of the top digital design agencies in the US. They have together come up with a new logo and a brand new website. The website looks easy to navigate and also provides improved documentation to give developers a good reading experience.

Source: Laravel

Laravel Vapor, a robust serverless deployment platform for Laravel

After giving a brief on version 6 and the updated branding, Otwell showcased his new project named Laravel Vapor. Currently, developers use Forge for provisioning and deploying their PHP applications on DigitalOcean, Linode, AWS, and more. It provides painless Virtual Private Server (VPS) management. It is best suited for medium and small projects and performs well with basic load balancing. However, it does lack a few features that could have been helpful for building bigger projects, such as autoscaling. Also, developers have to worry about updating their operating systems and PHP versions. To address these limitations, Otwell created this deployment platform. Here are some of the advantages Laravel Vapor comes with:

- Better scalability: Otwell's demo showed that it can handle over half a million requests with an average response time of 12 ms.
- Facilitates collaboration: Vapor is built around teams. You can create as many teams as you require by just paying for one single plan.
- Fine-grained control: It gives you fine-grained control over what each team member can do. You can set what they can do across all the resources Vapor manages.
- A "vanity URL" for different environments: Vapor gives you a staging domain, which you can access with what Otwell calls a "vanity URL." It enables you to immediately access your applications with "a nice domain that you can share with your coworkers until you are ready to assign a custom domain," says Otwell.
- Environment metrics: Vapor provides many environment metrics that give you an overview of an application environment. These metrics include how many HTTP requests the application has received in the last 24 hours, how many CLI invocations there were, the average duration of those things, how much these cost on Lambda, and more.
- Logs: You can review and search your recent logs right from the Vapor UI. It also auto-updates when any new entry comes in the log.
- Databases: With Vapor, you can provision two types of databases: a fixed-size database and a serverless database. The fixed-size database is the one where you have to pick its specifications, like vCPU, RAM, etc. In the serverless one, you do not select these specifications; it automatically scales according to the demand.
- Caches: You can create Redis clusters right from the Vapor UI with as many nodes as you want. It supports the creation and management of elastic Redis cache clusters, which can be scaled without experiencing any downtime. You can attach them to any of the team's projects and use them with multiple projects at the same time.

To watch the entire demonstration by Otwell, check out this video:

https://www.youtube.com/watch?v=XsPeWjKAUt0&feature=youtu.be

Laravel 5.7 released with support for email verification, improved console testing
Building a Web Service with Laravel 5
Symfony leaves PHP-FIG, the framework interoperability group


4 key findings from The State of JavaScript 2018 developer survey

Prasad Ramesh
20 Nov 2018
4 min read
Three JavaScript developers surveyed over 20,000 JavaScript developers to find out what's happening within the language and its huge ecosystem. From usage to satisfaction to learning habits, the State of JavaScript 2018 report offered another valuable insight into a community that is still going strong, despite the landscape continuing to change. You can check out the results of the State of JavaScript 2018 survey in detail here, but keep reading to find out 4 things we found interesting about the State of JavaScript 2018 survey.

JavaScript developers love ES6 and TypeScript

ES6 and TypeScript were the most well received: 86.3% and 46.7% of developers respectively have used and would use these languages again. ClojureScript, Elm, and Flow, however, don't seem to pique many developers' interest these days (unsurprisingly).

React rules the front-end frameworks - Angular's popularity may be dwindling

There has been a big battle between a couple of frameworks on the front-end side of web development - namely between React, Vue, and Angular. The State of JavaScript 2018 survey suggests that React is winning out, with Vue in second position: 64.8% and 28.8% of developers said that they would use React and Vue.js respectively again. However, Vue is growing in popularity - 46.6% of respondents expressed an interest in learning it. The news wasn't great for Angular - 33.8% of respondents said that they wouldn't use Angular again. Vue is gaining popularity, while Ember and Polymer were less than well received, as more than 50% of the responses for both indicated no interest in learning them. Preact and Polymer, meanwhile, are perhaps still a little new on the scene: 28.1% and 18.5% of respondents had never even heard of these frameworks.

Vue.js 3.0 is ditching JavaScript for TypeScript. Learn more here.

Redux is the most used in the data layer - but JavaScript developers want to learn GraphQL

When it comes to data, Redux is the most popular library, with 47.2% of developers saying that they would use it again. GraphQL is second, with 20.4% of respondents vouching for it. But Redux shouldn't be complacent - 62.5% of developers also want to learn GraphQL. It looks like the Redux and GraphQL debate is going to continue well into 2019. What the consensus will be in 12 months' time is anyone's guess.

Why do React developers love Redux? Find out here.

Express.js popularity confirms Node.js as JavaScript's quiet hero

It was observed that there haven't been any major breakthroughs in this area in recent years. But that is, perhaps, a good thing when you consider the frantic pace of change in other areas of JavaScript. It probably also has a lot to do with the dominance of Node.js in this area. Express, a Node.js framework, is by far the most popular, with 64.7% of developers taking the survey saying they would use it again. Sadly, it appears Meteor is languishing despite its meteoric hype just a few years ago: 49.4% of developers had heard of it, but said they had no interest in learning it.

In conclusion: The landscape is becoming more clearly defined, but the JavaScript developer role is changing

A few years ago, the JavaScript ecosystem was chaotic and almost incoherent. Every week seemed to bring a new framework demanding your attention.
It looks, as we move towards the end of the decade, that things are a lot different now - React has established itself at the forefront of the front end, while TypeScript appears to have embedded itself within the ecosystem too. With GraphQL also generating interest, and competing with Redux, we're seeing a clear shift in what JavaScript developers are doing, and what they're being asked to do. As the stack expands, managing data sources and building for speed and scalability is now a problem right at the heart of JavaScript development, not just on its fringes.

What's the difference between OAuth 1.0 and OAuth 2.0?

Pavan Ramchandani
13 Jun 2018
11 min read
The OAuth protocol specifies a process for resource owners to authorize third-party applications to access their server resources without sharing their credentials. This tutorial will take you through understanding the OAuth protocol and introduce you to the offerings of OAuth 2.0 in a practical manner. This article is an excerpt from a book written by Balachandar Bogunuva Mohanram, titled RESTful Java Web Services, Second Edition.

Consider a scenario where Jane (the user of an application) wants to let an application access her private data, which is stored in a third-party service provider. Before OAuth 1.0 or other similar open source protocols, such as Google AuthSub and FlickrAuth, if Jane wanted to let a consumer service use her data stored on some third-party service provider, she would need to give her user credentials to the consumer service to access data from the third-party service via appropriate service calls. Instead of Jane passing her login information to multiple consumer applications, OAuth 1.0 solves this problem by letting the consumer applications request authorization from the service provider on Jane's behalf. Jane does not divulge her login information; authorization is granted by the service provider, where both her data and credentials are stored. The consumer application (or consumer service) only receives an authorization token that can be used to access data from the service provider. Note that the user (Jane) has full control of the transaction and can invalidate the authorization token at any time during the signup process, or even after the two services have been used together.

The typical example used to explain OAuth 1.0 is that of a service provider that stores pictures on the web (let's call the service StorageInc) and a fictional consumer service that is a picture printing service (let's call the service PrintInc). On its own, PrintInc is a full-blown web service, but it does not offer picture storage; its business is only printing pictures. For convenience, PrintInc has created a web service that lets its users download their pictures from StorageInc for printing. This is what happens when a user (the resource owner) decides to use PrintInc (the client application) to print his/her images stored in StorageInc (the service provider):

1. The user creates an account in PrintInc. Let's call the user Jane, to keep things simple.
2. PrintInc asks whether Jane wants to use her pictures stored in StorageInc and presents a link to get the authorization to download her pictures (the protected resources). Jane is the resource owner here.
3. Jane decides to let PrintInc connect to StorageInc on her behalf and clicks on the authorization link.
4. Both PrintInc and StorageInc have implemented the OAuth protocol, so StorageInc asks Jane whether she wants to let PrintInc use her pictures. If she says yes, then StorageInc asks Jane to provide her username and password. Note, however, that her credentials are being used at StorageInc's site and PrintInc has no knowledge of her credentials.
5. Once Jane provides her credentials, StorageInc passes PrintInc an authorization token, which is stored as a part of Jane's account on PrintInc.
6. Now, we are back at PrintInc's web application, and Jane can print any of her pictures stored in StorageInc's web service.
7. Finally, every time Jane wants to print more pictures, all she needs to do is come back to PrintInc's website and download her pictures from StorageInc without providing the username and password again, as she has already authorized these two web services to exchange data on her behalf.

The preceding example clearly portrays the authorization flow in the OAuth 1.0 protocol. Before getting deeper into OAuth 1.0, here is a brief overview of the common terminologies and roles that we saw in this example:

- Client (consumer): This refers to an application (service) that tries to access a protected resource on behalf of the resource owner and with the resource owner's consent. A client can be a business service, mobile, web, or desktop application. In the previous example, PrintInc is the client application.
- Server (service provider): This refers to an HTTP server that understands the OAuth protocol. It accepts and responds to the requests authenticated with the OAuth protocol from various client applications (consumers). If you relate this with the previous example, StorageInc is the service provider.
- Protected resource: Protected resources are resources hosted on servers (the service providers) that are access-restricted. The server validates all incoming requests and grants access to the resource, as appropriate.
- Resource owner: This refers to an entity capable of granting access to a protected resource. Mostly, it refers to an end user who owns the protected resource. In the previous example, Jane is the resource owner.
- Consumer key and secret (client credentials): These two strings are used to identify and authenticate the client application (the consumer) making the request.
- Request token (temporary credentials): This is a temporary credential provided by the server when the resource owner authorizes the client application to use the resource. As the next step, the client will send this request token to the server to get authorized. On successful authorization, the server returns an access token. The access token is explained next.
- Access token (token credentials): The server returns an access token to the client when the client submits the temporary credentials obtained from the server during the resource grant approval by the user. The access token is a string that identifies a client that requests protected resources. Once the access token is obtained, the client passes it along with each resource request to the server. The server can then verify the identity of the client by checking this access token.

The following sequence diagram shows the interactions between the various parties involved in the OAuth 1.0 protocol. You can get more information about the OAuth 1.0 protocol here.

What is OAuth 2.0?

OAuth 2.0 is the latest release of the OAuth protocol, mainly focused on simplifying client-side development. Note that OAuth 2.0 is a completely new protocol, and this release is not backwards-compatible with OAuth 1.0. It offers specific authorization flows for web applications, desktop applications, mobile phones, and living room devices. The following are some of the major improvements in OAuth 2.0, as compared to the previous release:

- The complexity involved in signing each request: OAuth 1.0 mandates that the client must generate a signature on every API call to the server resource using the token secret. On the receiving end, the server must regenerate the same signature, and the client will be given access only if both the signatures match. OAuth 2.0 requires neither the client nor the server to generate any signature for securing the messages. Security is enforced via the use of TLS/SSL (HTTPS) for all communication.
- Addressing non-browser client applications: Many features of OAuth 1.0 are designed by considering the way a web client application interacts with the inbound and outbound messages. This has proven to be inefficient while using it with non-browser clients such as on-device mobile applications. OAuth 2.0 addresses this issue by accommodating more authorization flows suitable for specific client needs that do not use any web UI, such as on-device (native) mobile applications or API services. This makes the protocol very flexible.
- The separation of roles: OAuth 2.0 clearly defines the roles for all parties involved in the communication, such as the client, resource owner, resource server, and authorization server. The specification is clear on which parts of the protocol are expected to be implemented by the resource owner, authorization server, and resource server.
- The short-lived access token: Unlike in the previous version, the access token in OAuth 2.0 can contain an expiration time, which improves security and reduces the chances of illegal access.
- The refresh token: OAuth 2.0 offers a refresh token that can be used for getting a new access token on the expiry of the current one, without going through the entire authorization process again.

Before we get into the details of OAuth 2.0, let's take a quick look at how OAuth 2.0 defines roles for each party involved in the authorization process. Though you might have seen similar roles while discussing OAuth 1.0 in the last section, it does not clearly define which part of the protocol is expected to be implemented by each one:

- The resource owner: This refers to an entity capable of granting access to a protected resource. In a real-life scenario, this can be an end user who owns the resource.
- The resource server: This hosts the protected resources. The resource server validates and authorizes the incoming requests for the protected resource by contacting the authorization server.
- The client (consumer): This refers to an application that tries to access protected resources on behalf of the resource owner. It can be a business service, mobile, web, or desktop application.
- Authorization server: This, as the name suggests, is responsible for authorizing the client that needs access to a resource. After successful authentication, the access token is issued to the client by the authorization server. In a real-life scenario, the authorization server may be either the same as the resource server or a separate entity altogether. The OAuth 2.0 specification does not really enforce anything on this part.

It would be interesting to learn how these entities talk with each other to complete the authorization flow. The following is a quick summary of the authorization flow in a typical OAuth 2.0 implementation. Let's understand it in more detail:

1. The client application requests authorization to access the protected resources from the resource owner (user). The client can either directly make the authorization request to the resource owner or go via the authorization server by redirecting the resource owner to the authorization server endpoint.
2. The resource owner authenticates and authorizes the resource access request from the client application and returns the authorization grant to the client. The authorization grant type returned by the resource owner depends on the type of client application that tries to access the OAuth protected resource. Note that the OAuth 2.0 protocol defines four types of grants in order to authorize access to protected resources.
3. The client application requests an access token from the authorization server by passing the authorization grant along with other details for authentication, such as the client ID, client secret, and grant type.
4. On successful authentication, the authorization server issues an access token (and, optionally, a refresh token) to the client application.
5. The client application requests the protected resource (RESTful web API) from the resource server by presenting the access token for authentication.
6. On successful authentication of the client request, the resource server returns the requested resource.

The sequence of interaction that we just discussed is at a very high level. Depending upon the grant type used by the client, the details of the interaction may change. The following section will help you understand the basics of grant types.

Understanding grant types in OAuth 2.0

Grant types in the OAuth 2.0 protocol are, in essence, different ways to authorize access to protected resources using different security credentials (for each type). The OAuth 2.0 protocol defines four types of grants, as listed here; each can be used in different scenarios, as appropriate:

- Authorization code: This is obtained from the authentication server instead of directly requesting it from the resource owner. In this case, the client directs the resource owner to the authorization server, which returns the authorization code to the client. This is very similar to OAuth 1.0, except that the cryptographic signing of messages is not required in OAuth 2.0.
- Implicit: This grant is a simplified version of the authorization code grant type flow. In the implicit grant flow, the client is issued an access token directly as the result of the resource owner's authorization. This is less secure, as the client is not authenticated. It is commonly used for client-side devices, such as mobile, where the client credentials cannot be stored securely.
- Resource owner password credentials: The resource owner's credentials, such as username and password, are used by the client for directly obtaining the access token during the authorization flow. The access code is used thereafter for accessing resources. This grant type is only used with trusted client applications. It is suitable for legacy applications that use HTTP basic authentication to incrementally transition to OAuth 2.0.
- Client credentials: These are used directly for getting access tokens. This grant type is used when the client is also the resource owner. It is commonly used for embedded services and backend applications, where the client has an account (direct access rights).

Read Next:
Understanding OAuth Authentication methods - Tutorial
OAuth 2.0 – Gaining Consent - Tutorial
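To make the authorization code grant more concrete, here is a minimal, hedged sketch of the token request (step 3 of the flow above) using Java's built-in HTTP client. The authorization server URL, client ID, client secret, redirect URI, and authorization code are all hypothetical placeholders; a real authorization server defines its own token endpoint and may require additional parameters.

    import java.net.URI;
    import java.net.URLEncoder;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.charset.StandardCharsets;

    public class TokenExchange {
        public static void main(String[] args) throws Exception {
            // Hypothetical authorization server endpoint and client credentials.
            String tokenEndpoint = "https://auth.example.com/oauth/token";
            String redirectUri = URLEncoder.encode("https://client.example.com/callback",
                    StandardCharsets.UTF_8);

            // The authorization code is the grant obtained in step 2 of the flow.
            String body = "grant_type=authorization_code"
                    + "&code=SplxlOBeZQQYbYS6WxSbIA"
                    + "&redirect_uri=" + redirectUri
                    + "&client_id=my-client-id"
                    + "&client_secret=my-client-secret";

            HttpRequest request = HttpRequest.newBuilder(URI.create(tokenEndpoint))
                    .header("Content-Type", "application/x-www-form-urlencoded")
                    .POST(HttpRequest.BodyPublishers.ofString(body))
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());

            // On success, the server returns a JSON document containing access_token
            // (and, optionally, refresh_token and expires_in).
            System.out.println(response.statusCode());
            System.out.println(response.body());
        }
    }

The client then presents the returned access token on each resource request (steps 5 and 6), typically in an Authorization header.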


Design a RESTful web API with Java [Tutorial]

Pavan Ramchandani
12 Jun 2018
12 min read
In today's tutorial, you will learn to design REST services. We will break down the key design considerations you need to make when building RESTful web APIs. In particular, we will focus on the core elements of the REST architecture style:

- Resources and their identifiers
- Interaction semantics for RESTful APIs (HTTP methods)
- Representation of resources
- Hypermedia controls

This article is an excerpt from a book written by Balachandar Bogunuva Mohanram, titled RESTful Java Web Services, Second Edition. This book will help you build robust, scalable and secure RESTful web services, making use of the JAX-RS and Jersey framework extensions. Let's start by discussing the guidelines for identifying resources in a problem domain.

Richardson Maturity Model: Leonard Richardson has developed a model to help with assessing the compliance of a service to the REST architecture style. The model defines four levels of maturity, starting from level-0 to level-3 as the highest maturity level. The maturity levels are decided considering the aforementioned principle elements of the REST architecture.

Identifying resources in the problem domain

The basic steps that you need to take while building a RESTful web API for a specific problem domain are:

1. Identify all possible objects in the problem domain. This can be done by identifying all the key nouns in the problem domain. For example, if you are building an application to manage employees in a department, the obvious nouns are department and employee.
2. The next step is to identify the objects that can be manipulated using CRUD operations. These objects can be classified as resources. Note that you should be careful while choosing resources. Based on the usage pattern, you can classify resources as top-level and nested resources (which are the children of a top-level resource). Also, there is no need to expose all resources for use by the client; expose only those resources that are required for implementing the business use case.

Transforming operations to HTTP methods

Once you have identified all resources, as the next step, you may want to map the operations defined on the resources to the appropriate HTTP methods. The most commonly used HTTP methods (verbs) in RESTful web APIs are POST, GET, PUT, and DELETE. Note that there is no one-to-one mapping between the CRUD operations defined on the resources and the HTTP methods. Understanding the idempotent and safe operation concepts will help with using the correct HTTP method.

An operation is called idempotent if multiple identical requests produce the same result. Similarly, an idempotent RESTful web API will always produce the same result on the server irrespective of how many times the request is executed with the same parameters; however, the response may change between requests. An operation is called safe if it does not modify the state of the resources. Check out the following table:

Method   | Idempotent | Safe
GET      | YES        | YES
OPTIONS  | YES        | YES
HEAD     | YES        | YES
POST     | NO         | NO
PATCH    | NO         | NO
PUT      | YES        | NO
DELETE   | YES        | NO

Here are some tips for identifying the most appropriate HTTP method for the operations that you want to perform on the resources:

GET: You can use this method for reading a representation of a resource from the server. According to the HTTP specification, GET is a safe operation, which means that it is only intended for retrieving data, not for making any state changes. As this is an idempotent operation, multiple identical GET requests will behave in the same manner. A GET method can return the 200 OK HTTP response code on the successful retrieval of resources. If there is any error, it can return an appropriate status code such as 404 NOT FOUND or 400 BAD REQUEST.

DELETE: You can use this method for deleting resources. On successful deletion, DELETE can return the 200 OK status code. According to the HTTP specification, DELETE is an idempotent operation. Note that when you call DELETE on the same resource for the second time, the server may return the 404 NOT FOUND status code since it was already deleted, which is different from the response for the first request. The change in response for the second call is perfectly valid here. However, multiple DELETE calls on the same resource produce the same result (state) on the server.

PUT: According to the HTTP specification, this method is idempotent. When a client invokes the PUT method on a resource, the resource available at the given URL is completely replaced with the resource representation sent by the client. When a client uses the PUT request on a resource, it has to send all the available properties of the resource to the server, not just the partial data that was modified within the request. You can use PUT to create or update a resource if all attributes of the resource are available with the client. This makes sure that the server state does not change with multiple PUT requests. On the other hand, if you send partial resource content in a PUT request multiple times, there is a chance that some other clients might have updated some attributes that are not present in your request. In such cases, the server cannot guarantee that the state of the resource on the server will remain identical when the same request is repeated, which breaks the idempotency rule.

POST: This method is not idempotent. It enables you to create or update resources when you do not know all the available attributes of a resource. For example, consider a scenario where the identifier field for an entity resource is generated at the server when the entity is persisted in the data store. You can use the POST method for creating such resources, as the client does not have an identifier attribute while issuing the request. Here is a simplified example that illustrates this scenario. In this example, the employeeID attribute is generated on the server:

   POST hrapp/api/employees HTTP/1.1
   Host: packtpub.com

   {employee entity resource in JSON}

On the successful creation of a resource, it is recommended to return the status of 201 Created and the location of the newly created resource. This allows the client to access the newly created resource later (with server-generated attributes). The sample response for the preceding example will look as follows:

   201 Created
   Location: hrapp/api/employees/1001

Best practice: Use caching only for idempotent and safe HTTP methods, as others have an impact on the state of the resources.

Understanding the difference between PUT and POST

A common question that you will encounter while designing a RESTful web API is: when should you use the PUT and POST methods? Here's the simplified answer: You can use PUT for creating or updating a resource when the client has the full resource content available. In this case, all values are with the client and the server does not generate a value for any of the fields. You will use POST for creating or updating a resource if the client has only partial resource content available.
Note that you are losing the idempotency support with POST. An idempotent method means that you can call the same API multiple times without changing the state. This is not true for the POST method; each POST method call may result in a server state change. PUT is idempotent, and POST is not. If you have strong customer demands, you can support both methods and let the client choose the suitable one on the basis of the use case.

Naming RESTful web resources

Resources are a fundamental concept in RESTful web services. A resource represents an entity that is accessible via the URI that you provide. The URI, which refers to a resource (and is known as a RESTful web API), should have a logically meaningful name. Having meaningful names improves the intuitiveness of the APIs and, thereby, their usability. Some of the widely followed recommendations for naming resources are shown here:

It is recommended you use nouns to name both resources and the path segments that will appear in the resource URI. You should avoid using verbs for naming resources and resource path segments. Using nouns to name a resource improves the readability of the corresponding RESTful web API, particularly when you are planning to release the API over the internet for the general public.

You should always use plural nouns to refer to a collection of resources. Make sure that you are not mixing up singular and plural nouns while forming the REST URIs. For instance, to get all departments, the resource URI must look like /departments. If you want to read a specific department from the collection, the URI becomes /departments/{id}. Following the convention, the URI for reading the details of the HR department identified by id=10 should look like /departments/10. The following table illustrates how you can map the HTTP methods (verbs) to the operations defined for the departments' resources:

Resource        | GET                              | POST                    | PUT                        | DELETE
/departments    | Get all departments              | Create a new department | Bulk update on departments | Delete all departments
/departments/10 | Get the HR department with id=10 | Not allowed             | Update the HR department   | Delete the HR department

While naming resources, use specific names over generic names. For instance, to read all programmers' details of a software firm, it is preferable to have a resource URI of the form /programmers (which tells you about the type of resource), over the much more generic form /employees. This improves the intuitiveness of the APIs by clearly communicating the type of resources that it deals with.

Keep the resource names that appear in the URI in lowercase to improve the readability of the resulting resource URI. Resource names may include hyphens; avoid using underscores and other punctuation.

If the entity resource is represented in the JSON format, field names used in the resource must conform to the following guidelines:

Use meaningful names for the properties
Follow the camel case naming convention: the first letter of the name is in lowercase, for example, departmentName
The first character must be a letter, an underscore (_), or a dollar sign ($), and the subsequent characters can be letters, digits, underscores, and/or dollar signs
Avoid using the reserved JavaScript keywords

If a resource is related to another resource(s), use a subresource to refer to the child resource. You can use the path parameter in the URI to connect a subresource to its base resource. For instance, the resource URI path to get all employees belonging to the HR department (with id=10) will look like /departments/10/employees.
To get the details of the employee with id=200 in the HR department, you can use the following URI: /departments/10/employees/200. The resource path URI may contain plural nouns representing a collection of resources, followed by a singular resource identifier to return a specific resource item from the collection. This pattern can repeat in the URI, allowing you to drill down into a collection for reading a specific item. For instance, the following URI represents an employee resource identified by id=200 within the HR department: /departments/hr/employees/200. Although the HTTP protocol does not place any limit on the length of the resource URI, it is recommended not to exceed 2,000 characters because of the restriction set by many popular browsers.

Best practice: Avoid using actions or verbs in the URI as it refers to a resource.

Using HATEOAS in response representation

Hypermedia as the Engine of Application State (HATEOAS) refers to the use of hypermedia links in the resource representations. This architectural style lets the clients dynamically navigate to the desired resource by traversing the hypermedia links present in the response body. There is no universally accepted single format for representing links between two resources in JSON.

Hypertext Application Language

The Hypertext Application Language (HAL) is a promising proposal that sets the conventions for expressing hypermedia controls (such as links) with JSON or XML. Currently, this proposal is in the draft stage. It mainly describes two concepts for linking resources:

Embedded resources: This concept provides a way to embed another resource within the current one. In the JSON format, you will use the _embedded attribute to indicate the embedded resource.
Links: This concept provides links to associated resources. In the JSON format, you will use the _links attribute to link resources.

Here is the link to this proposal: http://tools.ietf.org/html/draft-kelly-json-hal-06. It defines the following properties for each resource link:

href: This property indicates the URI to the target resource representation
templated: This property is set to true if the URI value for href contains a path variable inside it (a URI template)
title: This property is used for labeling the URI and for documentation purposes
hreflang: This property specifies the language for the target resource
name: This property is used for uniquely identifying a link

The following example demonstrates how you can use the HAL format for describing the department resource containing hyperlinks to the associated employee resources. This example uses JSON HAL for representing resources, which is served using the application/hal+json media type:

GET /departments/10 HTTP/1.1
Host: packtpub.com
Accept: application/hal+json

HTTP/1.1 200 OK
Content-Type: application/hal+json
{
  "_links": {
    "self": { "href": "/departments/10" },
    "employees": { "href": "/departments/10/employees" },
    "employee": { "href": "/employees/{id}", "templated": true }
  },
  "_embedded": {
    "manager": {
      "_links": { "self": { "href": "/employees/1700" } },
      "firstName": "Chinmay",
      "lastName": "Jobinesh",
      "employeeId": "1700"
    }
  },
  "departmentId": 10,
  "departmentName": "Administration"
}

To summarize, we discussed the details of designing RESTful web APIs, including identifying the resources, using HTTP methods, and naming the web resources. Additionally, we were introduced to the Hypertext Application Language.

article-image-working-with-the-vue-router-plugin-for-spas
Pravin Dhandre
07 Jun 2018
5 min read
Save for later

Working with the Vue-router plugin for SPAs

Pravin Dhandre
07 Jun 2018
5 min read
Single-Page Applications (SPAs) are web applications that load a single HTML page and update that page dynamically based on the user's interaction with the app. These SPAs use AJAX and HTML5 to create fluid and responsive web applications without requiring constant page reloads. In this tutorial, we will show a step-by-step approach to installing the extremely powerful Vue-router plugin to build Single-Page Applications. This article is an excerpt from a book written by Mike Street, titled Vue.js 2.x by Example.

Similar to how you add Vue and Vuex to your applications, you can either directly include the library from unpkg, or head to the following URL and download a local copy for yourself: https://unpkg.com/Vue-router. Add the JavaScript to a new HTML document, along with Vue, and your application's JavaScript. Create an application container element, your view, as well. In the following example, I have saved the Vue-router JavaScript file as router.js.

Initialize VueRouter in your JavaScript application

We are now ready to add VueRouter and utilize its power. Before we do that, however, we need to create some very simple components which we can load and display based on the URL. As we are going to be loading the components with the router, we don't need to register them with Vue.component, but instead create JavaScript objects with the same properties as we would for a Vue component. For this first exercise, we are going to create two pages: Home and About. Found on most websites, these should help give you context as to what is loading where and when. Create two templates in your HTML page for us to use.

Don't forget to encapsulate all your content in one "root" element (represented here by the wrapping div tags). You also need to ensure you declare the templates before your application JavaScript is loaded. We've created a Home page template, with the id of homepage, and an About page, containing some placeholder text from lorem ipsum, with the id of about. Create two components in your JavaScript which reference these two templates.

The next step is to give the router a placeholder to render the components in the view. This is done by using a custom <router-view> HTML element. Using this element gives you control over where your content will render. It allows us to have a header and footer right in the app view, without needing to deal with messy templates or include the components themselves. Add a header, main, and footer element to your app. Give yourself a logo in the header and credits in the footer; in the main HTML element, place the router-view placeholder.

Everything in the app view is optional, except the router-view, but it gives you an idea of how the router HTML element can be implemented into a site structure. The next stage is to initialize the Vue-router and instruct Vue to use it. Create a new instance of VueRouter and add it to the Vue instance, similar to how we added Vuex in the previous section.

We now need to tell the router about our routes (or paths), and what component it should load when it encounters each one. Create an object inside the Vue-router instance with a key of routes and an array as the value. This array needs to include an object for each route. Each route object contains a path and component key. The path is a string of the URL that you want to load the component on. Vue-router serves up components on a first-come, first-served basis.
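Since the original code listings are not reproduced in this excerpt, here is a minimal sketch of the setup described above; the element IDs, template contents, and file layout are assumptions rather than the book's exact code:

  <div id="app">
    <header><h1>Logo</h1></header>
    <main>
      <router-view></router-view>
    </main>
    <footer><small>Credits</small></footer>
  </div>

  <script type="text/x-template" id="homepage">
    <div><h1>Home</h1></div>
  </script>

  <script type="text/x-template" id="about">
    <div><h1>About</h1><p>Lorem ipsum dolor sit amet.</p></div>
  </script>

  // app.js
  const Home  = { template: '#homepage' };
  const About = { template: '#about' };

  const router = new VueRouter({
    routes: [
      { path: '/', component: Home },
      { path: '/about', component: About }
    ]
    // mode: 'history' // optional: use the history API for prettier URLs
  });

  new Vue({
    el: '#app',
    router
  });

With this in place, the router renders the Home component at the root path and the About component at /about, which is the behavior described next.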
For example, if there are several routes with the same path, the first one encountered is used. Ensure each route has the beginning slash—this tells the router it is a root page and not a sub-page. Press save and view your app in the browser. You should be presented with the content of the Home template component. If you observe the URL, you will notice that on page load a hash and forward slash (#/) are appended to the path. This is the router creating a method for browsing the components and utilizing the address bar. If you change this to the path of your second route, #/about, you will see the contents of the About component. Vue-router is also able to use the JavaScript history API to create prettier URLs. For example, yourdomain.com/index.html#about would become yourdomain.com/about. This is activated by adding mode: 'history' to your VueRouter instance: With this, you should now be familiar with Vue-router and how to initialize it for creating new routes and paths for different web pages on your website. Do check out the book Vue.js 2.x by Example to start developing building blocks for a successful e-commerce website. What is React.js and how does it work? Why has Vue.js become so popular? Building a real-time dashboard with Meteor and Vue.js

article-image-how-to-navigate-files-in-a-vue-app-using-the-dropbox-api
Pravin Dhandre
05 Jun 2018
13 min read
Save for later

How to navigate files in a Vue app using the Dropbox API

Pravin Dhandre
05 Jun 2018
13 min read
The Dropbox API is a set of HTTP endpoints that helps apps integrate with Dropbox easily. It allows developers to work with files present in Dropbox and provides access to advanced functionality like full-text search, thumbnails, and sharing. In this article, we will discuss the Dropbox API functionalities listed below to navigate and query your files and folders:

Loading and querying the Dropbox API
Listing the directories and files from your Dropbox account
Adding a loading state to your app
Using Vue animations

To get started, you will need a Dropbox account; then follow the steps detailed in this article. If you don't have one, sign up and add a few dummy files and folders. The content of your Dropbox doesn't matter, but having folders to navigate through will help with understanding the code. This article is an excerpt from a book written by Mike Street titled Vue.js 2.x by Example. This book puts Vue.js into a real-world context, guiding you through example projects to help you build Vue.js applications from scratch.

Getting started: loading the libraries

Create a new HTML page for your app to run in. Create the HTML structure required for a web page and include your app view wrapper. It's called #app here, but call it whatever you want - just remember to update the JavaScript. As our app code is going to get quite chunky, make a separate JavaScript file and include it at the bottom of the document. You will also need to include Vue and the Dropbox API SDK. As before, you can either reference the remote files or download a local copy of the library files. Download a local copy for both speed and compatibility reasons. Include your three JavaScript files at the bottom of your HTML file. Create your app.js and initialize a new Vue instance, using the el tag to mount the instance onto the ID in your view.

Creating a Dropbox app and initializing the SDK

Before we interact with the Vue instance, we need to connect to the Dropbox API through the SDK. This is done via an API key generated by Dropbox to keep track of what is connecting to your account; to obtain one, Dropbox requires you to make a custom Dropbox app. Head to the Dropbox developers area and select Create your app. Choose Dropbox API and select either a restricted folder or full access. This depends on your needs, but for testing, choose Full Dropbox. Give your app a name and click the button Create app.

Generate an access token for your app. To do so, when viewing the app details page, click the Generate button under the Generated access token. This will give you a long string of numbers and letters - copy and paste that into your editor and store it as a variable at the top of your JavaScript. In this article, the API key will be referred to as XXXX.

Now that we have our API key, we can access the files and folders from our Dropbox. Initialize the API and pass your accessToken variable to the accessToken property of the Dropbox API. We now have access to Dropbox via the dbx variable. We can verify our connection to Dropbox is working by connecting and outputting the contents of the root path. This code uses JavaScript promises, which are a way of adding actions to code without requiring callback functions. Take note of the first line, particularly the path variable. This lets us pass in a folder path to list the files and folders within that directory.
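As a rough sketch of the initialization just described (the access token is a placeholder, and depending on your version of the Dropbox JavaScript SDK the folder listing may be nested under response.result.entries instead of response.entries):

  const accessToken = 'XXXX'; // the generated access token from your Dropbox app

  const dbx = new Dropbox({
    accessToken: accessToken
  });

  // list the contents of the root path of your Dropbox
  dbx.filesListFolder({ path: '' })
    .then(response => {
      console.log(response.entries); // one object per file or folder
    })
    .catch(error => {
      console.error(error);
    });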
For example, if you had a folder called images in your Dropbox, you could change the parameter value to /images and the file list returned would be the files and folders within that directory. Open your JavaScript console and check the output; you should get an array containing several objects - one for each file or folder in the root of your Dropbox. Displaying your data and using Vue to get it Now that we can retrieve our data using the Dropbox API, it's time to retrieve it within our Vue instance and display in our view. This app is going to be entirely built using components so we can take advantage of the compartmentalized data and methods. It will also mean the code is modular and shareable, should you want to integrate into other apps. We are also going to take advantage of the native Vue created() function - we'll cover it when it gets triggered in a bit. Create the component First off, create your custom HTML element, <dropbox-viewer>, in your View. Create a <script> template block at the bottom of the page for our HTML layout: Initialize your component in your app.js file, pointing it to the template ID: Viewing the app in the browser should show the heading from the template. The next step is to integrate the Dropbox API into the component. Retrieve the Dropbox data Create a new method called dropbox. In there, move the code that calls the Dropbox class and returns the instance. This will now give us access to the Dropbox API through the component by calling this.dropbox(): We are also going to integrate our API key into the component. Create a data function that returns an object containing your access token. Update the Dropbox method to use the local version of the key: We now need to add the ability for the component to get the directory list. For this, we are going to create another method that takes a single parameter—the path. This will give us the ability later to request the structure of a different path or folder if required. Use the code provided earlier - changing the dbx variable to this.dropbox(): Update the Dropbox filesListFolder function to accept the path parameter passed in, rather than a fixed value. Running this app in the browser will show the Dropbox heading, but won't retrieve any folders because the methods have not been called yet. The Vue life cycle hooks This is where the created() function comes in. The created() function gets called once the Vue instance has initialized the data and methods, but has yet to mount the instance on the HTML component. There are several other functions available at various points in the life cycle; more about these can be read at Alligator.io. The life cycle is as follows: Using the created() function gives us access to the methods and data while being able to start our retrieval process as Vue is mounting the app. The time between these various stages is split-second, but every moment counts when it comes to performance and creating a quick app. There is no point waiting for the app to be fully mounted before processing data if we can start the task early. Create the created() function on your component and call the getFolderStructure method, passing in an empty string for the path to get the root of your Dropbox: Running the app now in your browser will output the folder list to your console, which should give the same result as before. We now need to display our list of files in the view. To do this, we are going to create an empty array in our component and populate it with the result of our Dropbox query. 
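Pulling the pieces described so far together, a condensed sketch of the component might look like the following; the template ID and exact structure are assumptions, not the book's listing:

  Vue.component('dropbox-viewer', {
    template: '#dropbox-viewer-template', // assumed template ID
    data() {
      return {
        accessToken: 'XXXX', // your API key, kept on the component
        structure: []        // the empty array we will populate with the query result
      };
    },
    methods: {
      dropbox() {
        // return a Dropbox API instance using the local access token
        return new Dropbox({ accessToken: this.accessToken });
      },
      getFolderStructure(path) {
        this.dropbox().filesListFolder({ path: path })
          .then(response => {
            console.log(response.entries);
            this.structure = response.entries;
          })
          .catch(error => {
            console.error(error);
          });
      }
    },
    created() {
      // start retrieving the root folder as soon as the instance is created
      this.getFolderStructure('');
    }
  });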
This has the advantage of giving Vue a variable to loop through in the view, even before it has any content.

Displaying the Dropbox data

Create a new property in your data object titled structure, and assign this to an empty array. In the response function of the folder retrieval, assign response.entries to this.structure. Leave console.log in place, as we will need to inspect the entries to work out what to output in our template. We can now update our view to display the folders and files from your Dropbox. As the structure array is available in our view, create a <ul> with a repeatable <li> looping through the structure. As we are now adding a second element, and Vue requires templates to have a single containing element, wrap your heading and list in a <div>.

Viewing the app in the browser will show a number of empty bullet points when the array appears in the JavaScript console. To work out what fields and properties you can display, expand the array in the JavaScript console and then expand each object further. You should notice that each object has a collection of similar properties and a few that vary between folders and files. The first property, .tag, helps us identify whether the item is a file or a folder. Both types then have the following properties in common:

id: A unique identifier to Dropbox
name: The name of the file or folder, irrespective of where the item is
path_display: The full path of the item with the case matching that of the files and folders
path_lower: Same as path_display but all lowercase

Items with a .tag of file also contain several more fields for us to display:

client_modified: This is the date when the file was added to Dropbox.
content_hash: A hash of the file, used for identifying whether it is different from a local or remote copy. More can be read about this on the Dropbox website.
rev: A unique identifier of the version of the file.
server_modified: The last time the file was modified on Dropbox.
size: The size of the file in bytes.

To begin with, we are going to display the name of the item and the size, if present. Update the list item to show these properties.

More file meta information

To make our file and folder view a bit more useful, we can add more rich content and metadata to files such as images. These details are available by enabling the include_media_info option in the Dropbox API. Head back to your getFolderStructure method and add the parameter after path, with some new lines added for readability. Inspecting the results from this new API call will reveal the media_info key for videos and images. Expanding this will reveal several more pieces of information about the file, for example, dimensions. If you want to add these, you will need to check that the media_info object exists before displaying the information.

Try updating the path when retrieving the data from Dropbox. For example, if you have a folder called images, change the this.getFolderStructure parameter to /images. If you're not sure what the path is, analyze the data in the JavaScript console and copy the value of the path_lower attribute of a folder.

Formatting the file sizes

With the file size being output in plain bytes, it can be quite hard for a user to decipher. To combat this, we can add a formatting method to output a file size which is more user-friendly, for example, displaying 1KB instead of 1024.
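Ahead of the step-by-step walk-through that follows, here is a hedged sketch of the finished size formatter; the unit labels and rounding behavior are assumptions:

  // on the component's data object
  byteSizes: ['Bytes', 'KB', 'MB', 'GB', 'TB'],

  // in the component's methods
  bytesToSize(bytes) {
    if (bytes === 0) {
      return '0 Bytes';
    }
    // work out which unit the value falls into, then divide and label it
    const i = Math.floor(Math.log(bytes) / Math.log(1024));
    return Math.round(bytes / Math.pow(1024, i)) + ' ' + this.byteSizes[i];
  }

In the view, the formatted value can then be shown with something like {{ bytesToSize(entry.size) }}.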
First, create a new key on the data object that contains an array of units called byteSizes: This is what will get appended to the figure, so feel free to make these properties either lowercase or full words, for example, megabyte. Next, add a new method, bytesToSize, to your component. This will take one parameter of bytes and output a formatted string with the unit at the end: We can now utilize this method in our view: Add loading screen The last step of this tutorial is to make a loading screen for our app. This will tell the user the app is loading, should the Dropbox API be running slowly (or you have a lot of data to show!). The theory behind this loading screen is fairly basic. We will set a loading variable to true by default that then gets set to false once the data has loaded. Based on the result of this variable, we will utilize view attributes to show, and then hide, an element with the loading text or animation in and also reveal the loaded data list. Create a new key in the data object titled isLoading. Set this variable to true by default: Within the getFolderStructure method on your component, set the isLoading variable to false. This should happen within the promise after you have set the structure: We can now utilize this variable in our view to show and hide a loading container. Create a new <div> before the unordered list containing some loading text. Feel free to add a CSS animation or an animated gif—anything to let the user know the app is retrieving data: We now need to only show the loading div if the app is loading and the list once the data has loaded. As this is just one change to the DOM, we can use the v-if directive. To give you the freedom of rearranging the HTML, add the attribute to both instead of using v-else. To show or hide, we just need to check the status of the isLoading variable. We can prepend an exclamation mark to the list to only show if the app is not loading: Our app should now show the loading container once mounted, and then it should show the list once the app data has been gathered. To recap, our complete component code now looks like this: Animating between states As a nice enhancement for the user, we can add some transitions between components and states. Helpfully, Vue includes some built-in transition effects. Working with CSS, these transitions allow you to add fades, swipes, and other CSS animations easily when DOM elements are being inserted. More information about transitions can be found in the Vue documentation. The first step is to add the Vue custom HTML <transition> element. Wrap both your loading and list with separate transition elements and give it an attribute of name and a value of fade: Now add the following CSS to either the head of your document or a separate style sheet if you already have one: With the transition element, Vue adds and removes various CSS classes based on the state and time of the transition. All of these begin with the name passed in via the attribute and are appended with the current stage of transition: Try the app in your browser, you should notice the loading container fading out and the file list fading in. Although in this basic example, the list jumps up once the fading has completed, it's an example to help you understand using transitions in Vue. We learned to make Dropbox viewer to list out files and folders from the Dropbox account and also used Vue animations for navigation. Do check out the book Vue.js 2.x by Example to start integrating remote data into your web applications swiftly. 
article-image-testing-single-page-apps-spas-vue-js-dev-tools
Pravin Dhandre
25 May 2018
8 min read
Save for later

Testing Single Page Applications (SPAs) using Vue.js developer tools

Pravin Dhandre
25 May 2018
8 min read
Testing, especially for big applications, is paramount – especially when deploying your application to a development environment. Whether you choose unit testing or browser automation, there are a host of articles and books available on the subject. In this tutorial, we have covered the usage of Vue developer tools to test Single Page Applications. We will also touch upon other alternative tools like Nightwatch.js, Selenium, and TestCafe for testing. This article is an excerpt from a book written by Mike Street, titled Vue.js 2.x by Example.  Using the Vue.js developer tools The Vue developer tools are available for Chrome and Firefox and can be downloaded from GitHub. Once installed, they become an extension of the browser developer tools. For example, in Chrome, they appear after the Audits tab. The Vue developer tools will only work when you are using Vue in development mode. By default, the un-minified version of Vue has the development mode enabled. However, if you are using the production version of the code, the development tools can be enabled by setting the devtools variable to true in your code: Vue.config.devtools = true We've been using the development version of Vue, so the dev tools should work with all three of the SPAs we have developed. Open the Dropbox example and open the Vue developer tools. Inspecting Vue components data and computed values The Vue developer tools give a great overview of the components in use on the page. You can also drill down into the components and preview the data in use on that particular instance. This is perfect for inspecting the properties of each component on the page at any given time. For example, if we inspect the Dropbox app and navigate to the Components tab, we can see the <Root> Vue instance and we can see the <DropboxViewer> component. Clicking this will reveal all of the data properties of the component – along with any computed properties. This lets us validate whether the structure is constructed correctly, along with the computed path property: Drilling down into each component, we can access individual data objects and computed properties. Using the Vue developer tools for inspecting your application is a much more efficient way of validating data while creating your app, as it saves having to place several console.log() statements. Viewing Vuex mutations and time-travel Navigating to the next tab, Vuex, allows us to watch store mutations taking place in real time. Every time a mutation is fired, a new line is created in the left-hand panel. This element allows us to view what data is being sent, and what the Vuex store looked like before and after the data had been committed. It also gives you several options to revert, commit, and time-travel to any point. Loading the Dropbox app, several structure mutations immediately populate within the left-hand panel, listing the mutation name and the time they occurred. This is the code pre-caching the folders in action. Clicking on each one will reveal the Vuex store state – along with a mutation containing the payload sent. The state display is after the payload has been sent and the mutation committed. To preview what the state looked like before that mutation, select the preceding option: On each entry, next to the mutation name, you will notice three symbols that allow you to carry out several actions and directly mutate the store in your browser: Commit this mutation: This allows you to commit all the data up to that point. 
This will remove all of the mutations from the dev tools and update the Base State to this point. This is handy if there are several mutations occurring that you wish to keep track of.

Revert this mutation: This will undo the mutation and all mutations after this point. This allows you to carry out the same actions again and again without pressing refresh or losing your current place. For example, when adding a product to the basket in our shop app, a mutation occurs. Using this would allow you to remove the product from the basket and undo any following mutations without navigating away from the product page.

Time-travel to this state: This allows you to preview the app and state at that particular mutation, without reverting any mutations that occur after the selected point.

The mutations tab also allows you to commit or revert all mutations at the top of the left-hand panel. Within the right-hand panel, you can also import and export a JSON encoded version of the store's state. This is particularly handy when you want to re-test several circumstances and instances without having to reproduce several steps.

Previewing event data

The Events tab of the Vue developer tools works in a similar way to the Vuex tab, allowing you to inspect any events emitted throughout your app. Changing the filters in this app emits an event each time the filter type is updated, along with the filter query. The left-hand panel again lists the name of the event and the time it occurred. The right panel contains information about the event, including its component origin and payload. This data allows you to ensure the event data is as you expected it to be and, if not, helps you locate where the event is being triggered.

The Vue dev tools are invaluable, especially as your JavaScript application gets bigger and more complex. Open the shop SPA we developed and inspect the various components and Vuex data to get an idea of how this tool can help you create applications that only commit the mutations they need to and emit the events they have to.

Testing your Single Page Application

The majority of Vue testing suites revolve around having command-line knowledge and creating a Vue application using the CLI (command-line interface). Along with creating applications in frontend-compatible JavaScript, Vue also has a CLI that allows you to create applications using component-based files. These are files with a .vue extension and contain the template HTML along with the JavaScript required for the component. They also allow you to create scoped CSS - styles that only apply to that component. If you chose to create your app using the CLI, all of the theory and a lot of the practical knowledge you have learned in this book can easily be ported across.

Command-line unit testing

Along with component files, the Vue CLI allows you to integrate more easily with command-line unit testing tools, such as Jest, Mocha, Chai, and TestCafe (https://testcafe.devexpress.com/). For example, TestCafe allows you to specify several different tests, from checking whether content exists to clicking buttons to test functionality. An example of a TestCafe test checking whether the filtering component in our first app contains the word Filter would be:

  test('The filtering contains the word "Filter"', async testController => {
    const filterSelector = await Selector('body > #app > form > label:nth-child(1)');
    await testController.expect(filterSelector.innerText).eql('Filter');
  });

This test would then equate to true or false.
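TestCafe can also drive interactions with the page, not just assert on its content. Here is a hedged sketch of a click-based test; the page URL, selectors, and expected behavior are hypothetical and would need to match your own app:

  import { Selector } from 'testcafe';

  fixture('Shop filtering').page('http://localhost:8080');

  test('Choosing a filter still shows at least one product', async testController => {
    await testController
      .click(Selector('input[type="radio"]').nth(0)) // pick the first filter option
      .expect(Selector('ul li').count).gt(0);        // the product list is not empty
  });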
Unit tests are generally written in conjunction with components themselves, allowing components to be reused and tested in isolation. This allows you to check that external factors have no bearing on the output of your tests. Most command-line JavaScript testing libraries will integrate with Vue.js; there is a great list available in the awesome Vue GitHub repository (https://github.com/vuejs/awesome-vue#test). Browser automation The alternative to using command-line unit testing is to automate your browser with a testing suite. This kind of testing is still triggered via the command line, but rather than integrating directly with your Vue application, it opens the page in the browser and interacts with it like a user would. A popular tool for doing this is Nightwatch.js (http://nightwatchjs.org/). You may use this suite for opening your shop and interacting with the filtering component or product list ordering and comparing the result. The tests are written in very colloquial English and are not restricted to being on the same domain name or file network as the site to be tested. The library is also language agnostic – working for any website regardless of what it is built with. The example Nightwatch.js gives on their website is for opening Google and ensuring the first result of a Google search for rembrandt van rijn is the Wikipedia entry: module.exports = { 'Demo test Google' : function (client) { client .url('http://www.google.com') .waitForElementVisible('body', 1000) .assert.title('Google') .assert.visible('input[type=text]') .setValue('input[type=text]', 'rembrandt van rijn') .waitForElementVisible('button[name=btnG]', 1000) .click('button[name=btnG]') .pause(1000) .assert.containsText('ol#rso li:first-child', 'Rembrandt - Wikipedia') .end(); } }; An alternative to Nightwatch is Selenium (http://www.seleniumhq.org/). Selenium has the advantage of having a Firefox extension that allows you to visually create tests and commands. We covered usage of Vue.js dev tools and learned to build automated tests for your web applications. If you found this tutorial useful, do check out the book Vue.js 2.x by Example and get complete knowledge resource on the process of building single-page applications with Vue.js. Building your first Vue.js 2 Web application 5 web development tools will matter in 2018

article-image-functional-programs-with-f
Kunal Chaudhari
08 May 2018
23 min read
Save for later

Building functional programs with F#

Kunal Chaudhari
08 May 2018
23 min read
Functional programming treats programs as mathematical expressions and evaluates expressions. It focuses on functions and constants, which don't change, unlike variables and states. Functional programming solves complex problems with simple code; it is a very efficient programming technique for writing bug-free applications; for example, the null exception can be avoided using this technique. In today's tutorial, we will learn how to build functional programs with F# that leverage .NET Core. Here are some rules to understand functional programming better:

In functional programming, a function's output never gets affected by outside code changes and the function always gives the same result for the same parameters. This gives us confidence in the function's behavior, that it will give the expected result in all scenarios, and this is helpful for multithreaded or parallel programming.

In functional programming, variables are immutable, which means we cannot modify a variable once it is initialized, so it is easy to determine the value of a variable at any given point at program runtime.

Functional programming works on referential transparency, which means it doesn't use assignment statements in a function. For example, a function that assigns a new value to a variable, such as the one shown here:

  Public int sum(x)
  {
    x = x + 20 ;
    return x;
  }

is changing the value of x, but if we write it as shown here:

  Public int sum(x)
  {
    return x + 20 ;
  }

it is not changing the variable value and the function returns the same result.

Functional programming uses recursion for looping. A recursive function calls itself and runs until the condition is satisfied.

Functional programming features

Let's discuss some functional programming features:

Higher-order functions
Purity
Recursion
Currying
Closure
Function composition

Higher-order functions (HOF)

A function can take another function as an input argument, and it can return a function. This originated from calculus and is widely used in functional programming. The order of a function can be determined by its domain and range: an order-0 function has no function data, an order-1 function has a domain and range of order 0, and if the order is higher than 1, it is called a higher-order function. For example, the ComplexCalc function takes another function as input and applies it to the value 2:

  open System
  let sum x = x + x
  let divide x = x / x
  let ComplexCalc func = func 2
  printfn "%d" (ComplexCalc sum)    // 4
  printfn "%d" (ComplexCalc divide) // 1

In the previous example, we created two functions, sum and divide. We pass these two functions as parameters to the ComplexCalc function, and it returns a value of 4 and 1, respectively.

Purity

In functional programming, a function is referred to as a pure function if all its input arguments are known and all its output results are also well known and declared; or we can say the input and output have no side-effects. Now, you must be curious to know what the side-effect could be, so let's discuss it. Let's look at the following example:

  Public int sum(int x)
  {
    return x+x;
  }

In the previous example, the function sum takes an integer input and returns an integer value and a predictable result. This kind of function is referred to as a pure function.
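The same idea expressed in F# (a minimal sketch, not taken from the book):

  // a pure function: the result depends only on the input and nothing outside is modified
  let addTwenty x = x + 20

  printfn "%d" (addTwenty 5) // always 25 for the same input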
Let's investigate the following example: Public void verifyData() { Employee emp = OrgQueue.getEmp(); If(emp != null) { ProcessForm(emp); } } In the preceding example, the verifyData() function does not take any input parameter and does not return anything, but this function is internally calling the getEmp() function so verifyData() depends on the getEmp() function. If the output of getEmp() is not null, it calls another function, called ProcessForm() and we pass the getEmp() function output as input for ProcessForm(emp). In this example, both the functions, getEmp() and ProcessForm(), are unknown at the verifyData() function level call, also emp is a hidden value. This kind of program, which has hidden input and output, is treated as a side-effect of the program. We cannot understand what it does in such functions. This is different from encapsulation; encapsulation hides the complexity but in such function, the functionality is not clear and input and output are unreliable. These kinds of function are referred to as impure functions. Let's look at the main concepts of pure functions: Immutable data: Functional programming works on immutable data, it removes the side-effect of variable state change and gives a guarantee of an expected result. Referential transparency: Large modules can be replaced by small code blocks and reuse any existing modules. For example, if a = b*c and d = b*c*e then the value of d can be written as d = a*e. Lazy evaluation: Referential transparency and immutable data give us the flexibility to calculate the function at any given point of time and we will get the same result because a variable will not change its state at any time. Recursion In functional programming, looping is performed by recursive functions. In F#, to make a function recursive, we need to use the rec keyword. By default, functions are not recursive in F#, we have to rectify this explicitly using the rec keyword. Let's take an example: let rec summation x = if x = 0 then 0 else x + summation(x-1) printfn "The summation of first 10 integers is- %A" (summation 10) In this code, we used the keyword rec for the recursion function and if the value passed is 0, the sum would be 0; otherwise it will add x + summation(x-1), like 1+0 then 2+1 and so on. We should take care with recursion because it can consume memory heavily. Currying This converts a function with multiple input parameter to a function which takes one parameter at a time, or we can say it breaks the function into multiple functions, each taking one parameter at a time. Here is an example: int sum = (a,b) => a+b int sumcurry = (a) =>(b) => a+b sumcurry(5)(6) // 11 int sum8 = sumcurry(8) // b=> 8+b sum8(5) // 13 Closure Closure is a feature which allows us to access a variable which is not within the scope of the current module. It is a way of implementing lexically scoped named binding, for example: int add = x=> y=> x+y int addTen = add(10) addTen(5) // this will return 15 In this example, the add() function is internally called by the addTen() function. In an ideal world, the variables x and y should not be accessible when the add() function finishes its execution, but when we are calling the function addTen(), it returns 15. So, the state of the function add() is saved even though code execution is finished, otherwise, there is no way of knowing the add(10) value, where x = 10. We are able to find the value of x because of lexical scoping and this is called closure. 
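In F#, the currying and closure examples above (written in C#-like pseudocode) can be sketched as follows; the function names are illustrative:

  // functions taking multiple arguments are curried by default in F#
  let sum a b = a + b

  // partial application: sumCurry8 closes over the value 8
  let sumCurry8 = sum 8

  printfn "%d" (sum 5 6)     // 11
  printfn "%d" (sumCurry8 5) // 13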
Function composition As we discussed earlier in HOF, function composition means getting two functions together to create a third new function where the output of a function is the input of another function. There are n number of functional programming features. Functional programming is a technique to solve problems and write code in an efficient way. It is not language-specific, but many languages support functional programming. We can also use non-functional languages (such as C#) to write programs in a functional way. F# is a Microsoft programming language for concise and declarative syntax. Getting started with F# In this section, we will discuss F# in more detail. Classes Classes are types of object which can contain functions, properties, and events. An F# class must have a parameter and a function attached to a member. Both properties and functions can use the member keyword. The following is the class definition syntax: type [access-modifier] type-name [type-params] [access-modifier] (parameter-list) [ as identifier ] = [ class ] [ inherit base-type-name(base-constructor-args) ] [ let-bindings ] [ do-bindings ] member-list [ end ] // Mutually recursive class definitions: type [access-modifier] type-name1 ... and [access-modifier] type-name2 ... Let’s discuss the preceding syntax for class declaration: type: In the F# language, class definition starts with a type keyword. access-modifier: The F# language supports three access modifiers—public, private, and internal. By default, it considers the public modifier if no other access modifier is provided. The Protected keyword is not used in the F# language, and the reason is that it will become object-oriented rather than functional programming. For example, F# usually calls a member using a lambda expression and if we make a member type protected and call an object of a different instance, it will not work. type-name: It is any of the previously mentioned valid identifiers; the default access modifier is public. type-params: It defines optional generic type parameters. parameter-list: It defines constructor parameters; the default access modifier for the primary constructor is public. identifier: It is used with the optional as keyword, the as keyword gives a name to an instance variable which can be used in the type definition to refer to the instance of the type. Inherit: This keyword allows us to specify the base class for a class. let-bindings: This is used to declare fields or function values in the context of a class. do-bindings: This is useful for the execution of code to create an object member-list: The member-list comprises extra constructors, instance and static method declarations, abstract bindings, interface declarations, and event and property declarations. Here is an example of a class: type StudentName(firstName,lastName) = member this.FirstName = firstName member this.LastName = lastName In the previous example, we have not defined the parameter type. By default, the program considers it as a string value but we can explicitly define a data type, as follows: type StudentName(firstName:string,lastName:string) = member this.FirstName = firstName member this.LastName = lastName Constructor of a class In F#, the constructor works in a different way to any other .NET language. The constructor creates an instance of a class. A parameter list defines the arguments of the primary constructor and class. The constructor contains let and do bindings, which we will discuss next. 
We can add multiple constructors, apart from the primary constructor, using the new keyword and it must invoke the primary constructor, which is defined with the class declaration. The syntax of defining a new constructor is as shown: new (argument-list) = constructor-body Here is an example to explain the concept. In the following code, the StudentDetail class has two constructors: a primary constructor that takes two arguments and another constructor that takes no arguments: type StudentDetail(x: int, y: int) = do printfn "%d %d" x y new() = StudentDetail(0, 0) A let and do binding A let and do binding creates the primary constructor of a class and runs when an instance of a class is created. A function is compiled into a member if it has a let binding. If the let binding is a value which is not used in any function or member, then it is compiled into a local variable of a constructor; otherwise, it is compiled into a field of the class. The do expression executes the initialized code. As any extra constructors always call the primary constructor, let and do bindings always execute, irrespective of which constructor is called. Fields that are created by let bindings can be accessed through the methods and properties of the class, though they cannot be accessed from static methods, even if the static methods take an instance variable as a parameter: type Student(name) as self = let data = name do self.PrintMessage() member this.PrintMessage() = printf " Student name is %s" data Generic type parameters F# also supports a generic parameter type. We can specify multiple generic type parameters separated by a comma. The syntax of a generic parameter declaration is as follows: type MyGenericClassExample<'a> (x: 'a) = do printfn "%A" x The type of the parameter infers where it is used. In the following code, we call the MyGenericClassExample method and pass a sequence of tuples, so here the parameter type became a sequence of tuples: let g1 = MyGenericClassExample( seq { for i in 1 .. 10 -> (i, i*i) } ) Properties Values related to an object are represented by properties. In object-oriented programming, properties represent data associated with an instance of an object. The following snippet shows two types of property syntax: // Property that has both get and set defined. [ attributes ] [ static ] member [accessibility-modifier] [self- identifier.]PropertyName with [accessibility-modifier] get() = get-function-body and [accessibility-modifier] set parameter = set-function-body // Alternative syntax for a property that has get and set. [ attributes-for-get ] [ static ] member [accessibility-modifier-for-get] [self-identifier.]PropertyName = get-function-body [ attributes-for-set ] [ static ] member [accessibility-modifier-for-set] [self- identifier.]PropertyName with set parameter = set-function-body There are two kinds of property declaration: Explicitly specify the value: We should use the explicit way to implement the property if it has non-trivial implementation. We should use a member keyword for the explicit property declaration. Automatically generate the value: We should use this when the property is just a simple wrapper for a value. There are many ways of implementing an explicit property syntax based on need: Read-only: Only the get() method Write-only: Only the set() method Read/write: Both get() and set() methods An example is shown as follows: // A read-only property. member this.MyReadOnlyProperty = myInternalValue // A write-only property. 
member this.MyWriteOnlyProperty with set (value) = myInternalValue <- value // A read-write property. member this.MyReadWriteProperty with get () = myInternalValue and set (value) = myInternalValue <- value Backing stores are private values that contain data for properties. The keyword, member val instructs the compiler to create backing stores automatically and then gives an expression to initialize the property. The F# language supports immutable types, but if we want to make a property mutable, we should use get and set. As shown in the following example, the MyClassExample class has two properties: propExample1 is read-only and is initialized to the argument provided to the primary constructor, and propExample2 is a settable property initialized with a string value ".Net Core 2.0": type MyClassExample(propExample1 : int) = member val propExample1 = property1 member val propExample2 = ".Net Core 2.0" with get, set Automatically implemented properties don't work efficiently with some libraries, for example, Entity Framework. In these cases, we should use explicit properties. Static and instance properties We can further categorize properties as static or instance properties. Static, as the name suggests, can be invoked without any instance. The self-identifier is neglected by the static property while it is necessary for the instance property. The following is an example of the static property: static member MyStaticProperty with get() = myStaticValue and set(value) = myStaticValue <- value Abstract properties Abstract properties have no implementation and are fully abstract. They can be virtual. It should not be private and if one accessor is abstract all others must be abstract. The following is an example of the abstract property and how to use it: // Abstract property in abstract class. // The property is an int type that has a get and // set method [<AbstractClass>] type AbstractBase() = abstract Property1 : int with get, set // Implementation of the abstract property type Derived1() = inherit AbstractBase() let mutable value = 10 override this.Property1 with get() = value and set(v : int) = value <- v // A type with a "virtual" property. type Base1() = let mutable value = 10 abstract Property1 : int with get, set default this.Property1 with get() = value and set(v : int) = value <- v // A derived type that overrides the virtual property type Derived2() = inherit Base1() let mutable value2 = 11 override this.Property1 with get() = value2 and set(v) = value2 <- v Inheritance and casts In F#, the inherit keyword is used while declaring a class. The following is the syntax: type MyDerived(...) = inherit MyBase(...) In a derived class, we can access all methods and members of the base class, but it should not be a private member. To refer to base class instances in the F# language, the base keyword is used. Virtual methods and overrides  In F#, the abstract keyword is used to declare a virtual member. So, here we can write a complete definition of the member as we use abstract for virtual. F# is not similar to other .NET languages. Let's have a look at the following example: type MyClassExampleBase() = let mutable x = 0 abstract member virtualMethodExample : int -> int default u. virtualMethodExample (a : int) = x <- x + a; x type MyClassExampleDerived() = inherit MyClassExampleBase () override u. 
virtualMethodExample (a: int) = a + 1 In the previous example, we declared a virtual method, virtualMethodExample, in a base class, MyClassExampleBase, and overrode it in a derived class, MyClassExampleDerived. Constructors and inheritance An inherited class constructor must be called in a derived class. If a base class constructor contains some arguments, then it takes parameters of the derived class as input. In the following example, we will see how derived class arguments are passed in the base class constructor with inheritance: type MyClassBase2(x: int) = let mutable z = x * x do for i in 1..z do printf "%d " i type MyClassDerived2(y: int) = inherit MyClassBase2(y * 2) do for i in 1..y do printf "%d " i If a class has multiple constructors, such as new(str) or new(), and this class is inherited in a derived class, we can use a base class constructor to assign values. For example, DerivedClass, which inherits BaseClass, has new(str1,str2), and in place of the first string, we pass inherit BaseClass(str1). Similarly, for blank, we wrote inherit BaseClass(). Let's explore the following example in more detail: type BaseClass = val string1 : string new (str) = { string1 = str } new () = { string1 = "" } type DerivedClass = inherit BaseClass val string2 : string new (str1, str2) = { inherit BaseClass(str1); string2 = str2 } new (str2) = { inherit BaseClass(); string2 = str2 } let obj1 = DerivedClass("A", "B") let obj2 = DerivedClass("A") Functions and lambda expressions A lambda expression is one kind of anonymous function, which means it doesn't have a name attached to it. But if we want to create a function which can be called, we can use the fun keyword with a lambda expression. We can pass the kind parameter in the lambda function, which is created using the fun keyword. This function is quite similar to a normal F# function. Let's see a normal F# function and a lambda function: // Normal F# function let addNumbers a b = a+b // Evaluating values let sumResult = addNumbers 5 6 // Lambda function and evaluating values let sumResult = (fun (a:int) (b:int) -> a+b) 5 6 // Both the function will return value sumResult = 11 Handling data – tuples, lists, record types, and data manipulation F# supports many kind data types, for example: Primitive types: bool, int, float, string values. Aggregate type: class, struct, union, record, and enum Array: int[], int[ , ], and float[ , , ] Tuple: type1 * type2 * like (a,1,2,true) type is—char * int * int * bool Generic: list<’x>, dictionary < ’key, ’value> In an F# function, we can pass one tuple instead parameters of multiple parameters of different types. Declaration of a tuple is very simple and we can assign values of a tuple to different variables, for example: let tuple1 = 1,2,3 // assigning values to variables , v1=1, v2= 2, v3=3 let v1,v2,v3 = tuple1 // if we want to assign only two values out of three, use “_” to skip the value. Assigned values: v1=1, //v3=3 let v1,_,v3 = tuple In the preceding examples, we saw that tuple supports pattern matching. These are option types and an option type in F# supports the idea that the value may or not be present at runtime. List List is a generic type implementation. An F# list is similar to a linked list implementation in any other functional language. It has a special opening and closing bracket construct, a short form of the standard empty list ([ ]) syntax: let empty = [] // This is an empty list of untyped type or we can say //generic type. 
Here type is: 'a list let intList = [10;20;30;40] // this is an integer type list The cons operator is used to prepend an item to a list using a double colon cons(prepend,::). To append another list to one list, we use the append operator—@: // prepend item x into a list let addItem xs x = x :: xs let newIntList = addItem intList 50 // add item 50 in above list //“intlist”, final result would be- [50;10;20;30;40] // using @ to append two list printfn "%A" (["hi"; "team"] @ ["how";"are";"you"]) // result – ["hi"; "team"; "how";"are";"you"] Lists are decomposable using pattern matching into a head and a tail part, where the head is the first item in the list and the tail part is the remaining list, for example: printfn "%A" newIntList.Head printfn "%A" newIntList.Tail printfn "%A" newIntList.Tail.Tail.Head let rec listLength (l: 'a list) = if l.IsEmpty then 0 else 1 + (listLength l.Tail) printfn "%d" (listLength newIntList) Record type The class, struct, union, record, and enum types come under aggregate types. The record type is one of them, it can have n number of members of any individual type. Record type members are by default immutable but we can make them mutable. In general, a record type uses the members as an immutable data type. There is no way to execute logic during instantiation as a record type don't have constructors. A record type also supports match expression, depending on the values inside those records, and they can also again decompose those values for individual handling, for example: type Box = {width: float ; height:int } let giftbox = {width = 6.2 ; height = 3 } In the previous example, we declared a Box with float a value width and an integer height. When we declare giftbox, the compiler automatically detects its type as Box by matching the value types. We can also specify type like this: let giftbox = {Box.width = 6.2 ; Box.height = 3 } or let giftbox : Box = {width = 6.2 ; height = 3 } This kind of type declaration is used when we have the same type of fields or field type declared in more than one type. This declaration is called a record expression. Object-oriented programming in F# F# also supports implementation inheritance, the creation of object, and interface instances. In F#, constructed types are fully compatible .NET classes which support one or more constructors. We can implement a do block with code logic, which can run at the time of class instance creation. The constructed type supports inheritance for class hierarchy creation. We use the inherit keyword to inherit a class. If the member doesn't have implementation, we can use the abstract keyword for declaration. We need to use the abstractClass attribute on the class to inform the compiler that it is abstract. If the abstractClass attribute is not used and type has all abstract members, the F# compiler automatically creates an interface type. Interface is automatically inferred by the compiler as shown in the following screenshot: The override keyword is used to override the base class implementation; to use the base class implementation of the same member, we use the base keyword. In F#, interfaces can be inherited from another interface. In a class, if we use the construct interface, we have to implement all the members in the interface in that class, as well. In general, it is not possible to use interface members from outside the class instance, unless we upcast the instance type to the required interface type. 
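Building on the Box record above, here is a hedged sketch of decomposing a record with a match expression; the describeBox function is illustrative:

  type Box = { width: float; height: int }

  let describeBox box =
      match box with
      | { height = 0 } -> "A flat box"
      | { width = w; height = h } -> sprintf "A box of %.1f x %d" w h

  let giftbox = { width = 6.2; height = 3 }
  printfn "%s" (describeBox giftbox) // A box of 6.2 x 3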
To create an instance of a class or interface, the object expression syntax is used. We need to override virtual members when creating a class instance, and we need to provide member implementations when instantiating an interface:

type IExampleInterface =
    abstract member IntValue: int with get
    abstract member HelloString: unit -> string

type PrintValues() =
    interface IExampleInterface with
        member x.IntValue = 15
        member x.HelloString() = sprintf "Hello friends %d" (x :> IExampleInterface).IntValue

let example =
    let varValue = PrintValues() :> IExampleInterface
    { new IExampleInterface with
        member x.IntValue = varValue.IntValue
        member x.HelloString() = sprintf "<b>%s</b>" (varValue.HelloString()) }

printfn "%A" (example.HelloString())

Exception handling

The exception keyword is used to create a custom exception in F#; these exceptions adhere to Microsoft best practices, such as supplying constructors, serialization support, and so on. The raise keyword is used to throw an exception. Apart from this, F# has some helper functions: failwith throws a failure exception at F# runtime, while invalidOp and invalidArg throw the standard .NET invalid operation and invalid argument exceptions, respectively.

try/with is used to catch an exception. If an exception can occur while evaluating an expression, the try/with expression can be used on the right-hand side of a value binding, so that a fallback value is assigned instead. try/with also supports pattern matching, to check for an individual exception type and extract items from it. try/finally guarantees that the finally block runs however the code block completes. Let's take an example of declaring and using a custom exception:

exception MyCustomExceptionExample of int * string

raise (MyCustomExceptionExample(10, "Error!"))

In the previous example, we created a custom exception called MyCustomExceptionExample using the exception keyword, with the value fields we want it to carry. Then we used the raise keyword to raise the exception, passing the values we want to display when the exception is thrown. However, when we run this code, we don't get our custom message in the error output; only the standard exception message is displayed.

Because the default exception message doesn't contain the message that we passed, in order to display our custom error message we need to override the standard Message property on the exception type. We use a pattern matching assignment to get the two values, and upcast the actual type, due to the internal representation of the exception object. If we run this program again, we will get the custom message in the exception:

exception MyCustomExceptionExample of int * string
  with
    override x.Message =
      let (MyCustomExceptionExample(i, s)) = upcast x
      sprintf "Int: %d Str: %s" i s

raise (MyCustomExceptionExample(20, "MyCustomErrorMessage!"))

This time, the custom message with the integer and string values is included in the output. We can also use the helper function failwith to raise a failure exception; it includes our message as the error message, as follows:

failwith "An error has occurred"

An example of the invalidArg helper function follows. In this factorial function, we are checking that the value of x is greater than zero. For cases where x is less than 0, we call invalidArg, passing x as the name of the invalid parameter, along with an error message saying the value should be greater than zero. The invalidArg helper function throws an ArgumentException from the standard System namespace in .NET:

let rec factorial x =
    if x < 0 then invalidArg "x" "Value should be greater than zero"
    match x with
    | 0 -> 1
    | _ -> x * (factorial (x - 1))
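The excerpt describes try/with without showing it in action, so here is a small sketch of our own that catches the custom exception defined above and falls back to a default value:

let safeDivide x y =
    try
        if y = 0 then raise (MyCustomExceptionExample(y, "Divisor must not be zero"))
        x / y
    with
    | MyCustomExceptionExample(i, s) ->
        // pattern matching extracts the values carried by the custom exception
        printfn "Caught custom exception: %d %s" i s
        0
    | :? System.ArgumentException as ex ->
        printfn "Invalid argument: %s" ex.Message
        0

printfn "%d" (safeDivide 10 2) // 5
printfn "%d" (safeDivide 10 0) // 0, after printing the caught-exception message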
To summarize, we discussed functional programming and its features, such as higher-order functions, purity, and lazy evaluation, as well as how to write functions and lambda expressions in F#, exception handling, and so on.

You enjoyed an excerpt from a book written by Rishabh Verma and Neha Shrivastava, titled .NET Core 2.0 By Example. This book gives a detailed walkthrough of functional programming with F# and .NET Core from scratch.

What is functional reactive programming?
Functional Programming in C#


Unit Testing in .NET Core with Visual Studio 2017 for better code quality

Kunal Chaudhari
07 May 2018
12 min read
The famous Java programmer, Bruce Eckel, came up with a slogan which highlights the importance of testing software: If it ain't tested, it's broken. Though a confident programmer may challenge this, it beautifully highlights the ability to determine that code works as expected over and over again, without any exception. How do we know that the code we are shipping to the end user or customer is of good quality and all the user requirements would work? By testing? Yes, by testing, we can be confident that the software works as per customer requirements and specifications. If there are any discrepancies between expected and actual behavior, it is referred to as a bug/defect in the software. The earlier the discrepancies are caught, the more easily they can be fixed before the software is shipped, and the results are good quality. No wonder software testers are also referred to as quality control analysts in various software firms. The mantra for a good software tester is: In God we trust, the rest we test. In this article, we will understand the testing deployment model of .NET Core applications, the Live Unit Testing feature of Visual Studio 2017. We will look at types of testing methods briefly and write our unit tests, which a software developer must write after writing any program. Software testing is conducted at various levels: Unit testing: While coding, the developer conducts tests on a unit of a program to validate that the code they have written is error-free. We will write a few unit tests shortly. Integration testing: In a team where a number of developers are working, there may be different components that the developers are working on. Even if all developers perform unit testing and ensure that their units are working fine, there is still a need to ensure that, upon integration of these components, they work without any error. This is achieved through integration testing. System testing: The entire software product is tested as a whole. This is accomplished using one or more of the following: Functionality testing: Test all the functionality of the software against the business requirement document. Performance testing: To test how performant the software is. It tests the average time, resource utilization, and so on, taken by the software to complete a desired business use case. This is done by means of load testing and stress testing, where the software is put under high user and data load. Security testing: Tests how secure the software is against common and well-known security threats. Accessibility testing: Tests if the user interface is accessible and user-friendly to specially-abled people or not. User acceptance testing: When the software is ready to be handed over to the customer, it goes through a round of testing by the customer for user interaction and response. Regression testing: Whenever a piece of code is added/updated in the software to add a new functionality or fix an existing functionality, it is tested to detect if there are any side-effects from the newly added/updated code. Of all these different types of testing, we will focus on unit testing, as it is done by the developer while coding the functionality. Unit testing .NET Core has been designed with testability in mind. .NET Core 2.0 has unit test project templates for VB, F#, and C#. We can also pick the testing framework of our choice amongst xUnit, NUnit, and MSTest. Unit tests that test single programming parts are the most minimal-level tests. 
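For orientation before the full example later in this article, the smallest possible xUnit test is just a public method decorated with the [Fact] attribute that makes an assertion. The Calculator class below is a hypothetical stand-in, not part of the Let's Chat project used later:

using Xunit;

public class Calculator
{
    public int Add(int a, int b) => a + b;
}

public class CalculatorTest
{
    [Fact]
    public void Add_TwoNumbers_ReturnsSum()
    {
        // Arrange
        var calculator = new Calculator();

        // Act
        var result = calculator.Add(2, 3);

        // Assert
        Assert.Equal(5, result);
    }
}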
Unit tests should just test code inside the developer's control, and ought to not test infrastructure concerns, for example, databases, filesystems, or network resources. Unit tests might be composed utilizing test-driven development (TDD) or they can be added to existing code to affirm its accuracy. The naming convention of Test class names should end with Test and reside in the same namespace as the class being tested. For instance, the unit tests for the Microsoft.Example.AspNetCore class would be in the Microsoft.Example.AspNetCoreTest class in the test assembly. Also, unit test method names must be descriptive about what is being tested, under what conditions, and what the expectations are. A good unit test has three main parts to it in the following specified order: Arrange Act Assert We first arrange the code and then act on it and then do a series of asserts to check if the actual output matches the expected output. Let's have a look at them in detail: Arrange: All the parameter building, and method invocations needed for making a call in the act section must be declared in the arrange section. Act: The act stage should be one statement and as simple as possible. This one statement should be a call to the method that we are trying to test. Assert: The only reason method invocation may fail is if the method itself throws an exception, else, there should always be some state change or output from any meaningful method invocation. When we write the act statement, we anticipate an output and then do assertions if the actual output is the same as expected. If the method under test should throw an exception under normal circumstances, we can do assertions on the type of exception and the error message that should be thrown. We should be watchful while writing unit test cases, that we don't inject any dependencies on the infrastructure. Infrastructure dependencies should be taken care of in integration test cases, not in unit tests. We can maintain a strategic distance from these shrouded dependencies in our application code by following the Explicit Dependencies Principle and utilizing Dependency Injection to request our dependencies on the framework. We can likewise keep our unit tests in a different project from our integration tests and guarantee our unit test project doesn't have references to the framework. Testing using xUnit In this section, we will learn to write unit and integration tests for our controllers. There are a number of options available to us for choosing the test framework. We will use xUnit for all our unit tests and Moq for mocking objects. Let's create an xUnit test project by doing the following: Open the Let's Chat project in Visual Studio 2017 Create a new folder named  Test Right-click the Test folder and click Add | New Project Select xUnit Test Project (.NET Core) under Visual C# project templates, as shown here: Delete the default test class that gets created with the template Create a test class inside this project AuthenticationControllerUnitTests for the unit test We need to add some NuGet packages. Right-click the project in VS 2017 to edit the project file and add the references manually, or use the NuGet Package Manager to add these packages: // This package contains dependencies to ASP.NET Core <PackageReference Include="Microsoft.AspNetCore.All" Version="2.0.0" /> // This package is useful for the integration testing, to build a test host for the project to test. 
<PackageReference Include="Microsoft.AspNetCore.TestHost" Version="2.0.0" /> // Moq is used to create fake objects <PackageReference Include="Moq" Version="4.7.63" /> With this, we are now ready to write our unit tests. Let's start doing this, but before we do that, here's some quick theory about xUnit and Moq. The documentation from the xUnit website and Wikipedia tells us that xUnit.net is a free, open source, community-focused unit testing tool for the .NET Framework. It is the latest technology for unit testing C#, F#, Visual Basic .NET, and other .NET languages. All xUnit frameworks share the following basic component architecture: Test runner: It is an executable program that runs tests implemented using an xUnit framework and reports the test results. Test case: It is the most elementary class. All unit tests are inherited from here. Test fixtures: Test fixures (also known as a test context) are the set of preconditions or state needed to run a test. The developer should set up a known good state before the tests, and return to the original state after the tests. Test suites: It is a set of tests that all share the same fixture. The order of the tests shouldn't matter. xUnit.net includes support for two different major types of unit test: Facts: Tests which are always true. They test invariant conditions, that is, data-independent tests. Theories: Tests which are only true for a particular set of data. Moq is a mocking framework for C#/.NET. It is used in unit testing to isolate the class under test from its dependencies and ensure that the proper methods on the dependent objects are being called. Recall that in unit tests, we only test a unit or a layer/part of the software in isolation and hence do not bother about external dependencies, so we assume they work fine and just mock them using the mocking framework of our choice. Let's put this theory into action by writing a unit test for the following action in AuthenticationController: public class AuthenticationController : Controller { private readonly ILogger<AuthenticationController> logger; public AuthenticationController(ILogger<AuthenticationController> logger) { this.logger = logger; } [Route("signin")] public IActionResult SignIn() { logger.LogInformation($"Calling {nameof(this.SignIn)}"); return Challenge(new AuthenticationProperties { RedirectUri = "/" }); } } The unit test code depends on how the method to be tested is written. To understand this, let's write a unit test for a SignIn action. To test the SignIn method, we need to invoke the SignIn action in AuthenticationController. To do so, we need an instance of the AuthenticationController class, on which the SignIn action can be invoked. To create the instance of AuthenticationController, we need a logger object, as the AuthenticationController constructor expects it as a parameter. Since we are only testing the SignIn action, we do not bother about the logger and so we can mock it. Let's do it: /// <summary> /// Authentication Controller Unit Test - Notice the naming convention {ControllerName}Test /// </summary> public class AuthenticationControllerTest { /// <summary> /// Mock the dependency needed to initialize the controller. /// </summary> private Mock<ILogger<AuthenticationController>> mockedLogger = new Mock<ILogger<AuthenticationController>>(); /// <summary> /// Tests the SignIn action. /// </summary> [Fact] public void SignIn_Pass_Test() { // Arrange - Initialize the controller. Notice the mocked logger object passed as the parameter. 
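// Note (added for clarity, not in the original listing): mockedLogger.Object is the fake
// ILogger<AuthenticationController> instance that Moq generates, so the controller can be
// constructed without any real logging infrastructure.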
var controller = new AuthenticationController(mockedLogger.Object); // Act - Invoke the method to be tested. var actionResult = controller.SignIn(); // Assert - Make assertions if actual output is same as expected output. Assert.NotNull(actionResult); Assert.IsType<ChallengeResult>(actionResult); Assert.Equal(((ChallengeResult)actionResult). Properties.Items.Count, 1); } } Reading the comments would explain the unit test code. The previous example shows how easy it is to write a unit test. Agreed, depending on the method to be tested, things can get complicated. But it is likely to be around mocking the objects and, with some experience on the mocking framework and binging around, mocking should not be a difficult task. The unit test for the SignOut action would be a bit complicated in terms of mocking as it uses HttpContext. The unit test for the SignOut action is left to the reader as an exercise. Let's explore a new feature introduced in Visual Studio 2017 called Live Unit Testing. Live Unit Testing It may disappoint you but Live Unit Testing (LUT) is available only in the Visual Studio 2017 Enterprise edition and not in the Community edition. What is Live Unit Testing? It's a new productivity feature, introduced in the Visual Studio 2017 Enterprise edition, that provides real-time feedback directly in the Visual Studio editor on how code changes are impacting unit tests and code coverage. All this happens live, while you write the code and hence it is called Live Unit Testing. This will help in maintaining the quality by keeping tests passing as changes are made. It will also remind us when we need to write additional unit tests, as we are making bug fixes or adding features. To start Live Unit Testing: Go to the Test menu item Click Live Unit Testing Click Start, as shown here: On clicking this, your CPU usage may go higher as Visual Studio spawns the MSBuild and tests runner processes in the background. In a short while, the editor will display the code coverage of the individual lines of code that are covered by the unit test. The following image displays the lines of code in AuthenticationController that are covered by the unit test. On clicking the right icon, it displays the tests covering this line of code and also provides the option to run and debug the test: Similarly, if we open the test file, it will show the indicator there as well. Super cool, right! If we navigate to Test|Live Unit Testing now, we would see the options to Stop and Pause. So, in case we wish to save  our resources after getting the data once, we can pause or stop Live Unit Testing: There are numerous icons which indicates the code coverage status of individual lines of code. These are: Red cross: Indicates that the line is covered by at least one failing test Green check mark: Indicates that the line is covered by only passing tests Blue dash: Indicates that the line is not covered by any test If you see a clock-like icon just below any of these icons, it indicates that the data is not up to date. With this productivity-enhancing feature, we conclude our discussion on basic unit testing. Next, we will learn about containers and how we can do the deployment and testing of our .NET Core 2.0 applications in containers. To summarize, we learned the importance of testing and how we can write unit tests using Moq and xUnit. We saw a new productivity-enhancing feature introduced in Visual Studio 2017 Enterprise edition called Live Unit Testing and how it helps us write better-quality code. 
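Before moving on, the Facts versus Theories distinction mentioned earlier is worth one more illustration. This hypothetical data-driven test (not from the Let's Chat project) shows how [Theory] with [InlineData] runs the same test body against several inputs:

using Xunit;

public class MathTheoryTest
{
    [Theory]
    [InlineData(2, 3, 5)]
    [InlineData(-1, 1, 0)]
    [InlineData(0, 0, 0)]
    public void Add_VariousInputs_ReturnsExpectedSum(int a, int b, int expected)
    {
        // Act
        var result = a + b;

        // Assert
        Assert.Equal(expected, result);
    }
}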
You read an excerpt from a book written by Rishabh Verma and Neha Shrivastava, titled .NET Core 2.0 By Example. This book will help you build cross-platform solutions with .NET Core 2.0 through real-life scenarios.

More on Testing:
Unit Testing and End-To-End Testing
Testing RESTful Web Services with Postman
Selenium and data-driven testing: Interview insights

Building a two-way interactive chatbot with Twilio: A step-by-step guide

Gebin George
04 May 2018
4 min read
To build a chatbot that can communicate both ways, we need to do two things: build the chatbot into the web app and modify the setup configuration in Twilio. To do these, follow these steps:

Create an index.js file in the root directory of the project.

Install the express and body-parser libraries. These libraries will be used to make a web app:

npm install body-parser --save
npm install express --save

Create a web app in index.js:

// Two-way SMS Bot
const express = require('express')
const bodyParser = require('body-parser')
const twilio = require('twilio')

const app = express()
app.set('port', (process.env.PORT || 5000))

// Process application/x-www-form-urlencoded
app.use(bodyParser.urlencoded({extended: false}))

// Process application/json
app.use(bodyParser.json())

// Spin up the server
app.listen(app.get('port'), function() {
    console.log('running on port', app.get('port'))
})

// Index route
app.get('/', function (req, res) {
    res.send('Hello world, I am SMS bot.')
})

// Twilio webhook
app.post('/sms/', function (req, res) {
    var botSays = 'You said: ' + req.body.Body;
    var twiml = new twilio.TwimlResponse();
    twiml.message(botSays);
    res.writeHead(200, {'Content-Type': 'text/xml'});
    res.end(twiml.toString());
})

The preceding code creates a web app that looks for incoming messages from users and responds to them. The response is currently to repeat what the user has said. Push it onto the cloud:

git add .
git commit -m webapp
git push heroku master

Now we have a web app on the cloud at https://sms-notification-bot.herokuapp.com/sms/ that can be called when an incoming SMS message arrives. This app will generate an appropriate chatbot response to the incoming message.

Go to the Twilio Programmable SMS Dashboard page at https://www.twilio.com/console/sms/dashboard.

Select Messaging Services on the menu and click Create new Messaging Service. Give it a name and select Chat Bot/Interactive 2-Way as the use case. This will take you to the Configure page with a newly-assigned service ID.

Under Inbound Settings, specify the URL of the web app we have created in the REQUEST URL field (that is, https://sms-notification-bot.herokuapp.com/sms/). Now all the inbound messages will be routed to this web app.

Go back to the SMS console page at https://www.twilio.com/console/sms/services. Here you will notice your new messaging service listed along with the inbound request URL.

Click the service to attach a number to it. You can either add a new number, in which case you need to buy one, or choose a number you already have. We already have one sending notifications that can be reused. Click Add an Existing Number, select the number by checking the box on the right, and click Add Selected. Once added, it will be listed on the Numbers page.

In Advanced settings, we can add multiple numbers for serving different geographic regions and have them respond as if the chatbot is responding over a local number.

The final step is to try sending an SMS message to the number and receiving a response. Send a message using any SMS app on your phone and observe the response.

Congratulations! You now have a two-way interactive chatbot.

This tutorial is an excerpt from the book Hands-On Chatbots and Conversational UI Development, written by Srini Janarthanam. If you found our post useful, do check out this book to get real-world examples of voice-enabled UIs for personal and home assistance.
How to build a basic server side chatbot using Go
Build a generative chatbot using recurrent neural networks (LSTM RNNs)
Top 4 chatbot development frameworks for developers


Build a foodie bot with JavaScript

Gebin George
03 May 2018
7 min read
Today, we are going to build a chatbot that can search for restaurants based on user goals and preferences. This tutorial has been taken from Hands-On Chatbots and Conversational UI Development.

Let us begin by building Node.js modules to get data from Zomato based on user preferences. Create a file called zomato.js. Add the request module to the Node.js libraries using the following command in the console:

> npm install request --save

In zomato.js, add the following code to begin with:

var request = require('request');

var baseURL = 'https://developers.zomato.com/api/v2.1/';
var apiKey = 'YOUR_API_KEY';
var categories = null;
var cuisines = null;

getCategories();
getCuisines(76);

Replace YOUR_API_KEY with your Zomato key. Let's build functions to get the list of categories and cuisines at startup. These queries need not be run when the user asks for a restaurant search, because this information is pretty much static:

function getCuisines(cityId){
    var options = {
        uri: baseURL + 'cuisines',
        headers: { 'user-key': apiKey },
        qs: {'city_id': cityId},
        method: 'GET'
    }
    var callback = function(error, response, body) {
        if (error) {
            console.log('Error sending messages: ', error)
        } else if (response.body.error) {
            console.log('Error: ', response.body.error)
        } else {
            console.log(body);
            cuisines = JSON.parse(body).cuisines;
        }
    }
    request(options, callback);
}

The preceding code will fetch the list of cuisines available in a particular city (identified by a Zomato city ID). Let us add the code for identifying the list of categories:

function getCategories(){
    var options = {
        uri: baseURL + 'categories',
        headers: { 'user-key': apiKey },
        qs: {},
        method: 'GET'
    }
    var callback = function(error, response, body) {
        if (error) {
            console.log('Error sending messages: ', error)
        } else if (response.body.error) {
            console.log('Error: ', response.body.error)
        } else {
            categories = JSON.parse(body).categories;
        }
    }
    request(options, callback);
}

Now that we have the basic functions out of the way, let us code the restaurant search:

function getRestaurant(cuisine, location, category){
    var cuisineId = getCuisineId(cuisine);
    var categoryId = getCategoryId(category);
    var options = {
        uri: baseURL + 'locations',
        headers: { 'user-key': apiKey },
        qs: {'query': location},
        method: 'GET'
    }
    var callback = function(error, response, body) {
        if (error) {
            console.log('Error sending messages: ', error)
        } else if (response.body.error) {
            console.log('Error: ', response.body.error)
        } else {
            console.log(body);
            locationInfo = JSON.parse(body).location_suggestions;
            search(locationInfo[0], cuisineId, categoryId);
        }
    }
    request(options, callback);
}

function search(location, cuisineId, categoryId){
    var options = {
        uri: baseURL + 'search',
        headers: { 'user-key': apiKey },
        qs: {'entity_id': location.entity_id,
             'entity_type': location.entity_type,
             'cuisines': [cuisineId],
             'categories': [categoryId]},
        method: 'GET'
    }
    var callback = function(error, response, body) {
        if (error) {
            console.log('Error sending messages: ', error)
        } else if (response.body.error) {
            console.log('Error: ', response.body.error)
        } else {
            console.log('Found restaurants:')
            var results = JSON.parse(body).restaurants;
            console.log(results);
        }
    }
    request(options, callback);
}

The preceding code will look for restaurants in a given location, cuisine, and category. For instance, you can search for a list of Indian restaurants in Newington, Edinburgh that do delivery. Note that getRestaurant() relies on two small helper functions, getCuisineId() and getCategoryId(), which are not shown in this excerpt; a possible sketch of them follows below.
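These helpers are not part of the excerpt, so the following is only a plausible sketch. It assumes the cuisines and categories arrays populated at startup follow the usual Zomato API response shape (cuisine.cuisine_id/cuisine.cuisine_name and categories.id/categories.name):

// Hypothetical helpers: look up Zomato IDs from the lists fetched at startup.
function getCuisineId(cuisineName){
    if (!cuisines) return null; // lists may not be loaded yet
    for (var i = 0; i < cuisines.length; i++){
        var c = cuisines[i].cuisine;
        if (c.cuisine_name.toLowerCase() === cuisineName.toLowerCase()){
            return c.cuisine_id;
        }
    }
    return null;
}

function getCategoryId(categoryName){
    if (!categories) return null;
    for (var i = 0; i < categories.length; i++){
        var c = categories[i].categories;
        if (c.name.toLowerCase() === categoryName.toLowerCase()){
            return c.id;
        }
    }
    return null;
}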
We now need to integrate this with the chatbot code. Let us create a separate file called index.js and begin with the basics:

var restify = require('restify');
var builder = require('botbuilder');
var request = require('request');

var baseURL = 'https://developers.zomato.com/api/v2.1/';
var apiKey = 'YOUR_API_KEY';
var categories = null;
var cuisines = null;

getCategories();
//setTimeout(function(){getCategoryId('Delivery')}, 10000);
getCuisines(76);
//setTimeout(function(){getCuisineId('European')}, 10000);

// Setup Restify Server
var server = restify.createServer();
server.listen(process.env.port || process.env.PORT || 3978, function () {
    console.log('%s listening to %s', server.name, server.url);
});

// Create chat connector for communicating with
// the Bot Framework Service
var connector = new builder.ChatConnector({
    appId: process.env.MICROSOFT_APP_ID,
    appPassword: process.env.MICROSOFT_APP_PASSWORD
});

// Listen for messages from users
server.post('/foodiebot', connector.listen());

Add the bot dialog code to carry out the restaurant search. Let us design the bot to ask for cuisine, category, and location before proceeding to the restaurant search:

var bot = new builder.UniversalBot(connector, [
    function (session) {
        session.send("Hi there! Hungry? Looking for a restaurant?");
        session.send("Say 'search restaurant' to start searching.");
        session.endDialog();
    }
]);

// Search for a restaurant
bot.dialog('searchRestaurant', [
    function (session) {
        session.send('Ok. Searching for a restaurant!');
        builder.Prompts.text(session, 'Where?');
    },
    function (session, results) {
        session.conversationData.searchLocation = results.response;
        builder.Prompts.text(session, 'Cuisine? Indian, Italian, or anything else?');
    },
    function (session, results) {
        session.conversationData.searchCuisine = results.response;
        builder.Prompts.text(session, 'Delivery or Dine-in?');
    },
    function (session, results) {
        session.conversationData.searchCategory = results.response;
        session.send('Ok. Looking for restaurants..');
        getRestaurant(session.conversationData.searchCuisine,
                      session.conversationData.searchLocation,
                      session.conversationData.searchCategory,
                      session);
    }
])
.triggerAction({
    matches: /^search restaurant$/i,
    confirmPrompt: 'Your restaurant search task will be abandoned. Are you sure?'
});

Notice that we are calling the getRestaurant() function with four parameters. Three of these are ones that we have already defined: cuisine, location, and category. To these, we have to add another: session. This passes the session pointer that can be used to send messages to the emulator when the data is ready.
Notice how this changes the getRestaurant() and search() functions:

function getRestaurant(cuisine, location, category, session){
    var cuisineId = getCuisineId(cuisine);
    var categoryId = getCategoryId(category);
    var options = {
        uri: baseURL + 'locations',
        headers: { 'user-key': apiKey },
        qs: {'query': location},
        method: 'GET'
    }
    var callback = function(error, response, body) {
        if (error) {
            console.log('Error sending messages: ', error)
        } else if (response.body.error) {
            console.log('Error: ', response.body.error)
        } else {
            console.log(body);
            locationInfo = JSON.parse(body).location_suggestions;
            search(locationInfo[0], cuisineId, categoryId, session);
        }
    }
    request(options, callback);
}

function search(location, cuisineId, categoryId, session){
    var options = {
        uri: baseURL + 'search',
        headers: { 'user-key': apiKey },
        qs: {'entity_id': location.entity_id,
             'entity_type': location.entity_type,
             'cuisines': [cuisineId],
             'category': categoryId},
        method: 'GET'
    }
    var callback = function(error, response, body) {
        if (error) {
            console.log('Error sending messages: ', error)
        } else if (response.body.error) {
            console.log('Error: ', response.body.error)
        } else {
            console.log('Found restaurants:')
            console.log(body);
            //var results = JSON.parse(body).restaurants;
            //console.log(results);
            var resultsCount = JSON.parse(body).results_found;
            console.log('Found:' + resultsCount);
            session.send('I have found ' + resultsCount + ' restaurants for you!');
            session.endDialog();
        }
    }
    request(options, callback);
}

Once the results are obtained, the bot responds using session.send() and ends the dialog.

Now that we have the results, let's present them in a more visually appealing way using cards. To do this, we need a function that can take the results of the search and turn them into an array of cards:

function presentInCards(session, results){
    var msg = new builder.Message(session);
    msg.attachmentLayout(builder.AttachmentLayout.carousel)
    var heroCardArray = [];
    var l = results.length;
    if (results.length > 10){
        l = 10;
    }
    for (var i = 0; i < l; i++){
        var r = results[i].restaurant;
        var herocard = new builder.HeroCard(session)
            .title(r.name)
            .subtitle(r.location.address)
            .text(r.user_rating.aggregate_rating)
            .images([builder.CardImage.create(session, r.thumb)])
            .buttons([
                builder.CardAction.imBack(session, "book_table:" + r.id, "Book a table")
            ]);
        heroCardArray.push(herocard);
    }
    msg.attachments(heroCardArray);
    return msg;
}

And we call this function from the search() function:

function search(location, cuisineId, categoryId, session){
    var options = {
        uri: baseURL + 'search',
        headers: { 'user-key': apiKey },
        qs: {'entity_id': location.entity_id,
             'entity_type': location.entity_type,
             'cuisines': [cuisineId],
             'category': categoryId},
        method: 'GET'
    }
    var callback = function(error, response, body) {
        if (error) {
            console.log('Error sending messages: ', error)
        } else if (response.body.error) {
            console.log('Error: ', response.body.error)
        } else {
            console.log('Found restaurants:')
            console.log(body);
            var results = JSON.parse(body).restaurants;
            var msg = presentInCards(session, results);
            session.send(msg);
            session.endDialog();
        }
    }
    request(options, callback);
}

We saw how to build a restaurant search bot that gives you restaurant suggestions as per your preferences. If you found our post useful, check out Hands-On Chatbots and Conversational UI Development.

Top 4 chatbot development frameworks for developers
How to create a conversational assistant using Python
My friend, the robot: Artificial Intelligence needs Emotional Intelligence