
Tech Guides

852 Articles

Docker isn't going anywhere

Savia Lobo
22 Jun 2018
5 min read
To create good software, developers often have to weave together the UI, frameworks, databases, libraries, and a whole collection of code modules. Together, these elements build an immersive user experience on the front end. However, deploying and testing software has become complex, because all of these elements must be set up correctly for the software to work. Here, containers are a great help: they let developers pack everything an app needs, including the code, libraries, and other dependencies, and ship it as a single package. Think of software as a puzzle; containers simply put all the pieces in their proper positions so the software functions effectively. Docker is one of the most popular container platforms.

The rise of Docker containers

Linux containers have been around for almost a decade. However, it was only after the release of Docker five years ago that developers widely started using containers, because Docker made them simple to use. Today containers, especially Docker containers, are in use everywhere, and this popularity seems set to stay. In our Packt Skill Up developer survey on top sysadmin and virtualization tools, almost 46% of respondents said they use Docker containers on a regular basis, ranking Docker third behind the Linux and Windows operating systems.

Source: Packt Skill Up survey 2018

Organizations such as Red Hat, Canonical, Microsoft, Oracle, and most other major IT companies and cloud businesses have adopted Docker. Docker is often confused with virtual machines; read our article on virtual machines vs containers to understand the differences between the two. VMs such as Hyper-V, KVM, and Xen are based on emulating hardware virtually, so they come with heavy system requirements. Docker containers, by contrast, share the host's OS and kernel. Docker is just right if you want to use minimal hardware to run multiple copies of your app at the same time, which in turn saves data centers huge annual costs in power and hardware. Docker containers boot within a fraction of a second, unlike virtual machines, which must load 10-20 GB of operating system data and therefore start far more slowly.

For CI/CD, Docker makes it easy to set up local development environments that replicate a live server, and to run multiple development environments with different software, OSes, and configurations, all from the same host. You can run test projects on new or different servers, and work on the same project with identical settings irrespective of the local host environment. Docker can also be deployed on the cloud, as it is designed to integrate with most DevOps platforms, including Puppet and Chef, and you can even manage standalone development environments with it.

Why developers love Docker

Docker brought novel ideas to the market, starting with making containers easy to use and deploy. In 2014, Docker announced that it was partnering with the major tech leaders Google, Red Hat, and Parallels on its open source component libcontainer, making libcontainer the de facto standard for Linux containers. Microsoft also announced that it would bring Docker-based containers to its Azure cloud. Docker has donated its container format and runtime, along with their specifications, to the Linux Foundation's Open Container Project.
This project includes the contents of the libcontainer project, nsinit, and all the other modifications needed for it to run independently of Docker. Docker's containerd runtime is also hosted by the Cloud Native Computing Foundation (CNCF).

A few reasons why so many developers prefer Docker:

- It has a great user experience and lets developers work in the programming language of their choice.
- It requires relatively little coding.
- It runs on any operating system: Windows, Linux, macOS, and so on.

The Docker Kubernetes combo

DevOps tools can be used to deploy and monitor Docker containers, but they are not highly optimized for this task. Containers need to be monitored individually, and their density makes this hard at scale. The practical solution is a container orchestration tool, and Kubernetes is one of the most dominant orchestration tools in the market. Since Kubernetes has a large community and a bigger share of the market, Docker made a smart move by including Kubernetes as one of its offerings. With this, Docker users and customers get not only Kubernetes' secure orchestration experience but also an end-to-end Docker experience.

Docker gives an A-to-Z experience to developers and system administrators. With Docker, developers can focus on writing code and forget about the rest of the deployment. They can also make use of programs designed to run on Docker in their own projects. System administrators, in turn, can reduce system overhead compared to VMs. Docker's portability and ease of installation save admins the time otherwise lost installing individual VM components. And with Google, Microsoft, Red Hat, and others absorbing Docker technology into their daily operations, Docker is surely not going anywhere soon. Docker's future is bright, and we can expect machine learning to be a part of it sooner rather than later.

Are containers the end of virtual machines?
How to ace managing the Endpoint Operations Management Agent with vROps
Atlassian open sources Escalator, a Kubernetes autoscaler project


4 operator overloading techniques in Kotlin you need to know

Aaron Lazar
21 Jun 2018
6 min read
Operator overloading is a form of polymorphism. Some operators change behavior on different types. The classic example is the plus operator (+): on numeric values, plus is a sum operation, and on String it is a concatenation. Operator overloading is a useful tool to give your API a natural surface. Let's say that we're writing a time and date library; it would be natural to have the plus and minus operators defined on time units. In this article, we'll understand how operator overloading works in Kotlin. This article has been extracted from the book Functional Kotlin, by Mario Arias and Rivu Chakraborty.

Kotlin lets you define the behavior of operators on your own or existing types with functions, normal or extension, marked with the operator modifier:

```kotlin
class Wolf(val name: String) {
    operator fun plus(wolf: Wolf) = Pack(mapOf(name to this, wolf.name to wolf))
}

class Pack(val members: Map<String, Wolf>)

fun main(args: Array<String>) {
    val talbot = Wolf("Talbot")
    val northPack: Pack = talbot + Wolf("Big Bertha") // talbot.plus(Wolf("..."))
}
```

The operator function plus returns a Pack value. To invoke it, you can use the infix operator way (Wolf + Wolf) or the normal way (Wolf.plus(Wolf)). Something to be aware of about operator overloading in Kotlin: the operators that you can override are limited; you can't create arbitrary operators.

Binary operators

Binary operators receive a parameter (there are exceptions to this rule: invoke and indexed access). The Pack.plus extension function receives a Wolf parameter and returns a new Pack. Note that MutableMap also has a plus (+) operator:

```kotlin
operator fun Pack.plus(wolf: Wolf) = Pack(this.members.toMutableMap() + (wolf.name to wolf))

val biggerPack = northPack + Wolf("Bad Wolf")
```

The following table shows all the binary operators that can be overloaded:

| Operator | Equivalent | Notes |
|---|---|---|
| x + y | x.plus(y) | |
| x - y | x.minus(y) | |
| x * y | x.times(y) | |
| x / y | x.div(y) | |
| x % y | x.rem(y) | From Kotlin 1.1; previously mod. |
| x..y | x.rangeTo(y) | |
| x in y | y.contains(x) | |
| x !in y | !y.contains(x) | |
| x += y | x.plusAssign(y) | Must return Unit. |
| x -= y | x.minusAssign(y) | Must return Unit. |
| x *= y | x.timesAssign(y) | Must return Unit. |
| x /= y | x.divAssign(y) | Must return Unit. |
| x %= y | x.remAssign(y) | From Kotlin 1.1; previously modAssign. Must return Unit. |
| x == y | x?.equals(y) ?: (y === null) | Checks for null. |
| x != y | !(x?.equals(y) ?: (y === null)) | Checks for null. |
| x < y | x.compareTo(y) < 0 | compareTo must return Int. |
| x > y | x.compareTo(y) > 0 | compareTo must return Int. |
| x <= y | x.compareTo(y) <= 0 | compareTo must return Int. |
| x >= y | x.compareTo(y) >= 0 | compareTo must return Int. |

Invoke

When we introduced lambda functions, we showed the definition of Function1:

```kotlin
/** A function that takes 1 argument. */
public interface Function1<in P1, out R> : Function<R> {
    /** Invokes the function with the specified argument. */
    public operator fun invoke(p1: P1): R
}
```

The invoke function is a curious operator: it can be called without a name. Here, the class Wolf has an invoke operator:

```kotlin
enum class WolfActions {
    SLEEP, WALK, BITE
}

class Wolf(val name: String) {
    operator fun invoke(action: WolfActions) = when (action) {
        WolfActions.SLEEP -> "$name is sleeping"
        WolfActions.WALK -> "$name is walking"
        WolfActions.BITE -> "$name is biting"
    }
}

fun main(args: Array<String>) {
    val talbot = Wolf("Talbot")
    talbot(WolfActions.SLEEP) // talbot.invoke(WolfActions.SLEEP)
}
```

That's why we can call a lambda function directly with parentheses; we are, indeed, calling the invoke operator.
The following table shows declarations of invoke with different numbers of arguments:

| Operator | Equivalent |
|---|---|
| x() | x.invoke() |
| x(y) | x.invoke(y) |
| x(y1, y2) | x.invoke(y1, y2) |
| x(y1, y2..., yN) | x.invoke(y1, y2..., yN) |

Indexed access

The indexed access operator provides the array-style read and write operations with square brackets ([]) that are used in languages with C-like syntax. In Kotlin, we use the get operator for reading and set for writing. With the Pack.get operator, we can use Pack as an array:

```kotlin
operator fun Pack.get(name: String) = members[name]!!

val badWolf = biggerPack["Bad Wolf"]
```

Most Kotlin data structures have a definition of the get operator; in this case, Map<K, V> returns a V?. The following table shows declarations of get with different numbers of arguments:

| Operator | Equivalent |
|---|---|
| x[y] | x.get(y) |
| x[y1, y2] | x.get(y1, y2) |
| x[y1, y2..., yN] | x.get(y1, y2..., yN) |

The set operator has similar syntax:

```kotlin
enum class WolfRelationships {
    FRIEND, SIBLING, ENEMY, PARTNER
}

operator fun Wolf.set(relationship: WolfRelationships, wolf: Wolf) {
    println("${wolf.name} is my new $relationship")
}

talbot[WolfRelationships.ENEMY] = badWolf
```

The get and set operators can contain any arbitrary code, but it is a very well-known and old convention that indexed access is used for reading and writing. When you write these operators (and, by the way, all the other operators too), use the principle of least surprise. Limiting the operators to their natural meaning in a specific domain makes them easier to use and read in the long run. The following table shows declarations of set with different numbers of arguments:

| Operator | Equivalent | Notes |
|---|---|---|
| x[y] = z | x.set(y, z) | Return value is ignored. |
| x[y1, y2] = z | x.set(y1, y2, z) | Return value is ignored. |
| x[y1, y2..., yN] = z | x.set(y1, y2..., yN, z) | Return value is ignored. |

Unary operators

Unary operators don't have parameters and act directly on the dispatcher. We can add a not operator to the Wolf class:

```kotlin
operator fun Wolf.not() = "$name is angry!!!"

!talbot // talbot.not()
```

The following table shows all the unary operators that can be overloaded:

| Operator | Equivalent | Notes |
|---|---|---|
| +x | x.unaryPlus() | |
| -x | x.unaryMinus() | |
| !x | x.not() | |
| x++ | x.inc() | Postfix. Must be called on a var; should return a type compatible with the dispatcher type and shouldn't mutate the dispatcher. |
| x-- | x.dec() | Postfix. Same constraints as inc(). |
| ++x | x.inc() | Prefix. Same constraints as the postfix form. |
| --x | x.dec() | Prefix. Same constraints as the postfix form. |

Postfix increment and decrement return the original value, and then change the variable to the operator's returned value. Prefix forms return the operator's returned value, and then change the variable to that value.

Now you know how operator overloading works in Kotlin. If you found this article interesting and would like to read more, head over to get the whole book, Functional Kotlin, by Mario Arias and Rivu Chakraborty.

Extension functions in Kotlin: everything you need to know
Building RESTful web services with Kotlin
Building chat application with Kotlin using Node.js, the powerful Server-side JavaScript platform


Top 14 Cryptocurrency Trading Bots - and one to forget

Guest Contributor
21 Jun 2018
9 min read
Men in rags have become millionaires, and the rich have bitten the dust within minutes, thanks to cryptocurrencies. According to research, over 1,500 cryptocurrencies are traded globally across more than 6 million wallets, proving that digital currency is here not just to stay but to rule. The rise and fall of the crypto market isn't hidden from anyone, but the catch is that cryptocurrency still sells like hot cakes. According to Bill Gates, "The future of money is digital currency."

With thousands of digital currencies rolling globally, crypto traders are immensely occupied, and this is where cryptocurrency trading bots come into play. They ease the currency trading and research process, so traders spend less effort and earn more money, not to mention the hours saved. According to Eric Schmidt, ex-CEO of Google, "Bitcoin is a remarkable cryptographic achievement and the ability to create something that is not duplicable in the digital world has enormous value."

The crucial question is whether a crypto trading bot is dependable and efficient enough to deliver optimum results within crunch time. To make sure you don't miss an opportunity to chip cash into your digital wallet, here are the top cryptocurrency trading bots, ranked by performance:

1. Gunbot

Gunbot is a crypto trading bot that boasts detailed settings and is fit for beginners as well as professionals. Along with supporting custom strategies, it comes with a "Reversal Trading" feature. It enables continuous trading and works with almost all the major exchanges (Binance, Bittrex, GDAX, Poloniex, and so on). Gunbot is backed by thousands of users, who together have created an engaging and helpful community. Gunbot offers different packages priced from 0.02 to 0.15 BTC, which you can always upgrade. The bot comes with a lifetime license and is constantly updated.

2. Haasbot

Hassonline created this cryptocurrency trading bot in January 2014. Its algorithm is very popular among cryptocurrency geeks. It can trade over 500 altcoins and bitcoins on famous exchanges such as BTCC, Kraken, Bitfinex, Huobi, and Poloniex. You put in a little of the currency, and the bot does all the trading work for you. Haasbot is customizable, has various technical indicator tools, and also recognizes candlestick patterns. This immensely popular trading bot is priced between 0.12 BTC and 0.32 BTC for three months.

3. Gekko

Gekko is a cryptocurrency trading bot that supports over 18 Bitcoin exchanges, including Bitstamp, Poloniex, and Bitfinex. This bot is a backtesting platform and is free to use; it is a fully fledged open source bot available on GitHub. Using this bot is easy, as it comes with basic trading strategies. The web interface of Gekko was written from scratch, and it can run backtests and visualize the test results while you monitor your local data with it. Gekko keeps you updated on the go using plugins for Telegram, IRC, email, and several other platforms. The trading bot works on all major operating systems (Windows, Linux, and macOS), and you can even run it on your Raspberry Pi or on cloud platforms.

4. CryptoTrader

CryptoTrader is a cloud-based platform that allows users to create automated algorithmic trading programs in minutes. It is one of the most attractive crypto trading bots, and you won't need to install any unknown software to use it. A highly appreciated feature of CryptoTrader is its Strategy Marketplace, where users can trade strategies.
It supports major currency exchanges such as Coinbase, Bitstamp, and BTC-e, and supports live trading and backtesting. The company claims its cloud-based trading bots are unique compared with the bots currently available in the market.

5. BTC Robot

One of the earliest automated crypto trading bots, BTC Robot offers multiple packages for different memberships and software. It provides users with a downloadable version for Windows. The minimum plan costs $149. BTC Robot sets up quite easily, but it is noted that its algorithms aren't great at predicting the markets. User mileage with BTC Robot varies heavily, leaving many with mediocre profits. With the trading bot's fluctuating evaluation, profits may go up or down drastically depending on the accuracy of the algorithm. On the bright side, the bot comes with a sixty-day refund policy, which makes it a safe buy.

6. Zenbot

Another open source bot for bitcoin trading, Zenbot can be downloaded and its code can be modified. This trading bot hasn't had an update in recent months, but it is still among the few bots that can perform high-frequency trading while backing multiple assets at a time. Zenbot is a lightweight, artificially intelligent crypto trading bot that supports popular exchanges such as Kraken, GDAX, Poloniex, Gemini, Bittrex, and Quadriga. Notably, according to its GitHub page, Zenbot version 3.5.15 bagged an ROI of 195% in a mere three months.

7. 3Commas

3Commas is a famous cryptocurrency trading bot that works well with various exchanges, including Bitfinex, Binance, KuCoin, Bittrex, Bitstamp, GDAX, Huobi, Poloniex, and YoBit. As it is a web-based service, you can always monitor your trading dashboard from desktop, mobile, and laptop computers. The bot works 24/7, and it allows you to set take-profit targets and stop-losses, along with a social trading aspect that enables you to copy the strategies used by successful traders. An ETF-like feature allows users to analyze, create, and back-test a crypto portfolio and pick from the top-performing portfolios created by other people.

8. Tradewave

Tradewave is a platform that enables users to develop their own cryptocurrency trading bots, along with automated trading on crypto exchanges. The bot trades in the cloud, and you use Python to write the code directly in the browser. With Tradewave, you don't have to worry about downtime: the bot doesn't force you to keep your computer on 24/7, nor does it glitch if not connected to the internet. Trading strategies are often shared by community members for others to use. However, it currently supports only a few cryptocurrency exchanges, such as Bitstamp and BTC-e, with more exchanges to be added in the coming months.

9. Leonardo

Leonardo is a cryptocurrency trading bot that supports a number of exchanges, such as Bittrex, Bitfinex, Poloniex, Bitstamp, OKCoin, and Huobi. The team behind Leonardo is extremely active, and new upgrades, including plugins, are in the funnel. It previously cost 0.5 BTC, but currently it is available for $89 with a single-exchange license. Leonardo boasts two trading strategy bots: a Ping Pong strategy and a Margin Maker strategy. The first lets users set the buy and sell prices, leaving all the other details to the bot, while the Margin Maker strategy buys and sells at prices adjusted according to the direction of the market. This trading bot stands out in terms of its GUI.
10. USI Tech

USI Tech is a trading bot that is mostly used for forex trading, but it also offers BTC packages. While the majority of trading bots require initial setup and installation, USI takes a different approach: it isn't controlled by the users. Users buy in via the company's expert mining and bitcoin trade connections, and the USI Tech bot then promises a daily profit from the transactions and trades. To earn one percent of the capital daily, customers are advised to choose the feature-rich plans.

11. Cryptohopper

Cryptohopper is a 24/7 cloud-based trading bot, which means it trades whether you are at the computer or not. Its system enables users to trade on technical indicators, with a subscription to a signaler who sends buy signals. According to Cryptohopper's website, it is the first crypto trading bot integrated with professional external signals. The bot helps in leveraging bull markets and has a new dashboard area where users can monitor and configure everything. The dashboard also includes a configuration wizard for the major exchanges, including Bittrex, GDAX, and Kraken.

12. My Bitcoin Bot

MBB is a team effort from Brad Sheridon and his proficient teammates, who are experts in cryptocurrency investment. My Bitcoin Bot is automated trading software that can be accessed by anyone who is ready to pay for it. While the monthly plan is $39 a month, a yearly subscription to this auto-trader bot is available for $297. My Bitcoin Bot comes with heaps of advantages, such as unlimited technical support, free software updates, and access to a trusted brokers list.

13. Crypto Arbitrager

A standalone application that operates on a dedicated server, Crypto Arbitrager can run its robots even when your PC is off. The developers behind this cryptocurrency trading bot claim that the software uses code integration of financial time series. Users can make money from the difference in the rates of Litecoin and Bitcoin. By implementing the advanced strategies of hedge funds, the trading bot effectively manages users' savings regardless of the state of the cryptocurrency market.

14. Crypto Robot 365

Crypto Robot 365 automatically trades your digital currency. It buys and sells popular cryptocurrencies such as Ripple, Bitcoin, Ethereum, Litecoin, and Monero. Rather than a signup fee, this platform charges its commission on a per-trade basis. The platform is FCA-regulated and offers a realistic, achievable win ratio. Users can tweak the system according to their trading needs. Moreover, it has an established trading history, and it even offers risk management options.

Down the line

While cryptocurrency trading is not a piece of cake, trading with bots can be confusing for many. The aforementioned trading bots are used by many traders, and each is backed by years of extensive hard work. With reliability, trustworthiness, smart work, and proactiveness being the top reasons for choosing any cryptocurrency trading bot, picking one is a hefty task. I recommend you experiment with a small amount of money first and, if fate gives you a shining start, pick the trading bot that best suits your way of making money via cryptocurrency.

About the author

Rameez Ramzan is a Senior Digital Marketing Executive at Cubix, a mobile app development company. He specializes in link building, content marketing, and site audits to help sites perform better. He is a tech geek and loves to dwell on tech news.
Crypto-ML, a machine learning powered cryptocurrency platform
Beyond the Bitcoin: How cryptocurrency can make a difference in hurricane disaster relief
Apple changes app store guidelines on cryptocurrency mining


The top 5 reasons why Node.js could topple Java

Amarabha Banerjee
20 Jun 2018
4 min read
Last year Mikeal Rogers, the community organizer of the Node.js Foundation, stated in an interview: "Node.js will take over Java within a year." No doubt Java has been the most popular programming language for a very long time, but Node is catching up quickly thanks to its JavaScript connection, JavaScript being the most used programming language for front-end web development. JavaScript has gained significant popularity for server-side web development too, and that is where Node.js has a bigger role to play. JavaScript runs in the browser and is capable of creating sleek and beautiful websites with ease. Node.js extends JavaScript's capabilities to the server side, allowing JavaScript code to run on the server. In this way, JavaScript can utilize the resources of the system and perform more complex tasks than just running in the browser. Today we look at the top five reasons why Node.js has become so popular, with the potential to take over Java.

Asynchronous programming

Node.js brings asynchronous programming to the server side. Asynchronous request handling means that while one request is being addressed, newer requests do not have to wait in a queue to be completed. Requests are taken up in parallel and processed as and when they arrive. This saves a lot of time and helps use the processor's power to its full extent.

Event-driven architecture

Node.js is built entirely upon the foundation of event-driven architecture. What do we mean by event-driven architecture in Node.js? Every request, be it access to a database or a simple redirect to a web address, is considered an event and is handled on a single thread. New requests are added as events on top of the previous events and are completed in sequence; as events complete, the output is delivered. This event-driven approach paved the way for today's event-driven application architectures and the implementation of microservices.

A vibrant community

The Node.js developer community is large and active. This has propelled the creation of several third-party tools that have made server-side development easier. One such tool is Socket.io, which enables push messaging between the server and the client. Tools like Socket.io, Express.js, and WebSockets have enabled faster message transfer, resulting in more efficient and better applications.

Better for scaling

When you are trying to build a large-scale, industrial-grade application, two techniques are available: multithreading and event-driven architecture. Although the choice depends on the exact requirements of the application, Node can solve a multitude of your problems, because it doesn't just scale up across processors; it can scale up per processor. This means that the number of processes per processor can be scaled up in Node.js, in addition to the number of processors.

Real-time applications

Are you developing real-time applications like Google Docs or Trello, where small messages need to travel frequently between the server and the client? Node.js will be the best choice for you to build something similar. The reasons are the feature we discussed in the second point, event-driven architecture, and the availability of fast messaging tools. The smaller and more frequent your messaging needs, the better Node.js works for you.
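To make the first two points concrete, here is a minimal sketch, in TypeScript, of Node's event-driven, non-blocking request handling. The file name, port, and handler logic are invented for illustration, and it assumes Node.js with ts-node or a compile step:

```typescript
// minimal-node-server.ts: a sketch of event-driven, non-blocking handling.
import * as http from "http";

const server = http.createServer((req, res) => {
  // Simulate slow I/O (e.g. a database query) with a timer. The event
  // loop stays free to accept other requests while this one waits.
  setTimeout(() => {
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ path: req.url, servedAt: Date.now() }));
  }, 100);
});

// Each incoming connection is just another event on the single event
// loop; no thread is created or blocked per request.
server.listen(3000, () => {
  console.log("Listening on http://localhost:3000");
});
```

Because the handler never blocks, a second request arriving during the 100 ms wait is processed immediately rather than queuing behind the first.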
Although we've looked at some of the features in favor of Node.js, no technology is above limitations. For example, if you are building CRUD applications with no need for real-time data flow, then Node.js will not make your job any easier. If you are looking to build CPU-heavy applications, Node.js might disappoint you, because JavaScript executes on a single CPU thread. But keeping in mind that it brings the flexibility of JavaScript to the server side and is the inspiration behind groundbreaking approaches like microservices, it's clear that Node.js is going to grow more in the near future.

Server-Side Rendering
Implementing 5 Common Design Patterns in JavaScript (ES8)
Behavior Scripting in C# and Javascript for game developers


5 things you need to learn to become a server-side web developer

Amarabha Banerjee
19 Jun 2018
6 min read
The profession of back-end web developer is in high demand, and companies seek qualified server-side developers for their teams. A back-end specialist's comprehensive set of knowledge and skills helps them realize their potential in a wide variety of web development projects. Before diving into what it takes to succeed at back-end development as a profession, let's look at what it's about. In simple words, the back end is the invisible part of any application that activates all its internal elements. If the front end answers the question of "how does it look", then the back end, or server-side web development, deals with "how does it work". A back-end developer deals with the administrative part of the web application, the internal content of the system, and server-side technologies such as the database, the architecture, and the software logic. If you intend to become a professional server-side developer, a few basic steps will ease your journey. In this article we have listed five aspects of server-side development: servers, databases, networks, queues, and frameworks, which you must master to become a successful server-side web developer.

Servers and databases

At the heart of server-side development are servers, which are nothing but hardware and storage devices connected to a local computer with a working internet connection. Every time you ask your browser to load a web page, data stored on the servers is accessed and sent to the browser in a certain format. The bigger the application, the larger the amount of data stored on the server side; and the larger the data, the higher the possibility of lag and slow performance. Databases are the particular formats in which the data is stored. There are two different types of databases, relational and non-relational, and both have their own pros and cons. Some of the popular databases you can learn to take your skills to the next level are SQL Server, MySQL, MongoDB, and DynamoDB.

Static and dynamic servers

Static servers are physical hard drives where application data, CSS and HTML files, pictures, and images are stored. Dynamic servers add another layer between the server and the browser; they are often known as application servers. The primary function of these application servers is to process and format the data for the web page when a data-fetching operation is initiated from the browser. This makes saving data much easier and loading data much faster. For example, Wikipedia's servers hold huge amounts of data, but it is not stored as HTML pages; it is stored as raw data. When the browser sends a query, the application server processes the raw data, formats it into HTML, and sends it to the browser. This makes the process much faster and saves physical storage space.
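As a toy illustration of that application-server layer, here is a minimal sketch in TypeScript; the records and HTML layout are invented for the example, and it assumes Node.js. Raw data sits in storage and is formatted into HTML only when the browser asks for it:

```typescript
// dynamic-server.ts: a sketch of a dynamic (application) server that
// stores raw data and renders it to HTML at request time.
import * as http from "http";

// Raw data as it might sit in storage (hypothetical records).
const articles = [
  { slug: "docker", title: "Docker isn't going anywhere" },
  { slug: "kotlin", title: "Operator overloading in Kotlin" },
];

http
  .createServer((req, res) => {
    // The formatting step happens here, per request, not ahead of time.
    const items = articles
      .map((a) => `<li><a href="/${a.slug}">${a.title}</a></li>`)
      .join("");
    res.writeHead(200, { "Content-Type": "text/html" });
    res.end(`<html><body><ul>${items}</ul></body></html>`);
  })
  .listen(8080);
```

A static server would instead hand back a pre-built HTML file byte for byte; here the HTML page never exists on disk at all.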
If you want to go a step ahead and think futuristically, the latest trend is moving your servers to the cloud: server-side tasks are performed by cloud-based services such as Amazon AWS and Microsoft Azure. This makes your task as a back-end developer much simpler, since you only need to decide which services your application requires, and the rest is taken care of by the cloud service providers. Another aspect of server-side development that's generating a lot of interest among developers is serverless development. This is based on the concept that cloud service providers allocate server space according to your needs, so you don't have to take care of back-end resources and requirements yourself. In a way, the name "serverless" is a misnomer: the servers are still there, just in the cloud, so you don't have to bother about them. The primary role of a back-end developer in a serverless system is to figure out the best possible services, optimize the running cost on the cloud, and deploy and monitor the system for non-stop, robust performance.

The communication protocol

The protocol that defines data transfer between the client side and the server side is the Hypertext Transfer Protocol (HTTP). When a search request is typed in the browser, an HTTP request with a URL is sent to the server, and the server sends back a response message indicating either that the request succeeded or that the web page was not found. When an HTML page is returned for a search query, it is rendered by the web browser. While processing the response, the browser may discover links to other resources (for example, an HTML page usually references JavaScript and CSS files) and send separate HTTP requests to download them. Both static and dynamic websites use exactly the same communication protocol and patterns.

We have progressed a long way from the initial communication protocols, and newer technologies such as SSL, TLS, and IPv6 have taken over the web communication domain. Transport Layer Security (TLS), and its predecessor Secure Sockets Layer (SSL), now deprecated by the Internet Engineering Task Force (IETF), are cryptographic protocols that provide communications security over a computer network; they were introduced primarily to protect user data. Similarly, new addressing protocols had to be introduced in the late '90s to cater to the increasing number of internet users. IP addresses uniquely identify servers on the network. The initial protocol used was IPv4, which is currently being superseded by IPv6, capable of providing 2^128 (about 3.4×10^38) addresses.

Message queuing

This is one of the most important aspects of creating fast and dynamic web applications. Message queuing is the stage where data is queued according to the different responses and then delivered to the browser. The process is asynchronous, which means the server and the browser need not interact with the message queue at the same time. Popular message queuing tools such as RabbitMQ, MQTT, and ActiveMQ provide real-time message queuing functionality.
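To illustrate the asynchronous decoupling just described, here is a toy in-memory queue sketch in TypeScript. It is not RabbitMQ, MQTT, or ActiveMQ; every name in it is invented, and a real broker would add persistence, acknowledgements, and delivery guarantees:

```typescript
// tiny-queue.ts: a toy message queue showing that producers and
// consumers need not interact with the queue at the same time.
type Handler<T> = (msg: T) => void;

class TinyQueue<T> {
  private messages: T[] = [];
  private handlers: Handler<T>[] = [];

  // Producers enqueue and return immediately; delivery happens later.
  publish(msg: T): void {
    this.messages.push(msg);
    setImmediate(() => this.drain());
  }

  subscribe(handler: Handler<T>): void {
    this.handlers.push(handler);
    setImmediate(() => this.drain());
  }

  private drain(): void {
    while (this.messages.length > 0 && this.handlers.length > 0) {
      const msg = this.messages.shift()!;
      // Deliver to every subscriber asynchronously.
      for (const h of this.handlers) setImmediate(() => h(msg));
    }
  }
}

// The consumer subscribes after the message was published, yet still
// receives it: the queue decouples the two sides in time.
const queue = new TinyQueue<string>();
queue.publish("order-created");
queue.subscribe((msg) => console.log("received:", msg));
```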
Server-side frameworks and languages

Now comes the last, but one of the most important, pointers. If you are a developer with a particular language in mind, you can use a framework for that language to add functionality to your application easily and efficiently. Some popular server-side frameworks are Node.js for JavaScript, Django for Python, Laravel for PHP, and Spring for Java. Using these frameworks requires some experience in the respective languages.

Now that you have a broad understanding of what server-side web development is and what its components are, you can jump right into server-side development, databases, and protocol management on your way to becoming a successful professional back-end web developer.

The best backend tools in web development
Preparing the Spring Web Development Environment
Is novelty ruining web development?


Keep your serverless AWS applications secure [Tutorial]

Savia Lobo
18 Jun 2018
11 min read
Handling security is an extensive and complex topic. If it's not done right, you open up your app to dangerous hacks and breaches; even if everything is right, your app may still be hacked. So it's important to understand common security mechanisms, avoid exposing websites to vulnerabilities, and follow recommended practices and methodologies that have been extensively tested and proven to be robust. In this tutorial, we will learn how to secure serverless applications using AWS. We will cover security basics and then move on to handling authorization and authentication using AWS. This article is an excerpt from the book Building Serverless Web Applications, written by Diego Zanon.

Security basics in AWS

One of the mantras of security experts is this: don't roll your own. It means you should never use, in a production system, any kind of crypto algorithm or security model that you developed yourself. Always use solutions that have been heavily used, tested, and recommended by trusted sources. Even experienced people commit errors and expose a solution to attacks, especially in the cryptography field, which requires advanced math. However, when a proposed solution is analyzed and tested by a great number of specialists, errors are much less frequent.

In the security world, there is a term called security through obscurity. It is defined as a security model where the implementation mechanism is not publicly known, the belief being that it is secure because no one has prior information about its flaws. It can indeed be secure, but used as the only form of protection it is considered poor security practice. If a hacker is persistent enough, he or she can discover flaws even without knowing the internal code. In that case, again, it's better to use a highly tested algorithm than your own. Security through obscurity can be compared to someone trying to protect their money by burying it in the backyard, when the common security mechanism would be to put the money in a bank. The money may be safe while buried, but it will be protected only until someone learns of its existence and starts to look for it. For this reason, when dealing with security, we usually prefer open source algorithms and tools: everyone can access them and discover flaws, but a great number of specialists are also involved in finding the vulnerabilities and fixing them. In this section, we will discuss other security concepts that everyone must know when building a system.

Information security

When dealing with security, there are some attributes that need to be considered. The most important ones are the following:

- Authentication: Confirm the user's identity by validating that the user is who they claim to be
- Authorization: Decide whether the user is allowed to execute the requested action
- Confidentiality: Ensure that data can't be understood by third parties
- Integrity: Protect the message against undetectable modifications
- Non-repudiation: Ensure that someone can't deny the authenticity of their own message
- Availability: Keep the system available when needed

These terms are explained in the following sections.

Authentication

Authentication is the ability to confirm the user's identity. It can be implemented by a login form where you request the user to type their username and password. If the hashed password matches what was previously saved in the database, you have enough proof that the user is who they claim to be.
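As a minimal sketch of that hashed-password check, here is an example using Node's built-in crypto module. The function names and record shape are invented for illustration; a production system would use a vetted library and store the salt and hash in a database:

```typescript
// auth-check.ts: verifying a login against a stored salted password hash.
import { randomBytes, scryptSync, timingSafeEqual } from "crypto";

// At signup: store the salt and derived hash, never the password itself.
function hashPassword(password: string): { salt: string; hash: string } {
  const salt = randomBytes(16).toString("hex");
  const hash = scryptSync(password, salt, 64).toString("hex");
  return { salt, hash };
}

// At login: re-derive the hash with the stored salt and compare in
// constant time to avoid leaking information through timing.
function verifyPassword(password: string, salt: string, storedHash: string): boolean {
  const candidate = scryptSync(password, salt, 64);
  return timingSafeEqual(candidate, Buffer.from(storedHash, "hex"));
}

const record = hashPassword("correct horse battery staple");
console.log(verifyPassword("correct horse battery staple", record.salt, record.hash)); // true
console.log(verifyPassword("guess", record.salt, record.hash)); // false
```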
This username-and-password model is good enough, at least for typical applications: you confirm the identity by requesting something the user knows. Another kind of authentication is to request something the user has, such as a physical device (like a dongle) or access to an e-mail account or phone number. You can't, however, ask the user to type their credentials for every request. Once you authenticate the first request, you must create a security token to be used in subsequent requests. This token is saved on the client side as a cookie and is automatically sent to the server in all requests. On AWS, this token can be created using the Cognito service; how this is done is described later in this chapter.

Authorization

When a request is received in the backend, we need to check whether the user is allowed to execute the requested action. For example, if the user wants to check out the order with ID 123, we need to query the database to identify the owner of the order and compare it with the requesting user. Another scenario is when we have multiple roles in an application and need to restrict data access. For example, a system developed to manage school grades may have two roles, student and teacher. Teachers access the system to insert or update grades, while students access the system to read those grades. In this case, the authorization system must restrict the insert and update actions to users in the teachers group, while users in the students group must be restricted to reading their own grades.

Most of the time we handle authorization in our own backend, but some serverless services don't require a backend and are responsible for checking authorization themselves. For example, in the next chapter we are going to see how serverless notifications are implemented on AWS. When we use AWS IoT, if we want a private channel of communication between two users, we must give them access to one specific resource known to both, and restrict other users' access to avoid the disclosure of private messages.
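A minimal sketch of that ownership check follows; the function, data store, and order shape are all invented for illustration (in a serverless app, the lookup would hit a managed database such as DynamoDB):

```typescript
// authorize-order.ts: a toy ownership check before allowing an action.
interface Order {
  id: string;
  ownerId: string;
  total: number;
}

// Stand-in for a real database lookup.
const ordersDb = new Map<string, Order>([
  ["123", { id: "123", ownerId: "user-42", total: 99.9 }],
]);

function canCheckout(userId: string, orderId: string): boolean {
  const order = ordersDb.get(orderId);
  // The action is allowed only if the authenticated user owns the order.
  return order !== undefined && order.ownerId === userId;
}

console.log(canCheckout("user-42", "123")); // true: the owner may check out
console.log(canCheckout("user-7", "123")); // false: not the owner
```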
Confidentiality

Serving your website over HTTPS for all requests is the main way to achieve confidentiality in the communication between the users and your site. As the data is encrypted, it's very hard for malicious users to decrypt and understand its contents. Although there are attacks that can intercept the communication and forge certificates (man-in-the-middle), these require the malicious user to have access to the victim's machine or network. From our side, adding HTTPS support is the best thing we can do to minimize the chance of attacks.

Integrity

Integrity is related to confidentiality. While confidentiality relies on encrypting a message to prevent other users from accessing its contents, integrity deals with protecting messages against modification, by signing them digitally (TLS certificates). Integrity is an important concept when designing low-level network systems, but for us, all that matters is adding HTTPS support.

Non-repudiation

Non-repudiation is a term often confused with authentication, since both have the objective of proving who sent a message. The main difference is that authentication takes a technical view, while non-repudiation is concerned with legal terms, liability, and auditing. When you have a login form with username and password inputs, you can authenticate the user who correctly knows the combination, but you can't have 100% certainty, since the credentials can be correctly guessed or stolen by a third party. A stricter access mechanism, such as biometric entry, has more credibility. It is not perfect either; it's just a better non-repudiation mechanism.

Availability

Availability is also a concept of interest in the information security field, because availability is not just about provisioning hardware to meet your users' needs; it can also suffer interruptions caused by malicious users. Attacks such as Distributed Denial of Service (DDoS) aim to create bottlenecks that disrupt a site's availability. In a DDoS attack, the targeted website is flooded with superfluous requests with the objective of overloading the systems, usually from a controlled network of infected machines called a botnet. On AWS, all services run under the AWS Shield service, which was designed to protect against DDoS attacks at no additional charge. However, if you run a very large and important service, you may be a direct target of advanced and large DDoS attacks. In this case, there is a premium tier of AWS Shield to ensure your website's availability even in worst-case scenarios. It requires an investment of US$3,000 per month, for which you get 24/7 support from a dedicated team and access to other tools for the mitigation and analysis of DDoS attacks.

Security on AWS

We use AWS credentials, roles, and policies, but security on AWS is much more than handling the authentication and authorization of users. This is what we will discuss in this section.

Shared responsibility model

Security on AWS is based on a shared responsibility model. While Amazon is responsible for keeping the infrastructure safe, customers are responsible for patching security updates to software and protecting their own user accounts.

AWS's responsibilities include the following:

- Physical security of the hardware and facilities
- Infrastructure of networks, virtualization, and storage
- Availability of services respecting Service Level Agreements (SLAs)
- Security of managed services such as Lambda, RDS, DynamoDB, and others

A customer's responsibilities are as follows:

- Applying security patches to the operating system on EC2 machines
- Security of installed applications
- Avoiding disclosure of user credentials
- Correct configuration of access policies and roles
- Firewall configurations
- Network traffic protection (encrypting data to avoid disclosure of sensitive information)
- Encryption of server-side data and databases

In the serverless model, we rely only on managed services, so we don't need to worry about applying security patches to the operating system or runtime, but we do need to worry about the third-party libraries our application depends on. We also need to worry about everything we configure (firewalls, user policies, and so on), the network traffic (supporting HTTPS), and how data is manipulated by the application.

The Trusted Advisor tool

AWS offers a tool named Trusted Advisor, which can be accessed through https://console.aws.amazon.com/trustedadvisor. It was created to offer help on how you can optimize costs or improve performance, but it also helps identify security breaches and common misconfigurations.
It searches for unrestricted access to specific ports on your EC2 machines, whether Multi-Factor Authentication is enabled on the root account, and whether IAM users were created in your account. You need to pay for AWS premium support to unlock other features, such as cost optimization advice; the security checks, however, are free.

Pen testing

A penetration test (or pen test) is a good practice that all big websites should perform periodically. Even if you have a good team of security experts, the usual recommendation is to hire a specialized third-party company to perform pen tests and find vulnerabilities, because they will most likely have tools and procedures that your own team has not tried yet. The caveat here is that you can't execute these tests without contacting AWS first. To respect their user terms, you may only try to find breaches on your own account and assets, in scheduled time frames (so they can disable their intrusion detection systems for your assets), and only on permitted services, such as EC2 instances and RDS.

AWS CloudTrail

AWS CloudTrail is a service designed to record all AWS API calls executed on your account. Its output is a set of log files that register the API caller, the date and time, the caller's source IP address, the request parameters, and the response elements that were returned. This kind of service is very important for security analysis in case there are data breaches, and for systems that need an auditing mechanism for compliance standards.

MFA

Multi-Factor Authentication (MFA) is an extra security layer that everyone should add to their AWS root account to protect against unauthorized access. Besides knowing the username and password, a malicious user would also need physical access to your smartphone or security token, which greatly reduces the risk. On AWS, you can use MFA through the following means:

- Virtual devices: Applications installed on Android, iPhone, or Windows phones
- Physical devices: Six-digit tokens or OTP cards
- SMS: Messages received on your phone

We have discussed the basic security concepts and how to apply them in a serverless project. If you've enjoyed reading this article, do check out Building Serverless Web Applications to implement signup, sign-in, and logout features using Amazon Cognito.

Google Compute Engine Plugin makes it easy to use Jenkins on Google Cloud Platform
Analyzing CloudTrail Logs using Amazon Elasticsearch
How to create your own AWS CloudTrail

A UX strategy is worthless without a solid usability test plan

Sugandha Lahoti
15 Jun 2018
48 min read
A UX practitioner's primary goal is to provide every user with the best possible experience of a product. The only way to do this is to connect repeatedly with users and make sure that the product is being designed according to their needs. At the beginning of a project, this research tends to be more exploratory; towards the end, it tends to be more about testing the product. In this article, we explore in detail one of the most common methods of testing a product with users: the usability test. We will describe the steps to plan and conduct usability tests, which provide insights into how to practically plan, conduct, and analyze any user research. This article is an excerpt from the book UX for the Web by Marli Ritter and Cara Winterbottom. The book teaches how UX and design thinking can make your site stand out from the rest of the sites on the internet.

Tips to maximize the value of user testing

Testing with users is not only about making their experience better; it is also about getting more people to use your product. People will not use a product that they do not find useful, and they will choose the product that is most enjoyable and usable if they have options. This is especially the case on the web: people leave websites if they can't find or do the things they want, and unlike with other products, they will not take time to work it out. Research by organizations such as the Nielsen Norman Group generally shows that a website has between 5 and 10 seconds to show value to a visitor. User testing is one of the main methods available to ensure that we make websites that are useful, enjoyable, and usable. However, to be effective it must be done properly. Jared Spool, a usability expert, identified seven typical mistakes that people make when testing with users, which lessen its value. The following list addresses how not to make those mistakes:

1. Know why you're testing: What are the goals of your test? Make sure that you specify the test goals clearly and concretely so that you choose the right method. Are you observing people's behavior (usability test), finding out whether they like your design (focus group or sentiment analysis), or finding out how many do something on your website (web analytics)? Posing specific questions will help to formulate the goals clearly. For example, will the new content reduce calls to the service center? Or, what percentage of users return to the website within a week?

2. Design the right tasks: If your testing involves tasks, design scenarios that correspond to tasks users would actually perform. Consider what would motivate someone to spend time on your website, and use this to create tasks. Provide participants with the information they would have to complete the tasks in a real-life situation; no more and no less. For example, do not specify tasks using terms from your website interface; otherwise participants will simply be following instructions when they complete the tasks, rather than using their own mental models to work out what to do.

3. Recruit the right users: If you design and conduct a test perfectly but test on people who are not like your users, the results will not be valid. If the participants know too much or too little about the product, subject area, or technology, they will not behave as your users would and will not experience the same problems. When recruiting participants, ask what qualities define your users, and what qualities make one person experience the website differently from another.
Then recruit for these qualities. In addition, recruit the right number of users for your method. Ongoing research by the Nielsen Norman Group and others indicates that usability tests typically require about five people per test, while A/B tests require about 40 people and card sorting about 15. These numbers have been calculated to maximize the return on investment of testing. For example, five users in a usability test have been shown by the Nielsen Norman Group (and confirmed repeatedly by other researchers) to find about 85% of the serious problems in an interface. Adding more users improves the percentage marginally, but increases the costs significantly. If you use the wrong numbers, either your results will not be valid or the amount of data you need to analyze will be unmanageable in the time and with the resources you have available.

4. Get the team and stakeholders involved: If user testing is seen as an outside activity, most of the team will not pay attention, as it is not part of their job and is easy to ignore. When team members are involved, they gain insights into their own work and its effectiveness. Try to get team members to attend some of the testing if possible. Otherwise, make sure everyone is involved in preparing the goals and tasks (if appropriate) for the test. Share the results in a workshop afterwards, so everyone can be involved in reflecting on the results and their implications.

5. Facilitate the test well: Facilitating a test well is a difficult task. A good facilitator makes users feel comfortable so they act more naturally. At the same time, the facilitator must control the flow of the test so that everything is accomplished in the available time, without giving participants hints about what to do or say. Make sure that facilitators get plenty of practice and constructive feedback from the team to improve their skills.

6. Plan how to share the results: It takes time and skill to create an effective user testing report that communicates the test and results well. Even if you have the time and skill, most team members will probably not read the report. Find other ways to share results with those who need them. For example, create a bug list for developers using project management software or a shared online document; hold a workshop with the team and stakeholders to present the test and results; or have review sessions immediately after test days.

7. Iterate: Most user testing is more effective if performed regularly and iteratively: for testing different aspects or parts of the design; for testing solutions based on previous tests; for finding new problems or ideas introduced by the new solutions; for tracking changes to results based on time, seasonality, or the maturity of the product or user base; or for uncovering problems that were previously hidden by larger problems. Many organizations only make provision to test with users once, at the end of design, if at all. It is better to split your budget into multiple tests if possible.

As we explore usability testing, each of these guidelines will be addressed more concretely.

Planning and conducting usability tests

Before starting, let's look at what we mean by a usability test, and describe the different types. Usability testing involves watching a representative set of users attempt realistic tasks, and collecting data about what they do and say. Essentially, a usability test is about watching a user interact with a product.
This is what makes it a core UX method: it persuades stakeholders of the importance of designing for and testing with their users. Team members who watch participants struggle to use their product are often shocked that they had not noticed the glaringly obvious design problems that are revealed. In later iterations, usability tests should reveal fewer, or more minor, problems, which provides proof of the success of a design before launch.

Apart from glaring problems, how do we know what makes a design successful? The definition of usability by the International Organization for Standardization (ISO) is: "Extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use." This definition shows us the kinds of things that make a successful design. From this definition, usability comprises:

- Effectiveness: How completely and accurately the required tasks can be accomplished.
- Efficiency: How quickly tasks can be performed.
- Satisfaction: How pleasant and enjoyable the task is. This can become delight if a design pleases us in unexpected ways.

Three additional points arise from the preceding ones:

- Discoverability: How easy it is to find out how to use the product the first time.
- Learnability: How easy it is to keep improving at using the product, and to remember how to use it.
- Error proneness: How well the product prevents errors and helps users recover. This equates to the number and severity of errors that users experience while doing tasks.

These six points guide the kinds of tasks we should design and the kinds of observations we should make when planning a usability test. There are three ways of gathering data in a usability test: using metrics to guide quantitative measurement, observing, and asking questions. The most important is observation. Metrics allow comparison, guide observation, and help us design tasks, but they are not as important as why things happen. We discover why by observing interactions and emotional responses during task performance. In addition, we must be very careful when assigning meaning to quantitative metrics because of the small number of users involved in usability tests. Typically, usability tests are conducted with about five participants. This number has repeatedly been shown to be most effective when weighing testing costs against the number of problems uncovered. However, it is too small for statistical significance testing, so any numbers must be reported carefully.
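As a small illustration of reporting such metrics cautiously, here is a sketch in TypeScript; the session data and field names are invented, and with five participants the output guides observation rather than proving anything statistically:

```typescript
// usability-metrics.ts: a toy summary of task metrics from a small test.
interface TaskResult {
  participant: string;
  completed: boolean;
  timeOnTaskSec: number;
}

const checkoutTask: TaskResult[] = [
  { participant: "P1", completed: true, timeOnTaskSec: 74 },
  { participant: "P2", completed: false, timeOnTaskSec: 180 },
  { participant: "P3", completed: true, timeOnTaskSec: 95 },
  { participant: "P4", completed: true, timeOnTaskSec: 62 },
  { participant: "P5", completed: true, timeOnTaskSec: 110 },
];

const successRate =
  checkoutTask.filter((r) => r.completed).length / checkoutTask.length;
const meanTime =
  checkoutTask.reduce((sum, r) => sum + r.timeOnTaskSec, 0) /
  checkoutTask.length;

// Report the range alongside the mean so small-sample noise stays visible.
const times = checkoutTask.map((r) => r.timeOnTaskSec);
console.log(`Success rate: ${(successRate * 100).toFixed(0)}%`);
console.log(`Time on task: mean ${meanTime.toFixed(0)}s, range ${Math.min(...times)}-${Math.max(...times)}s`);
```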
We need to interpret what people say within the context of how they say it, what they are doing when they say it, and what their biases might be. For example, users tend to tell researchers what they think we want to hear, so any value judgment will likely be more positive than it should be. This is called experimenter bias. Despite the preceding cautions, all three methods are useful and increase the value of a test. While observation is core, the most effective usability tests include tasks carefully designed around metrics, and begin and end with a contextual interview with the user. The interviews help us to understand the user's previous and current experiences, and the context in which they might use the website in their own lives.

Planning a usability test can seem like a daunting task. There are so many details to work out and organize, and they all need to come together on the day(s) of the test. The following diagram is a flowchart of the usability test process, in which each box represents a different area that must be considered or organized. However, by using these areas to break the task down into logical steps and keeping a checklist, the task becomes manageable.

Planning usability tests

In designing and planning a usability test, you need to consider five broad questions:

What: What are the objectives, scope, and focus of the test? What fidelity are you testing?
How: How will you realize the objectives? Do you need permissions and sign-off? What metrics, tasks, and questions are needed? What are the hardware and software requirements? Do you need a prototype? What other materials will you need? How will you conduct the test?
Who: How many participants and who will they be? How will you recruit them? What are the roles needed? Will team members, clients, and stakeholders attend? Will there be a facilitator and/or a notetaker?
Where: What venue will you use? Is the test conducted in an internal or external lab, on the streets/in coffee shops, or in users' homes/work?
When: What is the date of the test? What will the schedule be? What is the timing of each part?

Documenting these questions and their answers forms your test plan. The following figure illustrates the thinking around each of these broad questions.

It is important to remember that no matter how carefully you plan usability testing, it can all go horribly wrong. Therefore, have backup plans wherever you can. For example, for participants who cancel late or do not arrive, have a couple of spares ready; for power cuts, be prepared with screenshots so you can at least simulate some tasks on paper; for testing live sites when the internet connection fails, have a portable wireless router or cache pages beforehand.

Designing the test - formulating goals and structure

The first thing to consider when planning a usability test is its goal. This will dictate the test's scope, focus, tasks, and questions. For example, if your goal is a general usability test of the whole website, the tasks will be based on the business reasons for the site. These are the most important user interactions. You will ask questions about general impressions of the site. However, if your goal is to test the search and filtering options, your tasks will involve finding things on the website. You will ask questions about the difficulty of finding things. If you are not sure what the specific goal of the usability test might be, think about the following three points:

Scope: Do you want to test part of the design, or the whole website?
Focus: Which area of the website will you focus on? Even if you want to test the whole website, there will be areas that are more important, for example, checkout versus the contact page.
Behavioral questions: Are there questions about how users behave, or how different designs might impact user behavior, that are being asked within the organization?

Thinking about these questions will help you refine your test goals. Once you have the goals, you can design the structure of the test and create a high-level test plan. When deciding how many tests to conduct in a day and how long each test should be, remember moderator and user fatigue. A test environment is a stressful situation. Even if you are testing with users in their own home, you are asking them to perform unfamiliar tasks with an unfamiliar system. If users become too tired, this will affect test results negatively. Likewise, facilitating a test is tiring, as the moderator must observe and question the user carefully, while monitoring things like the time, their own language, and the script. Here are details to consider when creating a schedule for test sessions:

Test length: Typically, each test should be between 60 and 90 minutes long.
Number of tests: You should not be facilitating more than 5-6 tests in one day. When counting the hours, leave at least half an hour of cushioning space between each test. This gives you time to save the recording, make extra notes if necessary, and communicate with any observers; it also provides flexibility if participants arrive late or tests run longer than they should.
Number of tasks: This is roughly the number of tasks you hope to include in the test. In a 60-minute test, you will probably have about 40-45 minutes for tasks. The rest of the time will be taken up with welcoming the participant, the initial interview, and closing questions at the end. In 45 minutes, you can fit about 5-8 tasks, depending on the nature of the tasks.

It is important to remember that less is more in a test. You want to give participants time to explore the website and think about their options. You do not want to be rushing them on to the next task.
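To make these limits concrete, here is a minimal Python sketch that lays out a test day under the numbers above. The 9:00 start, 60-minute sessions, and 30-minute buffers are illustrative assumptions, not prescriptions:

```python
from datetime import datetime, timedelta

SESSION = timedelta(minutes=60)   # each test: 60-90 minutes
BUFFER = timedelta(minutes=30)    # save recordings, debrief observers, absorb overruns
MAX_SESSIONS = 6                  # facilitate no more than 5-6 tests per day

start = datetime(2017, 6, 1, 9, 0)  # placeholder date; only the times matter
for i in range(MAX_SESSIONS):
    begins = start + i * (SESSION + BUFFER)
    print(f"Session {i + 1}: {begins:%H:%M}-{begins + SESSION:%H:%M}")
```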
The last thing to consider is moderating technique. This is how you interact with the participant and ask for their input. There are two aspects: thinking aloud and probing. Thinking aloud is asking participants to talk about what they are thinking and doing, so you can understand what is in their heads. Probing is asking participants ad-hoc questions about interesting things that they do. You can do both concurrently or retrospectively:

Concurrent thinking aloud and probing: Here, the participant talks while they do tasks and look at the interface. The facilitator asks questions as they come up, while the participant is doing tasks. Concurrent probing interferes with metrics such as time on task and accuracy, as you might distract users. However, it also takes less test time and can deliver more accurate insights, as participants do not have to remember their thoughts and feelings; these are shared as they happen.
Retrospective thinking aloud and probing: This involves retracing the test or task after it is finished and asking participants to describe what they were thinking in retrospect. The facilitator may note down questions during tasks, and ask these later. While retrospective techniques simulate natural interaction more closely, they take longer because tasks are retraced. This means that the test must be longer or there will be fewer tasks and interview questions. Retrospective techniques also require participants to remember what they were thinking previously, and this memory can be faulty.

Concurrent moderating techniques are preferable because of the close alignment between users acting and talking about those actions. Retrospective techniques should only be used if timing metrics are very important. Even in these cases, concurrent thinking aloud can be used with retrospective probing. Thinking aloud concurrently generally interferes very little with task times and accuracy, as users are ideally just verbalizing ideas already in their heads.

At each stage of test planning, share the ideas with the team and stakeholders and ask for feedback. You may need permission to go forward with test objectives and tasks. However, even if you do not need sign-off, sharing details with the team gets everyone involved in the testing. This is a good way to share and promote design values. It also benefits the test, as team members will probably have good ideas about tasks to include or elements of the website to test that you have not considered.

Designing tasks and metrics

As we have stated previously, usability testing is about watching users interacting with a product. Tasks direct the interactions that you want to see. Therefore, they should cover the focus area of the test, or all important interactions if the whole website is tested. To make the test more natural, if possible create scenarios or user stories that link the tasks together, so participants are performing a logical sequence of activities. If you have scenarios or task analyses from previous research, choose those that relate to your test goals and focus, and use them to guide your task design. If not, create brief scenarios that cover your goals. You can do this from a top-down or bottom-up perspective:

Top down: What events or conditions in their world would motivate people to use this design? For example, if the website is a used goods marketplace, a potential user might have an item they want to get rid of easily, while making some money; or they might need an item and try to get it cheaply secondhand. Then, what tasks accomplish these goals?
Bottom up: What are the common tasks that people do on the website? In the marketplace example, common tasks are searching for specific items; browsing through categories of items; and adding an item to the site to sell, which might include uploading photographs or videos, and adding contact details and item descriptions. Then, create scenarios around these tasks to tie them together.

Tasks can be exploratory and open-ended, or specific and directed. A test should have both. For example, you can begin with an open-ended task, such as examining the home page and exploring the links that are interesting. Then you can move on to more directed tasks, such as finding a particular color, size, and brand of shoe and adding it to the checkout cart. It is always good to begin with exploratory tasks, but these can be open-ended or directed. For example, to gather first impressions of a website, you could ask users to explore as they prefer from the home page and give their impressions as they work; or you could ask users to look at each page for five seconds, and then write down everything they remember seeing. The second option is much more controlled, which may be necessary if you want more direct comparison between participants, or are testing with a prototype where only parts of the website are available.
Metrics are needed for task, observation, and interview analysis, so that we can evaluate the success of the design we are testing. They guide how we examine the results of a usability test. They are based on the definition of usability, and so relate to effectiveness, efficiency, satisfaction, discoverability, learnability, and error proneness. Metrics can be qualitative or quantitative. Qualitative metrics aim to encode the data so that we can detect patterns and trends in it, and compare the success of participants, tasks, or tests; for example, noting expressions of delight or frustration during a task. Quantitative metrics collect numbers that we can manipulate and compare against each other or against benchmarks; for example, the number of errors each participant makes in a task. We must be careful how we use and assign meaning to quantitative metrics because of the small sample sizes. Here are some typical metrics:

Task success or completion rates: This measures effectiveness and should always be captured as a base. It relates most closely to conversion, which is the primary business goal for a website, whether it is converting browsers to buyers, or visitors to registered users. You may just note success or failure, but it is more revealing to capture the degree of task success. For example, you can specify whether the task is completed easily, with some struggle, with help, or is not completed successfully.
Time on task: A measure of efficiency. How long it takes to complete tasks.
Errors per task: A measure of error proneness. The number and severity of errors per task, especially noting critical errors where participants may not even realize they have made a mistake.
Steps per task: A measure of efficiency. The number of steps or pages needed to complete each task, often measured against a known minimum.
First click: A measure of discoverability. Noting the first click made to accomplish each task, to report on the findability of items on the web page. This can also be used in more exploratory tasks to judge what attracts the user's attention first.

When you have designed tasks, consider them against the definition of usability to make sure that you have covered everything that you need or want to cover. The preceding diagram shows the metrics typically associated with each component of the usability definition.

A valid criticism of usability testing is that it only tests first-time use of a product, as participants do not have time to become familiar with the system. There are ways around this problem. For example, certain types of task, such as search and browsing, can be repeated with different items. In later tasks, participants will be more familiar with the controls. The facilitator can use observation, or metrics such as task time and accuracy, to judge the effect of familiarity. A more complicated method is to conduct longitudinal tests, where participants are asked to return a few days or a week later and perform similar tasks. This is only worth spending time and money on if learnability is an important metric.
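One lightweight way to keep these metrics consistent across participants is to record each task observation in a uniform structure. Here is a sketch in Python, with hypothetical field names chosen for this example:

```python
from dataclasses import dataclass, field

@dataclass
class TaskObservation:
    task_id: str
    completion: str        # effectiveness: "easy", "struggled", "with help", "failed"
    time_on_task_s: float  # efficiency
    errors: int            # error proneness (log severity in the notes)
    steps: int             # efficiency, compared against a known minimum
    first_click: str       # discoverability: which element drew the first click
    notes: list = field(default_factory=list)

obs = TaskObservation("find-running-shoes", "struggled", 148.0, 2, 11,
                      "search box", notes=["hesitated over the filter panel"])
print(obs)
```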
Planning questions and observation

The interview questions that are asked at the beginning and end of a test provide valuable context for user actions and reactions, such as the user's background, their experiences with similar websites or the subject area, and their relationship to technology. They also help the facilitator to establish rapport with the user. Other questions provide valuable qualitative information about the user's emotional reaction to the website and the tasks they are doing. A combination of observation and questions provides data on aspects such as ease of use, usefulness, satisfaction, delight, and frustration. For the initial interview, questions should be about:

Welcome: These set the participant at ease, and can include questions about the participant's lifestyle, job, and family. These details help UX practitioners to present test participants as real people with normal lives when reporting on the test.
Domain: These ask about the participant's experience with the domain of the website. For example, if the website is in the domain of financial services, questions might be around the participant's banking, investments, and loans, and their experiences with other financial websites. As part of this, you might investigate their feelings about security and privacy.
Tech: These questions ask about the participant's usage of and experience with technology. For example, for testing a website on a computer, you might want to know how often the participant uses the internet or social media each day, what kinds of things they do on the internet, and whether they buy things online. If you are testing mobile usage, you might want to inquire about how often the participant uses the internet on their phone each day, and what kinds of sites they visit on mobile versus desktop.

Like tasks, questions can be open-ended or closed. An example of an open-ended question is: Tell me about how you use your phone throughout a normal workday, beginning with waking up in the morning and ending with going to sleep at night. The facilitator would then prompt the participant for further details suggested during the reply. A closed question might be: What is your job? These generate simple responses, but can be used as springboards into deeper answers. For example, if the answer is fireman, the facilitator might say: That's interesting. Tell me more about that. What do you do as a fireman?

Questions asked at the end of the test or during the test are more about the specific experience of the website and the tasks. These are often made more quantifiable by using a rating scale to structure the answer. A typical example is a Likert scale, where participants specify their agreement or disagreement with a statement on a 5- to 7-point scale. For example, a statement might be: I can find what I want easily using this website. Here, #1 is labeled Strongly Agree and #7 is labeled Strongly Disagree. Participants choose the number that corresponds to the strength of their agreement or disagreement. You can then compare responses between participants or across different tests. Examples of typical questions include:

Ease of use (after every task): On a scale of 1-7, where 1 is really hard and 7 is really easy, how difficult or easy did you find this task?
Ease of use (at the end): On a scale of 1-7, how easy or difficult did you find working on this website?
Usefulness: On a scale of 1-7, how useful do you think this website would be for doing your job?
Recommendation: On a scale of 1-7, how likely are you to recommend this website to a friend?

It is important to always combine these kinds of questions with observation and task performance, and to ask why afterwards. People tend to self-report very positively, so often you will pay less attention to the number they give and more to how they talk about their answer afterwards.
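Because only around five people give these ratings, summarize them cautiously. A small sketch of this in Python, using hypothetical scores; the medians support comparison between tasks or tests, not statistical claims:

```python
from statistics import median

# Hypothetical 1-7 ease-of-use ratings from five participants, per task.
ratings = {
    "task-1 (find a product)": [6, 5, 7, 4, 6],
    "task-2 (apply a voucher)": [2, 3, 2, 4, 3],
}
for task, scores in ratings.items():
    # With n = 5, report the median alongside the raw scores; avoid
    # means and significance claims the sample cannot support.
    print(f"{task}: median {median(scores)}, raw {sorted(scores)}")
```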
The final questions you ask provide closure for the test and end it gracefully. These can be more general and conversational. They might deliver useful data, but that is not the priority. For example: What did you think of the website? or Is there anything else you'd like to say about the website? Questions during the test often arise ad hoc, because you do not understand why the participant does an action, or what they are thinking about if they stare at a page of the website for a while. You might also want to ask participants what they expect to find before they select a menu item or look at a page.

In preparing for observation, it is helpful to make a list of the kinds of things you especially want to observe during the test. Typical options are:

Reactions to each new page of the website
First reactions when they open the Home page
The variety of steps used to complete each task
Expressions of delight or frustration
Reactions to specific elements of the website
First clicks for each task
First click off the Home page

Much of what you want to observe will be guided by the usability test objectives and the nature of the website.

Preparing the script

Once you have designed all the elements of the usability test, you can put them together in a script. This is a core document in usability testing, as it acts as the facilitator's guide during each test. There are different ideas about what to include in a script. Here, we describe a comprehensive script that covers the whole test procedure. This includes, in rough order:

The information that must be given to the participant in the welcome speech. The welcome speech is very important, as it is the participant's first experience of the facilitator. It is where the rapport will first be established. The following information may need to be included:
Introduction to the facilitator, client, and product.
What will happen during the test, including the length.
The idea that the website is being tested, not the participant, and that any problems are the fault of the product. This means the participant is valuable and helpful to the team developing a great website.
Asking the participant to think aloud as much as possible, and to be honest and blunt about what they think.
Asking them to imagine that they are at home in a natural situation, exploring the website.
If there are observers, an indication that people may be watching and that they should be ignored.
Asking permission to record the session, and telling the participant why. Assure them of their privacy and that usage of the recordings is limited to analysis and internal reporting.
A list of any documents that the participant must look at or sign first, for example, an NDA.
Instructions on when to switch any recording devices on and off.
The questions to ask, in thematic sections, for example, welcome, domain, and technology. These can include potential follow-on questions, to delve for more information if necessary.
A task section, which has several parts:
An introduction to the prototype, if necessary. If you are testing with a prototype, there will probably be unfinished areas that are not clickable. It is worth alerting participants, so they know what to expect while doing tasks and talking aloud.
Instructions on how to use the technology, if necessary. Ideally your participants should be familiar with the technology, but if this is not the case, you want to be testing the website, not the technology.
For example, this applies if you are testing with a particular screen reader that the participant has not used before, or if you are using eye-tracking technology.
An introduction to the tasks, including any scenarios provided to the participant.
The description of each task. Be careful not to use words from the website interface when describing tasks, so you do not guide the participant too much. For example, instead of: How would you add this item to your cart?, say: How would you buy this item?
Questions to include after each task, for example, the ease of use question.
Questions to prompt the participant if they are not thinking aloud when they should be, especially for each new page of the website or prototype. For example: What do you see here? What can you do here? What do you think these options mean?
Final questions to finish off the test and give the participant a chance to emphasize any of their experiences.
A list of any documents the participant must sign at the end, and instructions to give the incentive if appropriate.

Once the script is created, timing is added to each task and section to help the facilitator make sure that the tests do not run over time. This will be refined as the usability test is practiced. The script provides a structure to take notes in during the test, either on paper or digitally:

Create a spreadsheet with rows for each question and task
Use the first column for the script, from the welcome questions onwards
Capture notes in subsequent columns for the user
Use a separate spreadsheet for each participant during the test
After all the tests, combine the results into one spreadsheet so you can easily analyze and compare

The following is a diagram showing sections of the script for notetaking, with sample questions and tasks, for a radio station website.
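As a sketch of the combining step described above, the following Python merges per-participant note files into one sheet. The notes/ folder, file naming, and two-column layout are assumptions for this example:

```python
import csv
from pathlib import Path

# Assumed layout: notes/participant_01.csv, notes/participant_02.csv, ...
# Each file shares the same first column (the script prompts, in order)
# and has one column of that participant's notes.
combined, prompts = {}, []
for path in sorted(Path("notes").glob("participant_*.csv")):
    with path.open(newline="") as f:
        rows = list(csv.reader(f))
    prompts = [row[0] for row in rows]              # identical across files
    combined[path.stem] = [row[1] for row in rows]  # this participant's notes

with open("combined.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["script"] + list(combined))
    for i, prompt in enumerate(prompts):
        writer.writerow([prompt] + [notes[i] for notes in combined.values()])
```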
Securing a venue and inviting clients and team members

If you are testing at an external venue, this is one of the first things you will need to organize for a usability test, as these venues typically need to be booked one to two months in advance. Even if you are testing in your own offices, you will still need to book space for the testing. When considering a test venue, you should be looking for the following:

A quiet, dedicated space where the facilitator, participant, and potentially a notetaker can sit. This needs surfaces for all the equipment that will be used during the test, and comfortable space for the participant. Consider the lighting in the test room. This might cause glare if you are testing on mobile phones, so think about how best to handle the glare, for example, where the best place is for the participant to sit, and whether you can use indirect lighting of some kind.
A reception room where participants can wait for their testing session. This should be comfortable. You may want to provide refreshments for participants here.
Ideally, an observation room for people to watch the usability tests. Observers should never be in the same space as the testing, as this will distract participants and probably make them uncomfortable. The observation room should be linked to the test room, either with cables or wirelessly, so observers can see what is happening on the participant's screen, and hear (and ideally see) the participant during the test. Some observation rooms have two-way mirrors into the test room, so observers can watch the facilitator and participant directly. Refreshments should be available for the observers.

We have discussed various testing roles previously. Here, we describe them formally:

Facilitator: This is the person who conducts the test with the participant. They sit with the participant, instruct them in the tasks, ask questions, and take notes. This is the most important role during the test. We will discuss it further in the Conducting usability tests section.
Participant: This is the person who is doing the test. We will discuss recruiting test participants in the next section.
Notetaker: This is an optional role. It can be worth having a separate notetaker, so the facilitator does not have to take notes during the test. This is especially the case if the facilitator is inexperienced. If there is a notetaker, they sit quietly in the test room and do not engage with the participant, except when introduced by the facilitator.
Receptionist: Someone must act as receptionist for the participants who arrive. This cannot be the facilitator, as they will be in the sessions. Ask a team member or the office receptionist to take this role.
Observers: Everyone else is an observer. These can be other team members and/or clients. Observers should be given guidelines for their behavior. For example, they should not interact with test participants or interrupt the test. They watch from a separate room, and should be quiet enough that they cannot be heard in the test room (often these rooms are close to each other). The facilitator should discuss the tests with observers between sessions, to check if they have any questions they would like added to the test, and to discuss observations. It is worth organizing a debriefing immediately after the tests, or the next day if possible, for the observers and facilitator to discuss the tests and observations.

It is important that as many stakeholders as possible are persuaded to watch at least some of the usability testing. Watching people using your designs is always enlightening, and really helps to bring a team together. Remember to invite clients and team members early, and send reminders closer to the day.

Recruiting participants

When recruiting participants for usability tests, make sure that they are as close as possible to your target audience. If your website is live and you have a pool of existing users, then your job is much easier. However, if you do not have a user pool, or you want to test with people who have not used your site, then you need to create a specification for appropriate users that you can give to a recruiter or use yourself. To specify your target audience, consider what kinds of people use your website, and what attributes will cause them to behave differently from other users. If you have created personas during previous research, use these to help identify target user characteristics. If you are designing a website for a client, work with them to identify their target users. It is important to be specific, as it is difficult to look for people who fulfill abstract qualities. For example, instead of asking for tech-savvy people, consider what kinds of technology such people are likely to use, and what their activities are likely to be. Then ask for people who use the technology in the ways you have identified. Consider the behaviors that result from certain types of beliefs, attitudes, and lifestyle choices. The following are examples of general areas you should consider:

Experience with technology: You may want users who are comfortable with technology or who have used specific technology, for example, the latest smartphones, or screen readers.
Consider the properties that will identify these people. For example, you can specify that all participants must own a specific type or generation of mobile device, and must have owned it for at least two months.
Online experience: You may want users with a certain level and frequency of internet usage. To elicit this, you can specify that you want people who have bought items online within the last few months, or who do online banking, or who have never done these things.
Social media presence: Often, you want people who have a certain amount of social media interaction, potentially on specific platforms. In this case, you would specify that they must regularly post to or read social media such as Facebook, Twitter, Instagram, or Snapchat, or more hobbyist platforms such as Pinterest and/or Flickr.
Experience with the domain: Participants should not know too much or too little about the domain. For example, if you are testing banking software, you may want to exclude bank employees, as they are familiar with how things work internally.
Demographics: Unless your target audience is very skewed, you probably want to recruit a variety of people demographically, for example, a range of ages, genders, ethnicities, and economic and education levels.

There may be other characteristics you need in your usability test participants. The previous characteristics should give you an idea of how to specify such people. For example, you may want hobbyist photographers. In this case, you would recruit people who regularly take photographs and share them with friends in some way. Do not use people who you have previously used in testing, unless you specifically need people like this, as they will be familiar with your tests and procedures, which will interfere with results.
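A screener can encode such concrete criteria directly. A minimal sketch, with made-up criteria for a mobile shopping study; the field names and thresholds are hypothetical:

```python
def passes_screener(candidate: dict) -> bool:
    """Concrete, behavioral criteria rather than abstract 'tech savvy'."""
    return (
        candidate.get("owns_smartphone_months", 0) >= 2    # device ownership
        and candidate.get("online_purchases_90d", 0) >= 1  # buys things online
        and not candidate.get("works_in_domain", False)    # exclude insiders
        and not candidate.get("previous_test_participant", False)
    )

print(passes_screener({"owns_smartphone_months": 14, "online_purchases_90d": 3,
                       "works_in_domain": False,
                       "previous_test_participant": False}))  # True
```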
Recruiting takes time and is difficult to do well. There are various ways of recruiting people for user testing, depending on your business. You may be able to use people or organizations associated with your business, or target audience members, to recruit people using the screening questions and incentives that you give them. You can set up social media lists of people who follow your business and are willing to participate. You can also use professional recruiters, who will get you exactly the kinds of people you need, but will charge you for it. For most tests, an incentive is usually given to thank participants for their time. This is often money, but it can also be a gift, such as stationery or gift certificates.

A recruitment brief is the document that you give to recruiters. The following are the details you need to include:

The day of the test, the test length, and the ideal schedule. This should state the times at which the first and last participants may be scheduled, how long each test will take, and the buffer period that should be left between sessions.
The venue. This should include an address, maps, parking, and travel information.
Contact details for the team members who will oversee the testing and recruitment.
A description of the test that can be given to participants.
The incentives that will be provided.
The list of qualities you need in participants, or screening questions to check for these.

This document can be modified to share with less formal recruitment associates. The benefit of recruiters is that they handle the whole recruitment process. If you and your team recruit participants yourselves, you will need to remind them a week before the test and the day before the test, usually by messaging or emailing them. On the day of the test, phone participants to confirm that they will be arriving, and that they know how to get to the venue. Participants still often do not attend tests, even with all the reminders. This is the nature of testing with real people. Ideally you will be given some notice, so try to recruit an extra couple of possible participants who you can call in a pinch on the day.

Setting up the hardware, software, and test materials

Depending on the usability test, you will have to prepare different hardware, software, and test materials. These include screen recording software and hardware, notetaking hardware, the prototype to test, screen sharing options, and so on. The first thing to consider is the prototype, as this will have implications for hardware and software. Are you testing a live website, an electronic prototype, or a paper prototype?

Live website: Set up any accounts or passwords that may be necessary. Make sure you have reliable access to the internet, or a way to cache the website on your machine if necessary.
Electronic prototype: Make sure the prototype works the way it is supposed to, and that all the parts that are accessed during the tasks can be interacted with, if required. Try not to make it too obvious which parts work and which do not, as this may guide participants to the correct actions during the test. Be prepared to talk participants through parts of the prototype that do not work, so they have context for the tasks. Keep a safe copy of the prototype in case the test copy becomes corrupted in some way.
Paper prototype: Make sure that you have sketches or printouts of all the screens that you need to complete the tasks. With paper prototype testing, the facilitator takes the role of the computer, shows the results of the actions that the participant proposes, and talks participants through the screens. Make sure that you are prepared for this and know the order of the screens. Have multiple copies of the paper prototype in case parts get lost or destroyed.

Whichever of the three you are testing, have the team go through the test tasks beforehand to confirm that everything works as it should. For hardware and other software, keep an equipment list, so you can check that you have all the necessary hardware with you. You may need to include:

Hardware for the participant to interact with the prototype or live site: This may be a desktop, laptop, or mobile device. If testing on a mobile device, you can ask participants to use their own familiar phones instead of an unfamiliar test device. However, participants may have privacy concerns about using their own phones, and you will not be able to test the prototype or live site on the phone beforehand. If you provide a laptop, include a separate mouse, as people often have difficulty with unfamiliar mouse pads.
Recording the screen and audio: This is usually screen capture software. There are many options for screen capturing software, such as Lookback, an inexpensive option for iOS and Android, and CamStudio, a free option for the PC. Specialist software that handles multiple camera inputs allows you to record face and screen at the same time. Examples are iSpy, free CCTV software; Silverback, an inexpensive option for the Mac; and Morae, an expensive but impressive option for the PC.
Mobile recording alternative: You can also record mobile video with an external camera that captures the participant's fingers on the screen.
This means you do not have to install additional software on the phone, which might cause performance problems. In this case, you would use a document camera attached to the table, or a portable rig holding the phone, with a camera, attached to a nearby PC. The video will include hesitations and hovering gestures, which are useful for understanding user behavior, but fingers might occlude the screen. In addition, rigs may interfere with natural usage of the mobile phone, as participants must hold the rig as well as the phone.

Observer viewing screen: This is needed if there are observers. The venue might have screen sharing set up; if not, you will have to bring your own hardware and software. This could be an external monitor and extension cables to connect to a laptop in the interview room. It could also be screen sharing software, for example, join.me.
Capturing notes: You will need a method to capture notes. Even if you are screen recording, notes will help you to review the recordings more efficiently, and remind you about parts of the recording you wanted to pay special attention to. One method is using a tablet or laptop and a spreadsheet. Typing is fast, and the electronic notes are easy to put together after the tests. An alternative is paper and pencil. The benefit of this is that it is the least disruptive to the participant. However, these notes must then be captured electronically.
Camera for the participant's face: Capturing the participant's face is not crucial. However, it provides good insight into their feelings about tasks and questions. If you don't record the face, you will only have tone of voice and the notes that were taken to remind you. Possible methods are using a webcam attached to the computer doing the screen recording, or using built-in software such as Hangouts, Skype, or FaceTime for Apple devices.
Microphone: Often sound quality is not great in screen capturing software, because of feedback from computer equipment. Using an external microphone improves the quality of the sound.
Wireless router: A portable wireless router in case of internet problems (if you are using the internet).
Extra extension cables and chargers for all devices.

You will also need to make sure that you have multiple copies of all documents needed for the testing. These might include:

Consent form: When you are testing people, they typically need to give their permission to be tested. You also typically need proof that the incentive has been received by the participant. These are usually combined into a form that the participant signs to give their permission and acknowledge receipt of the incentive.
Non-disclosure agreement (NDA): Many businesses require test participants to sign NDAs before viewing the prototype. This must be signed before the test begins.
Test materials: Any documents that provide details to the participants for the test.
Checklists: It is worth printing out your checklists of things to do and equipment, so that you can check items off as you complete actions, and be sure that you have done everything by the time it needs to be done.

The following figure shows a basic sample checklist for planning a usability test. For a more detailed checklist, add in timing and break the tasks down further. These refinements will depend on the specific usability test. Where you are uncertain about how long something will take, overestimate. Remember that once you have fixed the day, everything must be ready by then.
Checklist for usability test preparation

Conducting usability tests

On the day(s) of the usability test, if you have planned properly, all you should have to worry about are the tests themselves and interacting with the participants. Here is a list of things to double-check on the day of each test:

Before the first test:
Set up and check equipment and rooms.
Have a list of participants and their order.
Make sure there are refreshments for participants and observers.
Make sure you have a receptionist to welcome participants.
Make sure that the prototype is installed, or that the website is accessible via the internet and working.
Test all equipment, for example, recording software, screen sharing, and audio in the observation room.
Turn off anything on the test computer or device that might interfere with the test, for example, email, instant messaging, virus scans, and so on.
Create bookmarks for any web pages you need to open.

Before each test:
Have the script ready to capture notes from a new participant.
Have the screen recorder ready.
Have the browser open in a neutral position, for example, Google search.
Have the forms to be signed and the incentive ready.
Start screen sharing.
Reload sample data if necessary, and clear the browser history from the last test.

During each test:
Follow the script, including when the participant must sign forms and receive the incentive.
Press record on the screen recorder.
Give the microphone to the participant if appropriate.

After each test:
Stop recording and save the video.
Save the script.
End screen sharing.
Note extra details that you did not have time for during the session.

Once you have all the details organized, the test session is in the hands of the facilitator.

Best practices for facilitating usability sessions

The facilitator should be welcoming and friendly, but relatively ordinary and not overly talkative. The participant and website should be the focus of the interview and test, not the facilitator. To create rapport with the participant, the facilitator should be an ally. A good way to do this is to make fun of the situation and reassure participants that their experiences in the test will be helpful. Another good technique is to ask more like an apprentice than an expert, so that the participant answers your questions, for example: Can you tell me more about how this works? and What happens next?

Since you want participants to feel as natural and comfortable as possible in their interactions, the facilitator should foster natural exploration and help satisfy participant curiosity as much as possible. However, they need to remain aware of the script and goals of the test, so that the participant covers what is needed. Participants often struggle to talk aloud. They forget to do so while doing tasks. Therefore, the facilitator often needs to nudge participants to talk aloud or to give information. Here are some useful questions and comments:

What are you thinking?
What do you think about that?
Describe the steps you're doing here.
What's going on here?
What do you think will happen next?
Is that what you expected to happen?
Can you show me how you would do that?

When you are asking questions, you want to be sure that you help participants to be as honest and accurate as possible. We've previously stated that people are notoriously bad at projecting what they will do or remembering what they did. This does not mean that you cannot ask about what people do. You must just be careful about how you ask, and always try to keep it concrete.
The priorities in asking questions are:

Now: Participants talking aloud about what they are doing and thinking now.
Retrospective: Participants talking about what they have done or thought in the past.
Never prospective: Never ask participants about what they would do in the future. Rather, ask about what they have done in similar situations in the past.

Here are some other techniques for ensuring you get the best out of the participants, and do not lead them too much yourself:

Ask probing questions such as why and how to get to the real reasons for actions.
Do not assume you know what participants are going to say. Check or paraphrase if you are not sure what they said or why they said it. For example: So are you saying the text on the left is hard to read? or You're not sure about what? or That picture is weird? How?
Do not ask leading questions, as people will give positive answers to please you. For example, do not say: Does that make sense?, Do you like that?, or Was that easy? Rather say: Can you explain how this works?, What do you think of that?, and How did you find doing that task?
Do not tell participants what they are looking at. You are trying to find out what they think. For example, instead of: Here is the product page, say: Tell me what you see here, or Tell me what this page is about.
Return the question to the participant if they ask what to do or what will happen: I can't tell you because we need to find out what you would do if you were alone at home. What would you normally do? or What do you think will happen?
Ask one question at a time, and make time for silence. Don't overload the participants. Give them a chance to reply. People will often try to fill the silence, so you may get more responses if you don't rush to fill it yourself.
Encourage action, but do not tell them what to do. For example: Give it a try.
Use acknowledgment tokens to encourage action and talking aloud. For example: OK, uh huh, mm hmm.

A good facilitator makes participants feel comfortable and guides them through the tasks without leading, while observing carefully and asking questions where necessary. It takes practice to accomplish this well. The facilitator (and the notetaker, if there is one) must also think about the analysis that will be done. Analysis is time-consuming; think about what can be done beforehand to make it easier. Here are some pointers:

Taking notes on a common spreadsheet with a script is helpful, because the results are ready to be combined easily.
If you are gathering quantitative results, such as timing tasks or counting steps to accomplish activities, prepare spaces to note these on the spreadsheet before the test, so all the numbers are easily accessible afterward.
If you are rating task completion, note a preliminary rating as each task is completed. This can be as simple as selecting appropriate cell colors beforehand and coloring each cell as the task is completed. This rating may change during analysis, but you will have initial guidance.
Listen for useful and illustrative quotes or video segment opportunities. Note down the quote, or roughly note the timestamp, so you know where to look in the recording.
In general, have a timer at hand, and note the timestamp of any important moments in each test. This will make reviewing the recordings easier and less time-consuming.
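For the last two pointers, a tiny timestamp logger is enough. A sketch, assuming the facilitator or notetaker starts it at the beginning of the session:

```python
import time

class SessionLog:
    """Records elapsed-time marks so recordings can be reviewed quickly."""

    def __init__(self):
        self.start = time.monotonic()
        self.events = []

    def mark(self, note: str) -> None:
        elapsed = time.monotonic() - self.start
        self.events.append((elapsed, note))
        print(f"[{elapsed / 60:05.2f} min] {note}")

log = SessionLog()
log.mark("good quote about the checkout flow")
log.mark("task 3 completed, but only with help")
```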
We examined how to plan, organize, and conduct a usability test. As part of this, we discussed how to design a test with goals, tasks, metrics, and questions using the definition of usability.

If you liked this article, be sure to check out the book UX for the Web to learn how to make a web app fully accessible from a development and design perspective.

3 best practices to develop effective test automation with Selenium
Unit Testing in .NET Core with Visual Studio 2017 for better code quality
Unit Testing and End-To-End Testing

Apple USB Restricted Mode: Here's Everything You Need to Know

Amarabha Banerjee
15 Jun 2018
4 min read
You must have heard about the incident where the FBI was looking to unlock the iPhone of a mass shooting suspect (one of the attackers in the San Bernardino shooting in 2015). The feds could not unlock the phone, as Apple didn't budge from its stand of protecting user data. A few days later, the police said that they had found a private agency to open the phone. The seed of that feud between the feds and Apple has now evolved into a fully grown tree. This month, Apple announced a new security feature called USB Restricted Mode. This disables the device's Lightning port after the device has been locked for one hour. Quite expectedly, law enforcement agencies are not at ease with this particular development. This feature was first introduced in the iOS 11.3 release and then retracted in the next release, but Apple now plans to introduce it in the upcoming iOS 12 beta release. The reason, as stated by Apple, is to protect user data from third-party hackers and malware that have the potential to access iPhone data remotely.

You must be wondering to what extent these threats are genuine, and whether this will mean unwittingly locking yourself out of your phone with nothing to get you out of the situation. Well, the answer is multilayered. Firstly, if you are not an avid supporter of data privacy and feel you have nothing to hide, then this move might just annoy you for a while. You might worry about times when your phone is locked and you suddenly forget your passcode. Pretty simple: write it down somewhere safe and remember where you have kept it. But in case you are like me, you keep seeing the recent news of user data being hacked, and that worries you. Users are being profiled by different companies for varying end objectives, from selling products to shaping your opinion about politics and other aspects of your life. As such, this news might make you a bit more comfortable about your next iOS update.

Private agencies coming up with solutions to open locked iPhones worried Apple. Companies like Cellebrite and Grayshift are selling devices that can hack any locked Apple device (iPhone or iPad) by using the Lightning port. The apparent price of one such device is around US$ 15,000. What prompted Apple to introduce this security feature into its devices was that government agencies were buying these devices on a regular basis to hack into phones. Hence the threat was real, and the only way to address the fears of over 700 million iPhone users seemed to be introducing USB Restricted Mode.

The war is, however, just beginning. Third-party companies are already claiming, as yet unconfirmed, that they have devised a way to overcome this new security feature. But Apple is sure to take cognizance of this and press its developers to stay ahead in this cat and mouse game. This has not gone down well with the law enforcement agencies either; they see it as an attempt by Apple to create more hurdles to preventing serious and heinous crimes such as paedophilia. Their side of the argument states that with the one-hour timer after the user locks their phone, it becomes much more difficult for them to indict the guilty, because the guilty have more room to escape. What do you think this means? Does this give you more faith in your Apple product, and will it really compel you to buy that $1200 iPhone with the confidence that your banking data, personal messages, pictures, and other sensitive data are safe in the hands of Apple?
Or will it embolden the perpetrators of crime, confident that their activities are now protected not just by a passcode, but by an hour of time after they lock the phone, beyond which it becomes a black box? No matter what your thoughts are, the war between hackers and Apple is on. If you belong to either of these communities, these are exciting times. If you are one of the 700 million Apple users, you can feel a bit more secure after the iOS 12 update rolls out.

Apple changes app store guidelines on cryptocurrency mining
Apple introduces macOS Mojave with UX enhancements like voice memos, redesigned App Store
Apple releases iOS 11.4 update with features including AirPlay 2, and HomePod among others

A serverless online store on AWS could save you money. Build one.

Savia Lobo
14 Jun 2018
9 min read
In this article you will learn to build an entire serverless project of an AWS online store, beginning with a React SPA frontend hosted on AWS, followed by a serverless backend with API Gateway and Lambda functions.

This article is an excerpt taken from the book, 'Building Serverless Web Applications', written by Diego Zanon. In this book, you will be introduced to the AWS services, and you'll learn how to estimate costs, and how to set up and use the Serverless Framework.

The serverless architecture of AWS' online store

We will build a real-world use case of a serverless solution. This sample application is an online store with the following requirements:

A list of available products
Product details with user rating
Adding products to a shopping cart
Account creation and login pages

For a better understanding of the architecture, take a look at the following diagram, which gives a general view of how the different services are organized and how they interact.

Estimating costs

In this section, we will estimate the costs of our sample application demo based on some usage assumptions and Amazon's pricing model. All pricing values used here are from mid-2017 and consider the cheapest region, US East (Northern Virginia). This section covers an example to illustrate how costs are calculated. Since the billing model and prices can change over time, always refer to the official sources to get updated prices before making your own estimations. You can use Amazon's calculator, which is accessible at this link: http://calculator.s3.amazonaws.com/index.html. If you still have any doubts after reading the instructions, you can always contact Amazon's support for free to get commercial guidance.

Assumptions

For our pricing example, we can assume that our online store will receive the following traffic per month:

100,000 page views
1,000 registered user accounts
200 GB of data transferred, considering an average page size of 2 MB
5,000,000 code executions (Lambda functions), with an average of 200 milliseconds per request

Route 53 pricing

We need a hosted zone for our domain name, and it costs US$ 0.50 per month. Also, we need to pay US$ 0.40 per million DNS queries to our domain. As this is a prorated cost, 100,000 page views will cost only US$ 0.04.

Total: US$ 0.54

S3 pricing

Amazon S3 charges you US$ 0.023 per GB/month stored, US$ 0.004 per 10,000 requests to your files, and US$ 0.09 per GB transferred. However, as we are considering CloudFront usage, transfer costs will be charged at CloudFront prices and will not be considered in the S3 billing. If our website occupies less than 1 GB of static files and has an average page of 2 MB and 20 files, we could serve 100,000 page views for less than US$ 20. With CloudFront in front, S3 costs go down to US$ 0.82, while you pay for CloudFront usage in another section. Real costs would be even lower, because CloudFront caches files and would not need to make 2,000,000 file requests to S3, but let's skip this detail to reduce the complexity of this estimation. On a side note, the cost would be much higher if you had to provision machines to handle this number of page views to a static website with the same availability and scalability.

Total: US$ 0.82
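The two subtotals above reduce to straightforward arithmetic. A sketch in Python, using the mid-2017 prices quoted in the text and the traffic figures from the Assumptions section:

```python
page_views = 100_000

# Route 53: US$ 0.50 for the hosted zone, plus prorated DNS queries
# (assuming one query per page view).
route53 = 0.50 + (page_views / 1_000_000) * 0.40

# S3: ~1 GB stored at US$ 0.023/GB, plus 20 file requests per page at
# US$ 0.004 per 10,000 requests; transfer is billed under CloudFront.
s3 = 1 * 0.023 + (page_views * 20 / 10_000) * 0.004

print(f"Route 53: US$ {route53:.2f}")  # US$ 0.54
print(f"S3:       US$ {s3:.2f}")       # US$ 0.82
```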
CloudFront pricing

CloudFront is slightly more complicated to price, since you need to guess how much traffic comes from each region, as regions are priced differently. The following table shows an example of estimation:

Region          Estimated traffic    Cost per GB transferred    Cost per 10,000 HTTPS requests
North America   70%                  US$ 0.085                  US$ 0.010
Europe          15%                  US$ 0.085                  US$ 0.012
Asia            10%                  US$ 0.140                  US$ 0.012
South America   5%                   US$ 0.250                  US$ 0.022

As we have estimated 200 GB of files transferred with 2,000,000 requests, the total will be US$ 21.97.

Total: US$ 21.97
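The US$ 21.97 figure is a weighted sum over the table above. The following sketch reproduces it from the same assumptions (200 GB transferred and 2,000,000 HTTPS requests):

```python
# region: (traffic share, US$ per GB, US$ per 10,000 HTTPS requests)
regions = {
    "North America": (0.70, 0.085, 0.010),
    "Europe":        (0.15, 0.085, 0.012),
    "Asia":          (0.10, 0.140, 0.012),
    "South America": (0.05, 0.250, 0.022),
}
gb_transferred, https_requests = 200, 2_000_000

total = sum(
    share * (gb_transferred * per_gb + https_requests / 10_000 * per_10k)
    for share, per_gb, per_10k in regions.values()
)
print(f"CloudFront: US$ {total:.2f}")  # US$ 21.97
```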
SimpleDB pricing

Take a look at the following SimpleDB billing, where the free tier is valid for new and existing users:

- US$ 0.14 per machine-hour (25 hours free)
- US$ 0.09 per GB transferred out to the Internet (1 GB free)
- US$ 0.25 per GB stored (1 GB free)

Now take a look at the following charges:

- Compute charges: considering 5 million requests with an average of 200 milliseconds of execution time, where 50% of this time is spent waiting for the database engine to execute, we estimate 139 machine-hours per month. Discounting the 25 free hours, we have an execution cost of US$ 15.96.
- Transfer charges: since we'll transfer data between SimpleDB and AWS Lambda, there is no transfer cost.
- Storage charges: if we assume a 5 GB database, this results in US$ 1.00, since 1 GB is free.

Total: US$ 16.96, but this will not be added to the final estimation, since we will run our application using DynamoDB.

DynamoDB

DynamoDB requires you to provision the throughput capacity that you expect your tables to offer. Instead of provisioning hardware, memory, CPU, and other factors, you specify how many read and write operations you expect, and AWS handles the necessary machine resources to meet your throughput needs with consistent, low-latency performance. One read capacity unit represents one strongly consistent read per second, or two eventually consistent reads per second, for objects up to 4 KB in size. One write capacity unit means that you can write one object of up to 1 KB per second. Given these definitions, AWS offers a permanent free tier of 25 read units and 25 write units of throughput capacity, in addition to 25 GB of free storage. It charges as follows:

- US$ 0.47 per month for every Write Capacity Unit (WCU)
- US$ 0.09 per month for every Read Capacity Unit (RCU)
- US$ 0.25 per GB/month stored
- US$ 0.09 per GB transferred out to the Internet

Since our estimated database will have only 5 GB, we are within the free tier, and we will not pay for transferred data because there is no transfer cost to AWS Lambda. Regarding read/write capacities, we have estimated 5 million requests per month. If we distribute them evenly, we get roughly two requests per second; in this case, we will consider that to be one read and one write operation per second. We now need to estimate how many objects are affected by each read and write operation. If a write operation manipulates 10 items on average, and a read operation scans 100 objects, we need to reserve 10 WCU and 100 RCU. As we have 25 WCU and 25 RCU for free, we only need to pay for 75 RCU per month, which costs US$ 6.75.

Total: US$ 6.75
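The provisioned-capacity math can be scripted in the same way. Here is a minimal TypeScript sketch of the DynamoDB estimate, again assuming the mid-2017 prices and the 10 WCU / 100 RCU reservation from the text:

```typescript
// Reproduces the DynamoDB estimate above, using the same mid-2017 prices.
const FREE_WCU = 25;
const FREE_RCU = 25;
const PRICE_PER_WCU_MONTH = 0.47; // US$
const PRICE_PER_RCU_MONTH = 0.09; // US$

function dynamoMonthlyCost(provisionedWcu: number, provisionedRcu: number): number {
  const wcuCharge = Math.max(provisionedWcu - FREE_WCU, 0) * PRICE_PER_WCU_MONTH;
  const rcuCharge = Math.max(provisionedRcu - FREE_RCU, 0) * PRICE_PER_RCU_MONTH;
  return wcuCharge + rcuCharge; // storage and transfer are free in our scenario
}

// 10 WCU and 100 RCU, as estimated in the text:
console.log(dynamoMonthlyCost(10, 100).toFixed(2)); // "6.75"
```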
Total pricing

Let's summarize the cost of each service in the following table:

Service        Monthly cost
Route 53       US$ 0.54
S3             US$ 0.82
CloudFront     US$ 21.97
Cognito        US$ 0.30
IoT            US$ 1.01
CloudWatch     US$ 1.00
API Gateway    US$ 17.93
Lambda         US$ 2.47
DynamoDB       US$ 6.75
Total          US$ 52.79

This results in a total cost of roughly US$ 50 per month in infrastructure to serve 100,000 page views. If you have a conversion rate of 1%, you get 1,000 sales per month, which means that you pay about US$ 0.05 in infrastructure for each product that you sell.

In this article, you learned about the serverless architecture of an AWS online store and how to estimate its costs. If you've enjoyed reading the excerpt, do check out Building Serverless Web Applications to learn how to monitor the performance, efficiency, and errors of your apps, and how to test and deploy your applications.

Google Compute Engine Plugin makes it easy to use Jenkins on Google Cloud Platform
Serverless computing wars: AWS Lambdas vs Azure Functions
Using Amazon Simple Notification Service (SNS) to create an SNS topic

Progressive Web AMPs: Combining Progressive Web Apps and AMP

Sugandha Lahoti
14 Jun 2018
8 min read
Modern web development is getting harder. Users want fast, responsive, and reliable browsing: faster results and richer experiences. In addition, modern apps need to be designed to support a large number of ecosystems, from the mobile web and desktop web to native iOS, native Android, Instant Articles, and so on. Every new technology that launches has its own USP. What we need today is to combine the features of the various popular mobile technologies on the market and reap their benefits as a combination.

Acknowledging the standalones

A study by Google found that "53% of mobile site visits are abandoned if pages take longer than 3 seconds to load." This calls for making page loads faster and effortless. One cure for this illness comes in the form of AMP, or Accelerated Mobile Pages, the brainchild of Google and Twitter. AMPs are blazingly fast web pages built purely for readability and speed. Essentially, they are HTML and most of CSS, but with no author-written JavaScript, so heavy-duty assets such as images are not loaded until they are scrolled into view. AMP links are pre-rendered before you click on them. This is made possible by the AMP caching infrastructure, which automatically caches the content and serves it the moment the page is opened, which is why it feels instant. Because developers almost never write JavaScript, AMP offers a cheap, yet fairly interactive, deployment model. However, AMPs are useful only for a narrow range of content, and they have limited functionality.

Users, on the other hand, are also looking for reliability and engagement. This called for the development of what is known as Progressive Web Apps. Proposed by Google in 2015, PWAs combine the best of mobile and web applications to offer users an enriching experience. Think of a Progressive Web App as a website that acts and feels like a complete app. Once the user starts exploring the app within the browser, it progressively becomes smarter and faster, and makes the user experience richer. Application shell architecture and service workers are the two core drivers that enable a PWA to offer this speed and functionality. Key benefits that PWAs offer over traditional mobile sites include push notifications, a highly responsive UI, access to hardware such as the camera and microphone, and low data usage, to name a few.

The concoction: PWA + AMP

AMPs are fast and easy to deploy. PWAs are engaging and reliable. AMPs are effortless, more retentive, and instant. PWAs support dynamic content, push notifications, and web manifests. AMPs work on user acquisition; PWAs enhance the user experience. They work well on their own levels, but users want to start quick and stay quick: they want the first hop to be blazingly fast, and then richer pages backed by reliability and engagement. This called for combining the features of both into one, and this is how Progressive Web AMPs were born. PWAMP, as developers call it, combines the capabilities of the native app ecosystem with the reach of the mobile web. Let us look at how exactly it functions.

The Best of Both Worlds: Reaping the benefits of both

AMPs fall short when you have dynamic content: the lack of JavaScript means dynamic functionality, such as payments or push notifications, is unavailable. A PWA, on the other hand, can never be as fast as an AMP on the first click.
Progressive Web AMPs combine the best features of both by making the first click super fast and then rendering subsequent PWA pages and content. AMP opens a web page in the blink of an eye, with zero time lag, and the subsequent swift transition to the PWA delivers richer results with dynamic functionality. So it starts fast and builds up as users browse further. This merger is possible in three different ways.

AMP as PWA: AMP pages in combination with PWA features

This involves enabling PWA features in AMP pages. The user clicks on a link, it boots up fast, and they see an AMP page served from the AMP cache. On clicking subsequent links, the user moves away from the AMP cache to the site's own domain (origin). The website continues using the AMP library, but because it is now served from the origin, service workers become active, making it possible to prompt users (via a web manifest) to install a PWA version of the website for a progressive experience.

AMP to PWA: AMP pages used for a smooth transition to PWA features

In a PWA, the service worker and app shell kick in only after the second step. Hence, AMP pages can be a perfect entry point for your app: while the user discovers content through fast AMP pages, the PWA's service worker installs in the background, and on subsequent clicks the user is instantly upgraded to the PWA, which can add push notifications, reminders, web manifests, and so on. So the next click is also going to be instant. A minimal sketch of that handoff is shown below.
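As a rough illustration of the handoff, the TypeScript sketch below shows the service worker registration an AMP page can trigger; AMP provides the amp-install-serviceworker component to do this declaratively, while a regular page on the same origin can register the worker directly. The '/sw.js' path is an illustrative assumption:

```typescript
// The registration an AMP page can trigger declaratively with
// <amp-install-serviceworker src="/sw.js" layout="nodisplay">;
// a regular page on the same origin registers the worker directly.
// '/sw.js' is an illustrative path.
if ('serviceWorker' in navigator) {
  window.addEventListener('load', () => {
    navigator.serviceWorker
      .register('/sw.js')
      .then((registration) => {
        // From here the worker can pre-cache the PWA's app shell, so the
        // user's first non-AMP click is served locally and feels instant.
        console.log('PWA service worker active for scope:', registration.scope);
      })
      .catch((err) => console.error('Service worker registration failed:', err));
  });
}
```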
AMP in PWA: AMP as a data source for PWA

AMPs are easy and safe to embed. As they are self-contained units, they are easily embeddable in websites, and so they can be used as a data source for PWAs. This makes use of Shadow AMP, which can be introduced into your PWA. The Shadow AMP library loads in the top-level page, amplifies the portions of the content the developer chooses, and connects to whole documents in order to render them. As the AMP library is compiled and loaded only once for the entire PWA, it reduces backend implementation work and client complexity.
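A hedged sketch of that embedding flow is shown below, based on the Shadow AMP runtime's attachShadowDoc entry point; treat the URLs, the container selector, and the loading details as illustrative assumptions rather than a definitive implementation:

```typescript
// "AMP in PWA": fetch an AMP document and render it inside the PWA shell
// via the Shadow AMP runtime, which is loaded separately with
// <script async src="https://cdn.ampproject.org/shadow-v0.js">.
// The article URL and container selector are illustrative.
async function embedAmpDocument(url: string, host: Element): Promise<void> {
  const html = await (await fetch(url)).text();
  const doc = new DOMParser().parseFromString(html, 'text/html');

  // The shadow runtime drains callbacks pushed onto window.AMP once ready.
  const amp = ((window as any).AMP = (window as any).AMP || []);
  amp.push((AMP: any) => {
    AMP.attachShadowDoc(host, doc, url); // renders the AMP page in a shadow root
  });
}

// For example, when the user taps an article teaser in the app shell:
embedAmpDocument('/articles/story.amp.html', document.querySelector('#content')!);
```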
How they are used in real-world scenarios

Shopping

PWAMP offers high engagement to shoppers. Because AMP pages are surfaced prominently by Google search, AMP attracts customers to your site through faster discovery, and the PWA keeps them there with a rich, immersive, app-like shopping experience. Lancôme, the L'Oréal Paris cosmetics brand, is soon combining AMP with its existing PWA. Its PWA had already led to a 17% year-over-year increase in mobile sales. With the addition of AMP, it aims to build lightweight mobile pages that load as fast as possible on smartphones, making the site faster and more engaging.

Travel

PWAMP lets users browse a list of hotels that loads instantly on the first click. The customer can then book a hotel of their choice on a subsequent click, which upgrades them to the PWA experience. Wego is a Singapore-based travel service whose PWAMP achieves a load time of 1.6 seconds for new users and 1 second for returning customers. Since its launch, this has helped increase site visits by 26%, reduce bounce rates by 20%, and increase conversions by 95%.

News and media

Progressive Web AMPs are also highly useful in news apps. As the user engages with content on an AMP page, the PWA downloads in the background, creating frictionless, uninterrupted reading. The Washington Post has built one such app, where users experience the Progressive Web App after reading an AMP article and clicking through to the PWA link when it appears in the menu. In addition, its PWA icon can be added to a user's home screen through the phone's browser.

All of the above examples show how the concoction proves to be fast, no matter what. Progressive Web AMPs are progressively enhanced with just one backend, the AMP, to rule them all, meaning that deploy targets are reduced considerably: all ecosystems, namely web, Android, and iOS, are supported with just thin layers of extra code. This makes them highly beneficial where engineering resources are constrained or infrastructure complexity must be kept down. In addition, Progressive Web AMPs are highly useful when a site has a lot of static content on individual pages, as in travel, media, and news. All of this shows that PWAMP has the power to provide a full mobile web experience through an artful and strategic combination of the AMP and PWA technologies. To learn more about how to build your own Progressive Web AMPs, you can visit the official AMP developer website.

Top frameworks for building your Progressive Web Apps (PWA)
5 reasons why your next app should be a PWA (progressive web app)
Build powerful progressive web apps with Firebase

Are containers the end of virtual machines?

Vijin Boricha
13 Jun 2018
5 min read
For quite some time now, virtual machines (VMs) have enjoyed a lot of traction. The major reason for this trend was that IT organizations became convinced that, instead of keeping a huge room filled with servers, it is better to deploy all your workloads on a single piece of hardware. There is no doubt that virtual machines have succeeded: they save a lot of cost, work well, and make failovers easier. In a similar sense, when containers were introduced, they received a lot of attention and have recently gained even more popularity among IT organisations. There is a set of considerable reasons for this buzz: they are highly scalable, easy to use, portable, faster to start, and, above all, cost-effective. Containers also reduce management headaches, as they share a common operating system. With this kind of flexibility, it is much easier to fix bugs, apply update patches, and make other alterations. All in all, containers are lightweight and more portable than virtual machines. If all of this is true, are virtual machines going extinct? For the answer, you will have to dive into the complexities of both worlds.

How virtual machines work

A virtual machine is an individual operating system installed alongside your usual operating system, implemented through software emulation and hardware virtualization. Usually, multiple virtual machines are used on servers, where the physical machine remains the same but each virtual environment runs a completely separate service. Consider an Ubuntu server as a VM and use it to install any services you need. If your deployment needs a set of software to handle web applications, you provide all the necessary services to your application. If a requirement for an additional service suddenly appears while all your resources are preoccupied, you simply install the new service on the guest virtual machine and you are all set.

Advantages of using virtual machines:

- Multiple OS environments can run simultaneously on the same physical machine
- Easy to maintain and highly available, with convenient recovery and application provisioning
- Virtual machines tend to be more secure than containers
- Operating system flexibility on VMs is better than on containers

Disadvantages of using virtual machines:

- Simultaneously running virtual machines may deliver unstable performance, depending on the workload placed on the system by the other running virtual machines
- Hardware accessibility becomes quite difficult with virtual machines
- Virtual machines are heavier, taking up several gigabytes

How containers work

You can consider containers as lightweight, executable packages that provide everything an application needs to run and function as desired. A container usually sits on top of a physical server and its host OS, allowing applications to run reliably in different environments by abstracting away the operating system and physical infrastructure. So, where VMs depend entirely on hardware, we have a new popular kid in town that requires significantly less hardware and does the task with ease and efficiency. Suppose you want to deploy multiple web servers faster: containers make it easier, because you are deploying single services, and the containers require less hardware compared to virtual machines. The benefits of using containers do not end here; the short sketch below shows how scriptable these single-service deployments can be.
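As an illustrative sketch of how scriptable that is, the TypeScript below uses the third-party dockerode client to start several single-service web containers against a local Docker daemon; the image, container names, and ports are assumptions for the example:

```typescript
import Docker from 'dockerode'; // third-party Docker client for Node.js

// Starts `count` single-service web containers on a local Docker daemon.
// The image, container names, and host ports are assumptions for the example.
const docker = new Docker(); // defaults to /var/run/docker.sock

async function deployWebServers(count: number): Promise<void> {
  for (let i = 1; i <= count; i++) {
    const container = await docker.createContainer({
      Image: 'nginx:alpine',
      name: `web-${i}`,
      ExposedPorts: { '80/tcp': {} },
      HostConfig: { PortBindings: { '80/tcp': [{ HostPort: String(8080 + i) }] } },
    });
    await container.start();
    console.log(`web-${i} is listening on localhost:${8080 + i}`);
  }
}

deployWebServers(3).catch(console.error);
```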
Docker, a popular container solution, can create a cluster of Docker engines managed as a single virtual system. So, if you're looking at deploying apps at scale with fewer failovers, your first preference should be containers.

Advantages of using containers:

- You can add more computing workload to the same server at any time, as containers consume fewer resources
- Servers can load more containers than virtual machines, as containers are usually measured in megabytes
- Containers make it easier to allocate resources to processes, which helps run your applications in different environments
- Containers are cost-effective solutions that help decrease both operating and development costs
- Bug tracking and testing are easier with containers, as there is no difference between running your application locally, on test servers, or in production
- Development, testing, and deployment time decreases with containers

Disadvantages of using containers:

- Since containers share the kernel and other components of the host operating system, they are more vulnerable and can impact the security of other containers as well
- Lack of operating system flexibility: every time you want to run a container on a different operating system, you need to start a new server

Now, coming to the original question: are containers worth it? Will they eliminate virtualization entirely? After reading this article, you must have already guessed the clear winner from the advantages and disadvantages of each platform. In virtual machines, the hardware is virtualized to run multiple operating system instances. If you need a complete platform that can provide multiple services, then virtual machines are your answer, as they are a mature and secure technology. If you are looking for high scalability, agility, speed, light weight, and portability, all of this comes under just one hood: containers. With this standardized unit of software, one can stay ahead of the competition. If you still have concerns over security, and over how a vulnerable kernel can jeopardize the cluster, then DevSecOps is your knight in shining armor. The whole idea of DevSecOps is to bring operations and development together with security functions. In a nutshell, everyone involved in the software development life cycle is responsible for security.

Kubernetes Containerd 1.1 Integration is now generally available
Top 7 DevOps tools in 2018
What's new in Docker Enterprise Edition 2.0?

Computer vision is growing quickly. Here's why.

Aaron Lazar
12 Jun 2018
6 min read
Computer Vision is one of those technologies that has grown in leaps and bounds over the past few years. Look back 10 years and it wasn't the case, as CV was more a topic of academic interest. Now, however, computer vision is clearly both a driver and a beneficiary of the renowned Artificial Intelligence. Through this article, we'll understand the factors that have sparked the rise of Computer Vision.

A billion $ market

You heard it right! Computer Vision is a billion-dollar market, thanks to the likes of Intel, Amazon, and Netflix investing heavily in the technology's development. From the way events are unfolding, the market is expected to hit a record $17 billion by 2023, a cumulative growth rate of over 7% per year from 2018 to 2023. This is a joint figure for both the hardware and software components related to Computer Vision.

Under the spotlight

Let's talk a bit about a few companies that are already taking advantage of Computer Vision and benefiting from it.

Intel

Several large organisations are investing heavily in Computer Vision. Last year, we saw Intel invest $15 billion in acquiring Mobileye, an Israeli auto startup. Intel published findings stating that the autonomous vehicle market itself would rise to $7 trillion by 2050. The autonomous vehicle industry will be one of the largest implementers of computer vision technology: these vehicles will use Computer Vision to "see" their surroundings and communicate with other vehicles.

Netflix

Netflix, on the other hand, is using Computer Vision for more creative purposes. With the rise of Netflix's original content, the company is investing in Computer Vision to harvest static image frames directly from source videos, providing a flexible source of raw artwork for digital merchandising. For example, a single episode of Stranger Things contains nearly 86,000 static video frames, which previously would have had to be analysed by human teams to identify the most appropriate stills to feature. That meant first going through each of those 86,000 images, then understanding what worked for viewers of the previous episode, and then applying that learning when selecting future images. Need I estimate how long that would have taken? Now, Computer Vision performs this task seamlessly, with much higher accuracy than humans.

Pinterest

Pinterest, the popular social networking application, sees millions of images, GIFs, and other visuals shared every day. In 2017, it released a feature called Lens, which allows users to use their phone's camera to search for similar-looking decor, food, and clothing in the real world. Users can simply point their cameras at an object, and Pinterest will show them similar styles and ideas. Recent reports reveal that Pinterest's revenue has grown by a staggering 58%!

National surveillance and CCTV

The world's biggest AI startup, SenseTime, provides China with the world's largest and most sophisticated CCTV network. With over 170 million CCTV cameras, government authorities and police departments are able to identify people seamlessly; police officers do this with smart glasses that have facial recognition capabilities. Bring this technology to Dubai and you've got a supercop in a supercar! The nationwide surveillance project, named Skynet, began as early as 2005, although recent advances in AI have given it a boost. Reading through discussions like these is real fun.
People used to quip that such "fancy" machines were only for the screen. If only they knew that such a machine would be a reality just a few years from then. Clearly, computer vision is one of the most highly valued commercial applications of machine learning, and when integrated with AI, it's an offer only a few can resist!

Star acquisitions that matter

Several acquisitions have taken place in the field of Computer Vision in the past two years alone, the most notable of them being Intel's acquisition of Movidius, to the tune of $400 million. Here are some of the others that have happened since 2016:

- Twitter acquires Magic Pony Technology for $150 million
- Snap Inc acquires Obvious Engineering for $47 million
- Salesforce acquires MetaMind for $32.8 million
- Google acquires Eyefluence for $21.6 million

This shows the potential of the computer vision market and how the big players are racing to dive deep into the technology.

Three little things driving computer vision

I would say there are three clear growth factors contributing to the rise of Computer Vision:

- Deep learning
- Advancements in hardware
- Growth of the datasets

Deep learning

The advancements in the field of deep learning are bound to boost Computer Vision. Deep learning algorithms are capable of processing tonnes of images far more accurately than humans. Take feature extraction, for example. The primary pain point with classical feature extraction is that you have to choose which features to look for in a given image; this becomes cumbersome, and almost impossible, when the number of classes you are trying to define starts to grow. There are so many features that you end up with a plethora of parameters, all of which have to be fine-tuned. Deep learning simplifies this process for you.

Advancements in hardware

With new hardware like GPUs capable of processing petabytes of data, algorithms can run faster and more efficiently. This has advanced real-time processing and vision capabilities. Pioneering hardware manufacturers like NVIDIA and Intel are in a race to create more powerful and capable hardware to support deep learning capabilities for Computer Vision.

Growth of the datasets

Training deep learning algorithms isn't a daunting task anymore. There are plenty of open source datasets you can choose from to train your algorithms, and the more the data, the better the training and accuracy. Here are some of the most notable datasets for computer vision:

- ImageNet, with 15 million images, is a massive dataset
- Open Images has 9 million images
- Microsoft Common Objects in Context (COCO) has around 330,000 images
- CALTECH-101 has approximately 9,000 images

Where tha money at?

The job market for Computer Vision is on the rise too, with Computer Vision featuring at #3 on the list of top jobs in 2018, according to Indeed. Organisations are looking for Computer Vision engineers who are well versed in writing efficient algorithms for handling large amounts of data. (Source: Indeed.com)

So is it the right time to invest in, or perhaps learn, Computer Vision? You bet it is! It's clear that Computer Vision is a rapidly growing market and will see sustained growth for the next few years. Whether you're just planning to start out or you're already competent with tools for Computer Vision, here are some resources to help you skill up with popular CV tools and techniques:

Introducing Intel's OpenVINO computer vision toolkit for edge computing
Top 10 Tools for Computer Vision
Computer Vision with Keras, Part 1

7 of the best machine learning conferences for the rest of 2018

Richard Gall
12 Jun 2018
8 min read
We're just about halfway through the year - scary, huh? But there's still time to attend a huge range of incredible machine learning conferences in 2018. Given that in this year's Skill Up survey, developers working in every field told us that they're interested in learning machine learning, it will certainly be worth your while (and money). We fully expect this year's machine learning conference circuit to capture the attention of those beyond the analytics world.

The best machine learning conferences in 2018

But which machine learning conferences should you attend for the rest of the year? There's a lot out there, and they're not always that cheap. Let's take a look at seven of the best machine learning conferences for the rest of this year.

AI Summit London

When and where? June 12-14, 2018, Kensington Palace and ExCel Center, London, UK.

What is it? AI Summit is all about AI and business - it's as much for business leaders and entrepreneurs as it is for academics and data scientists. The summit covers a lot of ground, from pharmaceuticals to finance to marketing, but the main idea is to explore the incredible ways Artificial Intelligence is being applied to a huge range of problems.

Who is speaking? According to the event's website, there are more than 400 speakers at the summit. The keynote speakers include a number of impressive CEOs, including Patrick Hunger, CEO of Saxo Bank, and Helen Vaid, Global Chief Customer Officer of Pizza Hut.

Who's it for? This machine learning conference is primarily for anyone who would like to consider themselves a thought leader. Don't let that put you off, though: with a huge number of speakers from across the business world, it is a great opportunity to see what the future of AI might look like.

ML Conference, Munich

When and where? June 18-20, 2018, Sheraton Munich Arabella Park Hotel, Munich, Germany.

What is it? Munich's ML Conference is also about the applications of machine learning in the business world, but it's a little more practical-minded than AI Summit - it's more about how to actually start using machine learning from a technological standpoint.

Who is speaking? Speakers at ML Conference are researchers and machine learning practitioners. Alison Lowndes from NVIDIA will be speaking, likely offering some useful insight into how NVIDIA is helping make deep learning accessible to businesses; Christian Petters, solutions architect at AWS, will also be speaking on the important area of machine learning in the cloud.

Who's it for? This is a good conference for anyone starting to become acquainted with machine learning. Obviously, data practitioners will be the core audience here, but sysadmins and app developers starting to explore machine learning would also benefit from this sort of machine learning conference.

O'Reilly AI Conference, San Francisco

When and where? September 5-7, 2018, Hilton Union Square, San Francisco, CA.

What is it? According to O'Reilly's page for the event, this conference is being run to counter those conferences built around academic AI research. It's geared (surprise, surprise) towards the needs of businesses. Of course, there's a little bit of aggrandizing marketing spin there, but the idea is fundamentally a good one: it's all about exploring how cutting-edge AI research can be used by businesses. It sits somewhere between the two above - practical enough to be of interest to engineers, but with enough blue-sky scope to satisfy the thought leaders.

Who is speaking? O'Reilly have some great speakers here.
There's someone else making an appearance for NVIDIA - Gaurav Agarwal, who's heading up the company's automated vehicles project. There's also Sarah Bird from Facebook, who will likely have some interesting things to say about how her organization is planning to evolve its approach to AI over the years to come.

Who is it for? This is for those working at the intersection of business and technology. Data scientists and analysts grappling with strategic business questions, and CTOs and CMOs beginning to think seriously about how AI can change their organization, will all find something here.

O'Reilly Strata Data Conference, New York

When and where? September 12-13, 2018, Javits Center, New York, NY.

What is it? O'Reilly's Strata Data Conference is slightly more big-data-focused than its AI Conference. Yes, it will look at AI and deep learning, but it's going to tackle those areas from a big data perspective first and foremost. It's more established than the AI Summit (it actually started back in 2012 as Strata + Hadoop World), so there's a chance it will have a slightly more conservative vibe. That could be a good or a bad thing, of course.

Who is speaking? This is one of the biggest big data conferences on the planet. As you'd expect, the speakers are from some of the biggest organizations in the world, from Cloudera to Google and AWS. There are a load of names we could pick out, but the one we're most excited about is Varant Zanoyan from Airbnb, who will be talking about Zipline, Airbnb's new data management platform for machine learning.

Who's it for? This is a conference for anyone serious about big data. There's going to be a considerable amount of technical detail here, so you'll probably want to be well acquainted with what's happening in the big data world.

ODSC Europe 2018, London

When and where? September 19-22, 2018, Novotel West, London, UK.

What is it? The Open Data Science Conference is very much all about the open source communities that are helping push data science, machine learning, and AI forward. There's certainly a business focus, but the event is as much about collaboration and ideas, and the organizers are keen to stress how mixed the crowd is. From data scientists to web developers, academics, and business leaders, ODSC is all about inclusivity. It also has a clear practical bent. Everyone will want different things from the conference, but learning is key here.

Who is speaking? ODSC haven't yet listed speakers on their website, simply stating that "our speakers include some of the core contributors to many open source tools, libraries, and languages". This indicates the direction of the event: community-driven, and all about the software behind it.

Who's it for? More than any of the other machine learning conferences listed here, this is probably the one that really is for everyone. Yes, it might be more technical than theoretical, but it's designed to bring people into projects. Speakers want to get people excited, whether they're an academic, an app developer, or a CTO.

MLConf SF, San Francisco

When and where? November 14, 2018, Hotel Nikko, San Francisco, CA.

What is it? MLConf has a lot in common with ODSC. The focus is on community and inclusivity rather than being overtly corporate. However, it is very much geared towards cutting-edge research from people working in industry and academia, which gives it a little more of a specialist angle than ODSC.

Who is speaking? At the time of writing, MLConf are on the lookout for speakers.
If you're interested, submit an abstract - guidelines can be found here. The event does, however, have Uber's Senior Data Science Manager Franzisca Bell scheduled to speak, which is sure to be an interesting discussion of the organization's current thinking and the challenges that come with huge amounts of data at its disposal.

Who's it for? This is an event for machine learning practitioners and students. Level of expertise isn't strictly an issue - an inexperienced data analyst could get a lot from this. With some key figures from the tech industry speaking, there will certainly be something for those in leadership and managerial positions too.

AI Expo, Santa Clara

When and where? November 28-29, 2018, Santa Clara Convention Center, Santa Clara, CA.

What is it? Santa Clara's AI Expo is one of the biggest machine learning conferences. With four different streams - AI technologies, AI and the consumer, AI in the enterprise, and data analytics for AI and IoT - the event organizers are trying to make their coverage pretty comprehensive.

Who is speaking? The event's website boasts 75+ speakers. The most interesting include Elena Grewel, Airbnb's Head of Data Science; Matt Carroll, who leads developer relations at Google Assistant; and LinkedIn's Senior Director of Data Science, Xin Fu.

Who is it for? With so much on offer, this has wide appeal. From marketers to data analysts, there's likely to be something here. However, with so much going on, you do need to know what you want to get out of an event like this, so be clear on what AI means to you and what you want to learn.

Did we miss an important machine learning conference? Are you attending any of these this year? Let us know in the comments - we'd love to hear from you.

5 reasons why your business should adopt cloud computing

Vijin Boricha
11 Jun 2018
6 min read
Businesses are moving their focus to using existing technology to accomplish their 2018 business targets. Although cloud services have been around for a while, many organisations hesitated to make the move. But recent improvements in cost-effectiveness, portability, agility, and connectivity have grabbed the attention of organisations both new and established. So, if your organisation is looking for ways to achieve greater heights, and you are exploring healthy investments that benefit it, your first choice should be cloud computing, as the on-premises server system is fading away. You don't need much proof to agree that cloud computing is playing a vital role in changing the way businesses work today. Organisations have started looking at cloud options to widen their business reach (read: revenue, growth, sales) and to run more efficiently (read: cost savings, bottom line, ROI). There are three major cloud options that growing businesses can look at:

- Public cloud
- Private cloud
- Hybrid cloud

A Gartner report states that by the year 2020, big vendors will shift from cloud-first to cloud-only policies. If you are wondering what could fuel this predicted rise in cloud adoption, look no further. Below are some factors contributing to this trend of businesses adopting cloud computing.

Cloud offers increased flexibility

One of the most beneficial aspects of adopting cloud computing is its flexibility, no matter the size of the organisation or the location where your employees are based. Cloud computing comes with a wide range of options, from modifying storage space to supporting both in-office and remote employees. This makes it easy for businesses to scale server loads up and down, while giving employees the benefit of working from anywhere, at any time, with zero timezone restrictions. Cloud computing services, in a way, help businesses focus on revenue growth rather than spending time and resources on building hardware and software capabilities.

Cloud computing is cost-effective

Cloud-backed businesses definitely benefit on cost, as there is no need to maintain expensive in-house servers and other expensive devices, given that everything is handled on the cloud. If you want your business to grow, you just need to pay for storage space and the services you use. Cost transparency helps organisations plan their expenditure, and pay-per-use is one of the biggest advantages businesses can leverage. With cloud adoption, you eliminate spending on increased processing power, hard drive space, or building a large data center. With fewer hardware facilities to manage, you do not need a large IT team to handle them. Software licensing costs are eliminated as well, since the software is already hosted on the cloud and businesses can pay as they go.

Scalability is easier with cloud

The best part of cloud computing is its support for unpredicted requirements, which helps businesses scale or downsize resources quickly and efficiently. It's all about modifying your subscription plan, which allows you to upgrade your storage or bandwidth as your business needs change. This kind of scalability helps increase business performance and minimizes the risk of up-front investment in operational issues and maintenance.

Better availability means less downtime and better productivity

With cloud adoption, you need not worry about downtime: cloud services are reliable and maintain close to 100% uptime.
This means whatever you host on the cloud is available to your customers at any point. For every server breakdown, the cloud service providers make sure a backup server is in place, to avoid losing essential data. This can barely be achieved by traditional on-premises infrastructure, which is another reason businesses should switch to the cloud. All of the above also makes it easy to share files and documents with teammates, thanks to flexible accessibility. Teams can collaborate more effectively when they can access documents anytime and anywhere, which improves workflow and gives businesses a competitive edge. Being present in the office to complete tasks is no longer a requirement for productivity; a better work/life balance is an added side-effect of such an arrangement. In short, you need not worry about operational disasters, and the job gets done without anyone having to be physically present in the office.

Automated backups

One major problem with an on-premises data center is that everything depends on the functioning of your physical systems. If you lose a device, or some kind of disaster befalls your physical infrastructure, it may lead to data loss as well. This is never the case with the cloud, where you can access your files and documents from any device and any location, no matter which physical device you use. Organisations otherwise bear a massive expense for regular backups, whereas cloud computing comes with automatic backups and provides enterprise-grade functionality to businesses of all sizes.

If you're thinking about data security, cloud can be the safer option, as each of the cloud computing variants (private, public, and hybrid) has its own set of benefits. If you are not dealing with sensitive data, choosing public cloud would be the best option, whereas for sensitive data, businesses should opt for a private cloud, where they have total control of the security policies. Hybrid cloud, on the other hand, allows you to benefit from both worlds. So, if you are looking for scalable solutions along with a more controlled architecture for data security, a hybrid cloud architecture will blend well with your business needs, allowing you to pick and choose the public or private cloud services you require to fulfill your business requirements.

Migrating your business to the cloud definitely has more advantages than disadvantages. It helps increase organisational efficiency and fuels business growth. Cloud computing helps reduce time-to-market, facilitates product development, keeps employees happy, and builds a desired workflow. This, in the end, helps your organisation achieve greater success. It doesn't hurt that the money you save is then available to invest in areas that are in dire need of some cash inflow!

Read Next:

What Google, RedHat, Oracle, and others announced at KubeCon + CloudNativeCon 2018
Serverless computing wars: AWS Lambdas vs Azure Functions
How machine learning as a service is transforming cloud

Technical debt is damaging businesses

Richard Gall
11 Jun 2018
5 min read
A lot of things make working in tech difficult. Technical debt is one of them. Whether you're working in-house or for an external team, you've probably experienced some tough challenges when it comes to legacy software. Most people have encountered strange internal software systems, or a CMS that has been customized in a way that no one has the energy to fathom. Working your way around and through these can be a headache, to say the least.

In this year's Skill Up survey, we found that technical debt and legacy issues are seen by developers as the biggest barrier to business goals. According to 49% of respondents, old technology and software is stopping organizations from reaching their full potential. But it might also be stopping developers from moving forward in their careers. Read the report in full: sign up to our newsletter and download the PDF for free.

Technical debt and the rise of open source

Arguably, issues around technical debt have become more pronounced in the last decade as the pace of technical change has seemingly increased. I say seemingly, because it's not so much that we're living in an entirely new technical landscape; it's more that the horizons of that landscape are expanding. There are more possibilities and options open to businesses today. Technology leadership is difficult in 2018. To do it well, you need to stay on top of new technologies, but you also need a solid understanding of your internal systems and your team, as well as wider strategic initiatives and business goals. There are a lot of threads you need to manage.

Are technology leaders struggling with technical debt?

Perhaps technology leaders are struggling. But perhaps they're also making the best of difficult situations. When you're juggling multiple threads in the way I've described, you need to remain focused on what's important. Ultimately, that's delivering software that delivers value. True, your new mobile app might not be ideal; the internal CMS you were building for a client might not offer an exemplary user experience. But it still does the job - and that, surely, is the most important thing?

We can do better - let's solve technical debt together

It's important to be realistic. In the age of burnout and overwork, let's not beat ourselves up when things aren't quite what we want. Much of software engineering is, after all, making the best of a bad situation. But the solutions to technical debt can probably be found in a cultural shift. The lack of understanding of technology on the part of management is surely a large cause of technical debt. When projects aren't properly scoped, and when deadlines are set without a clear sense of the level of work required, that's when legacy issues begin to become a problem. In fact, it's worth looking at all the other barriers; in many ways, they are each a piece of the puzzle if we are to use technology more effectively - more imaginatively - to solve business problems. Take these three:

- Lack of quality training or learning
- Team resources
- Lack of investment in projects

All of these point to a wider cultural problem with the way software is viewed in businesses. There's no investment, teams are under-resourced, and support to learn and develop new skills is simply not being provided. With this lack of regard for software, it's unsurprising that developers are spending more time solving problems on, say, legacy code than solving big, interesting problems - ones that might actually have a big impact.
One way of solving technical debt, then, is to make a concerted effort to change the cultural mindset. Yes, some of this will need to come from senior management, but all software engineers need to take responsibility. This means better communication and collaboration, and a commitment to documentation - those things that are so easy to forget to do well when you could be shipping code.

What happens if we don't start solving technical debt?

Technical debt is like global warming: it's happening already, and we feel the effects every day. However, it's only going to get worse. Yes, it's going to damage businesses, but it's also going to hurt developers. It's restricting developers' scope to do the work they want to do and to make a significant impact on their businesses. It seems as though we're locked in a strange cycle where businesses talk about the importance of 'digital skills' and technical knowledge gaps, but ironically can't offer the resources or scope for talented developers to actually do their jobs properly. Developers bring skills, ideas, and creativity to jobs, only to find that there isn't really time to indulge that creativity. "Maybe next year, when we have more time" goes the common refrain. There's never going to be more time - that's obvious to anyone who's ever had a job, engineer or otherwise. So why not take steps to start solving technical debt now?

Read next

8 Reasons why architects love API driven architecture
Python, Tensorflow, Excel and more – Data professionals reveal their top tools
The best backend tools in web development