How-To Tutorials

How To Get Started with Redux in React Native

Emilio Rodriguez
04 Apr 2016
5 min read
In mobile development there is a need for architectural frameworks, but complex frameworks designed for web environments may end up damaging the development process, or even the performance of our app. Because of this, some time ago I decided to introduce into all of my React Native projects the leanest framework I have ever worked with: Redux. Redux is basically a state container for JavaScript apps. It is 100 percent library-agnostic, so you can use it with React, Backbone, or any other view library. Moreover, it is really small and has no dependencies, which makes it an awesome tool for React Native projects.

Step 1: Install Redux in your React Native project

Redux can be added as an npm dependency to your project. Just navigate to your project's main folder and type:

npm install --save react-redux

By the time this article was written, React Native still depended on React Redux 3.1.0, since later versions depended on React 0.14, which is not 100 percent compatible with React Native. Because of this, you will need to pin your project to version 3.1.0.

Step 2: Set up a Redux-friendly folder structure

Of course, setting up the folder structure for your project is totally up to every developer, but you need to take into account that you will be maintaining a number of actions, reducers, and components. Besides, it is also useful to keep a separate folder for your API and utility functions so these won't mix with your app's core functionality. With this in mind, this is my preferred folder structure under the src folder in any React Native project:

Step 3: Create your first action

In this article we will implement a simple login feature to illustrate how to integrate Redux into React Native. A good place to start this implementation is the action: a basic function called from the component whenever we want the whole state of the app to change (i.e.
changing from the logged-out state into the logged-in state). To keep this example as concise as possible, we won't make any API calls to a backend; only the pure Redux integration will be explained. Our action creator is a simple function returning an object (the action itself) with a type attribute expressing what happened in the app. No business logic should be placed here; our action creators should be plain and descriptive.

Step 4: Create your first reducer

Reducers are the ones in charge of updating the state of the app. Unlike Flux, Redux has only one store for the whole app, but it is conveniently namespaced automatically by Redux once the reducers have been applied. In our example, the user reducer needs to know when the user is logged in. Because of that, it imports the LOGIN_SUCCESS constant we defined in our actions and exports a default function, which will be called by Redux every time an action occurs in the app. Redux automatically passes in the current state of the app and the action that occurred. It is up to the reducer to decide whether it needs to modify the state based on action.type. That is why a reducer will almost always be a function containing a switch statement, which modifies and returns the state based on the action that occurred.

It is important to note that Redux compares object references to detect when the state has changed. Because of this, the state should be cloned before any modification. It is also worth knowing that the action passed to the reducers can contain attributes other than type. For example, in a more complex login, the user's first and last name can be added to the action by the action creator and used by the reducer to update the state of the app.

Step 5: Create your component

This step is almost pure React Native coding. We need a component to trigger the action and to respond to the change of state in the app.
In our case, it will be a simple View containing a button that disappears once the user is logged in. This is a normal React Native component, except for some pieces of Redux boilerplate: the three import lines at the top require everything we need from Redux. mapStateToProps and mapDispatchToProps are two functions bound to the component with connect: this lets Redux know that this component needs to be passed a piece of the state (everything under userReducers) and all the actions available in the app. Just by doing this, we have access to the login action (as used in onLoginButtonPress) and to the state of the app (as used in the !this.props.user.loggedIn check).

Step 6: Glue it all together from your index.ios.js

For Redux to apply its magic, some initialization should be done in the main file of your React Native project (index.ios.js). This is pure boilerplate and is only done once: Redux needs to inject a store holding the app state into the app. To do so, it requires a Provider wrapping the whole app. This store is basically a combination of reducers. For this article we only need one reducer, but a full app will include many, and each of them should be passed into the combineReducers function so that it is taken into account by Redux whenever an action is triggered.

About the author

Emilio Rodriguez started working as a software engineer for Sun Microsystems in 2006. Since then, he has focused his efforts on building a number of mobile apps with React Native while contributing to the React Native project. These contributions helped him understand how deep and powerful this framework is.
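To make Steps 3 and 4 concrete, here is a plain-JavaScript sketch of an action creator and reducer in the shape described above. The LOGIN_SUCCESS constant matches the article; the state shape, the username field, and the userReducer name are illustrative assumptions, not the article's exact code:

```javascript
// actions/user.js (sketch)
const LOGIN_SUCCESS = 'LOGIN_SUCCESS';

// Action creator: a plain, descriptive function returning an action object.
// No business logic lives here.
function login(username) {
  return { type: LOGIN_SUCCESS, username: username };
}

// reducers/user.js (sketch)
const initialState = { loggedIn: false, username: null };

// Reducer: switches on action.type and clones the state before modifying it,
// because Redux compares object references to detect changes.
function userReducer(state, action) {
  if (state === undefined) {
    state = initialState;
  }
  switch (action.type) {
    case LOGIN_SUCCESS:
      return Object.assign({}, state, {
        loggedIn: true,
        username: action.username
      });
    default:
      return state;
  }
}

// Quick check outside React Native: feed an action through the reducer.
const next = userReducer(undefined, login('emilio'));
console.log(next.loggedIn); // true
```

Note that the reducer never mutates the incoming state object; it returns a fresh one via Object.assign, which is what lets Redux detect the change by reference.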

Multithreading with Qt

Packt
16 Nov 2016
13 min read
Qt has its own cross-platform implementation of threading. In this article by Guillaume Lazar and Robin Penea, authors of the book Mastering Qt 5, we will study how to use Qt and the available tools provided by the Qt folks. (For more resources related to this topic, see here.) More specifically, we will cover the following:

Understanding the QThread framework in depth
The worker model and how you can offload a process from the main thread
An overview of all the available threading technologies in Qt

Discovering QThread

Qt provides a sophisticated threading system. We assume that you already know threading basics and the associated issues (deadlocks, thread synchronization, resource sharing, and so on), and we will focus on how Qt implements them. QThread is the central class of the Qt threading system. A QThread instance manages one thread of execution within the program. You can subclass QThread to override the run() function, which will be executed in the new thread. Here is how you can create and start a QThread:

QThread thread;
thread.start();

Calling the start() function will automatically call the run() function of thread and emit the started() signal. Only at this point will the new thread of execution be created. When run() is completed, thread will emit the finished() signal. This brings us to a fundamental aspect of QThread: it works seamlessly with the signal/slot mechanism. Qt is an event-driven framework, where a main event loop (or the GUI loop) processes events (user input, graphical events, and so on) to refresh the UI. Each QThread comes with its own event loop that can process events outside the main loop. If not overridden, run() calls the QThread::exec() function, which starts the thread's event loop.
You can also subclass QThread and call exec(), as follows:

class Thread : public QThread
{
    Q_OBJECT
protected:
    void run()
    {
        Object* myObject = new Object();
        connect(myObject, &Object::started, this, &Thread::doWork);
        exec();
    }

private slots:
    void doWork();
};

The started() signal will be processed by the Thread event loop only upon the exec() call. It will block and wait until QThread::exit() is called. A crucial thing to note is that a thread event loop delivers events for all QObject classes that are living in that thread. This includes all objects created in that thread or moved to that thread. This is referred to as the thread affinity of an object. Here's an example:

class Thread : public QThread
{
    Thread() :
        myObject(new QObject())
    {
    }
private:
    QObject* myObject;
};

// Somewhere in MainWindow
Thread thread;
thread.start();

In this snippet, myObject is constructed in the Thread constructor, which is created in turn in MainWindow. At this point, thread is living in the GUI thread. Hence, myObject is also living in the GUI thread. An object created before a QCoreApplication object has no thread affinity. As a consequence, no event will be dispatched to it. It is great to be able to handle signals and slots in our own QThread, but how can we control signals across multiple threads?
A classic example is a long-running process that is executed in a separate thread and has to notify the UI to update some state:

class Thread : public QThread
{
    Q_OBJECT
    void run()
    {
        // long running operation
        emit result("I <3 threads");
    }
signals:
    void result(QString data);
};

// Somewhere in MainWindow
Thread* thread = new Thread(this);
connect(thread, &Thread::result, this, &MainWindow::handleResult);
connect(thread, &Thread::finished, thread, &QObject::deleteLater);
thread->start();

Intuitively, we assume that the first connect function sends the signal across threads (to have a result available in MainWindow::handleResult), whereas the second connect function should work on thread's event loop only. Fortunately, this is the case, thanks to a default argument in the connect() function signature: the connection type. Let's see the complete signature:

QObject::connect(
    const QObject *sender, const char *signal,
    const QObject *receiver, const char *method,
    Qt::ConnectionType type = Qt::AutoConnection)

The type variable takes Qt::AutoConnection as its default value. Let's review the possible values of the Qt::ConnectionType enum, as the official Qt documentation states:

Qt::AutoConnection: If the receiver lives in the thread that emits the signal, Qt::DirectConnection is used. Otherwise, Qt::QueuedConnection is used. The connection type is determined when the signal is emitted.
Qt::DirectConnection: The slot is invoked immediately when the signal is emitted. The slot is executed in the signaling thread.
Qt::QueuedConnection: The slot is invoked when control returns to the event loop of the receiver's thread. The slot is executed in the receiver's thread.
Qt::BlockingQueuedConnection: This is the same as Qt::QueuedConnection, except that the signaling thread blocks until the slot returns. This connection must not be used if the receiver lives in the signaling thread, or else the application will deadlock.
Qt::UniqueConnection: This is a flag that can be combined with any of the preceding connection types using a bitwise OR. When Qt::UniqueConnection is set, QObject::connect() will fail if the connection already exists (that is, if the same signal is already connected to the same slot for the same pair of objects).

When using Qt::AutoConnection, the final ConnectionType is resolved only when the signal is effectively emitted. Look again at the first connect() of our example:

connect(thread, &Thread::result, this, &MainWindow::handleResult);

When the result() signal is emitted, Qt will look at handleResult()'s thread affinity, which is different from the thread affinity of the result() signal. The thread object is living in MainWindow (remember that it has been created in MainWindow), but the result() signal is emitted in the run() function, which is running in a different thread of execution. As a result, a Qt::QueuedConnection will be used. We will now take a look at the second connect():

connect(thread, &Thread::finished, thread, &QObject::deleteLater);

Here, deleteLater() and finished() live in the same thread; therefore, a Qt::DirectConnection will be used. It is crucial to understand that Qt does not care about the thread affinity of the emitting object; it looks only at the signal's "context of execution." Loaded with this knowledge, we can take another look at our first QThread example to gain a complete understanding of this system:

class Thread : public QThread
{
    Q_OBJECT
protected:
    void run()
    {
        Object* myObject = new Object();
        connect(myObject, &Object::started, this, &Thread::doWork);
        exec();
    }

private slots:
    void doWork();
};

When Object::started() is emitted, a Qt::QueuedConnection will be used. This is where your brain freezes. The Thread::doWork() function lives in another thread than Object::started(), which has been created in run().
If the Thread has been instantiated in the UI thread, this is where doWork() would have belonged. This system is powerful but complex. To make things simpler, Qt favors the worker model, which splits the threading plumbing from the real processing. Here is an example:

class Worker : public QObject
{
    Q_OBJECT
public slots:
    void doWork()
    {
        emit result("workers are the best");
    }
signals:
    void result(QString data);
};

// Somewhere in MainWindow
QThread* thread = new QThread(this);
Worker* worker = new Worker();
worker->moveToThread(thread);

connect(thread, &QThread::finished, worker, &QObject::deleteLater);
connect(this, &MainWindow::startWork, worker, &Worker::doWork);
connect(worker, &Worker::result, this, &MainWindow::handleResult);

thread->start();

// later on, to stop the thread
thread->quit();
thread->wait();

We start by creating a Worker class that has the following:

A doWork() slot that will have the content of our old QThread::run() function
A result() signal that will emit the resulting data

Next, in MainWindow, we create a simple thread and an instance of Worker. The worker->moveToThread(thread) call is where the magic happens: it changes the affinity of the worker object. The worker now lives in the thread object. You can only push an object from your current thread to another thread. Conversely, you cannot pull an object that lives in another thread: you cannot change the thread affinity of an object if the object does not live in your thread. Once thread->start() is executed, we cannot call worker->moveToThread(this) unless we are doing it from this new thread. After that, we use three connect() functions:

We handle the worker's life cycle by reaping it when the thread is finished. This signal will use a Qt::DirectConnection.
We start Worker::doWork() upon a possible UI event. This signal will use a Qt::QueuedConnection.
We process the resulting data in the UI thread with handleResult(). This signal will use a Qt::QueuedConnection.
To sum up, QThread can either be subclassed or used in conjunction with a worker class. Generally, the worker approach is favored because it separates the thread-affinity plumbing more cleanly from the actual operation you want to execute in parallel.

Flying over Qt multithreading technologies

Built upon QThread, several threading technologies are available in Qt. First, to synchronize threads, the usual approach is to use a mutual exclusion (mutex) for a given resource. Qt provides one by means of the QMutex class. Its usage is straightforward:

QMutex mutex;
int number = 1;

mutex.lock();
number *= 2;
mutex.unlock();

From the mutex.lock() instruction, any other thread trying to lock the mutex object will wait until mutex.unlock() has been called. The locking/unlocking mechanism is error-prone in complex code. You can easily forget to unlock a mutex in a specific exit condition, causing a deadlock. To simplify this situation, Qt provides the QMutexLocker class, which should be used wherever a QMutex needs to be locked:

QMutex mutex;
QMutexLocker locker(&mutex);

int number = 1;
number *= 2;
if (overlyComplicatedCondition) {
    return;
} else if (notSoSimple) {
    return;
}

The mutex is locked when the locker object is created, and it is unlocked when locker is destroyed, for example, when it goes out of scope. This is the case for every condition we stated where the return statement appears. It makes the code simpler and more readable.

If you need to create and destroy threads frequently, managing QThread instances by hand can become cumbersome. For this, you can use the QThreadPool class, which manages a pool of reusable QThreads. To execute code within threads managed by a QThreadPool, you use a pattern very close to the worker we covered earlier. The main difference is that the processing class has to extend QRunnable.
Here is how it looks:

class Job : public QRunnable
{
    void run()
    {
        // long running operation
    }
};

Job* job = new Job();
QThreadPool::globalInstance()->start(job);

Just override the run() function and ask QThreadPool to execute your job in a separate thread. The QThreadPool::globalInstance() function is a static helper that gives you access to an application-global instance. You can create your own QThreadPool if you need finer control over the pool's life cycle.

Note that QThreadPool::start() takes ownership of the job object and will automatically delete it when run() finishes. Watch out: this does not change the thread affinity the way QObject::moveToThread() does with workers! A QRunnable instance cannot be reused; it has to be a freshly baked instance.

If you fire up several jobs, QThreadPool automatically allocates the ideal number of threads based on the core count of your CPU. The maximum number of threads that the QThreadPool class can start can be retrieved with QThreadPool::maxThreadCount(). If you need to manage threads by hand but want to base the count on the number of cores of your CPU, you can use the handy static function QThread::idealThreadCount().

Another approach to multithreaded development is available with the Qt Concurrent framework. It is a higher-level API that avoids the use of mutexes/locks/wait conditions and promotes the distribution of the processing among CPU cores. Qt Concurrent relies on the QFuture class to execute a function and fetch its result later:

void longRunningFunction();
QFuture<void> future = QtConcurrent::run(longRunningFunction);

The longRunningFunction() will be executed in a separate thread obtained from the default QThreadPool.
To pass parameters to a QFuture and retrieve the result of the operation, use the following code:

QImage processGrayscale(QImage& image);
QImage lenna;

QFuture<QImage> future = QtConcurrent::run(processGrayscale, lenna);
QImage grayscaleLenna = future.result();

Here, we pass lenna as a parameter to the processGrayscale() function. Because we want a QImage as a result, we declare QFuture with the template type QImage. After that, future.result() blocks the current thread and waits for the operation to complete before returning the final QImage.

To avoid blocking, QFutureWatcher comes to the rescue:

QFutureWatcher<QImage> watcher;
connect(&watcher, &QFutureWatcher<QImage>::finished,
        this, &MainWindow::handleGrayscale);

QImage processGrayscale(QImage& image);
QImage lenna;

QFuture<QImage> future = QtConcurrent::run(processGrayscale, lenna);
watcher.setFuture(future);

We start by declaring a QFutureWatcher with a template argument matching the one used for QFuture. Then, simply connect the QFutureWatcher::finished signal to the slot you want called when the operation has completed. The last step is to tell the watcher to watch the future object with watcher.setFuture(future). This statement looks almost like it's coming from a science fiction movie.

Qt Concurrent also provides MapReduce and FilterReduce implementations. MapReduce is a programming model that basically does two things:

Map, or distribute, the processing of datasets among multiple cores of the CPU
Reduce, or aggregate, the results to provide them to the caller

This technique was first promoted by Google to be able to process huge datasets within a cluster of CPUs.
Here is an example of a simple Map operation:

QList<QImage> images = ...;
QImage processGrayscale(QImage& image);

QFuture<void> future = QtConcurrent::map(images, processGrayscale);

Instead of QtConcurrent::run(), we use the map() function, which takes a list and the function to apply to each element, each in a different thread. The images list is modified in place, so there is no need to declare QFuture with a template type. The operation can be made blocking by using QtConcurrent::blockingMap() instead of QtConcurrent::map().

Finally, a MapReduce operation looks like this:

QList<QImage> images = ...;
QImage processGrayscale(QImage& image);
void combineImage(QImage& finalImage, const QImage& inputImage);

QFuture<QImage> future = QtConcurrent::mappedReduced(
    images, processGrayscale, combineImage);

Here, we added a combineImage() that will be called for each result returned by the map function, processGrayscale(). It merges the intermediate data, inputImage, into finalImage. This function is called by only one thread at a time, so there is no need for a mutex to lock the result variable. FilterReduce follows exactly the same pattern; the filter function simply filters the input list instead of transforming it.

Summary

In this article, we discovered how QThread works, and you learned how to efficiently use the tools provided by Qt to create a powerful multithreaded application.

Resources for Article:

Further resources on this subject:
QT Style Sheets [article]
GUI Components in Qt 5 [article]
DOM and QTP [article]

How to Write Documentation with Jupyter

Marin Gilles
13 Nov 2015
5 min read
The Jupyter notebook is an interactive notebook allowing you to write documents with embedded code, and to execute this code on the fly. It was originally developed as part of the IPython project, and could only be used for Python code at that time. Nowadays, the Jupyter notebook integrates multiple languages, such as R, Julia and Haskell; in all, the notebook supports about 50 languages. One of the best features of the notebook is the ability to mix code and markdown with embedded HTML/CSS. This allows the easy creation of beautiful interactive documents, such as documentation examples. It can also help with the writing of full documentation through its export mechanisms to HTML, RST (reStructuredText), markdown and PDF.

Interactive documentation examples

When writing library documentation, a lot of time should be dedicated to writing examples for each part of the library. However, those examples are quite often static code, each part being explained through comments. One way to improve those examples is to use a Jupyter notebook, which can be downloaded and played with by anyone reading your library documentation. Solutions also exist to have the notebook running directly on your website, as seen on the Jupyter website, where you can try the notebooks. This will not be explained in this post, but the notebook was designed on a server-client pattern, which makes this easy to get running. Using the notebook's cells, you can separate each part of your code, or each example, and describe it nicely and properly outside the code cell, improving readability. From the Jupyter Python notebook example, we can see what the following code does, execute it, and even get graphics back directly in the notebook!
Here is an example of a Jupyter notebook, for the Python language, with matplotlib integration:

Even better, instead of just having your example code in a file, people downloading your notebook will directly get the full example documentation, giving them a huge advantage in understanding what the example is and what it does when they open the notebook again after six months. And they can just hit the run button and tinker with your example to understand all its details, without having to go back and forth between the website and their code editor, saving them time. They will love you for that!

Generate documentation from your notebooks

Developing mainly in Python, I am used to the Sphinx library as a documentation generator. It can export your documentation to HTML from RST files, and it scoops up your code library to generate documentation from docstrings, all with a single command, making it quite useful during the writing process. As Jupyter notebooks can be exported to RST, why not use this mechanism to create your RST files with Jupyter, then generate your full documentation with Sphinx? To convert a notebook manually, you can click on File -> Download As -> reST. You will be prompted to download the file. That's it! Your notebook has been exported. However, while this method is good for testing purposes, it will not do for automatic generation of documentation. To convert notebooks automatically, we are going to use a tool named nbconvert, which can do all the required conversions from the command line. To convert your notebook to RST, you just need to do the following:

$ jupyter nbconvert --to rst your_notebook.ipynb

or, to convert every notebook in the current folder:

$ jupyter nbconvert --to rst *.ipynb

These commands can easily be integrated into a Makefile for your documentation, making the process of converting your notebooks completely invisible.
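As a sketch of that kind of automation, a small shell script can loop over a notebooks folder and convert each file into an rst folder. The folder names and the DRY_RUN flag are illustrative assumptions, not from the post:

```shell
#!/bin/sh
# Sketch: convert every notebook in notebooks/ to RST files in rst/.
# Folder names are assumptions; adapt them to your project layout.
# Set DRY_RUN=1 to print the commands instead of running them.
set -eu

mkdir -p rst

for nb in notebooks/*.ipynb; do
    [ -e "$nb" ] || continue          # skip when no notebooks are present
    cmd="jupyter nbconvert --to rst --output-dir rst \"$nb\""
    if [ "${DRY_RUN:-0}" = "1" ]; then
        echo "$cmd"                   # show what would run
    else
        eval "$cmd"
    fi
done
```

Called from a Makefile target (or a docs build script), this keeps the conversion step out of sight, and the dry-run mode lets you check the generated commands before running them.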
If you want to keep your notebooks in a folder called notebooks and your generated files in a folder called rst, then, assuming you have the following directory tree:

Current working directory
|
|- rst/
|- notebooks/
   |- notebook1.ipynb
   |- notebook2.ipynb
   |- ...

you can run the following commands:

$ cd rst
$ jupyter nbconvert --to rst ../notebooks/*.ipynb

This will convert all the notebooks in notebooks and place them in the rst folder. A Python API is also available if you want to generate your documentation from Python (see the nbconvert documentation). Many more export options are available in the nbconvert documentation. You can create PDF, HTML or even slides, if you want to make a presentation based on a notebook, and you can even pull a presentation from a remote location.

Jupyter notebooks are very versatile documents, allowing interactive code exploration, export to a large number of formats, remote work, collaborative work and more. You can find more information on the official Jupyter website, where you will also be able to try it. I mainly focused this post on the Python language, in which IPython notebooks, the ancestor of Jupyter, were developed, but with the integration of more than 50 languages, it is a tool that every developer should be aware of, whether to create documentation, tutorials, or just to try code and keep notes in the same place.

Dive deeper into the Jupyter Notebook by navigating your way around the dashboard and interface in our article. Read now!

About the author

Marin Gilles is a PhD student in Physics in Dijon, France. A large part of his work is dedicated to physical simulations, for which he developed his own simulation framework using Python, and he has contributed to open-source libraries such as Matplotlib and IPython.

The CAP Theorem in practice: The consistency vs. availability trade-off in distributed databases

Richard Gall
12 Sep 2019
7 min read
When you choose a database you are making a design decision. One of the best frameworks for understanding what this means in practice is the CAP Theorem.

What is the CAP Theorem?

The CAP Theorem, developed by computer scientist Eric Brewer in the late nineties, states that databases can only ever fulfil two out of three elements:

Consistency - reads are always up to date, which means any client making a request to the database will get the same view of the data.
Availability - database requests always receive a response (when valid).
Partition tolerance - a network fault doesn't prevent messaging between nodes.

In the context of distributed (NoSQL) databases, this means there is always going to be a trade-off between consistency and availability. This is because distributed systems are necessarily partition tolerant (it simply wouldn't be a distributed database if it weren't).

Read next: Different types of NoSQL databases and when to use them

How do you use the CAP Theorem when making database decisions?

Although the CAP Theorem can feel quite abstract, it has practical, real-world consequences. From both a technical and a business perspective, the trade-offs will lead you to some very important questions. There are no right answers. Ultimately it will all be about the context in which your database is operating, the needs of the business, and the expectations and needs of users. You will have to consider things like:

Is it important to avoid throwing up errors in the client? Or are we willing to sacrifice the visible user experience to ensure consistency?
Is consistency actually an important part of the user's experience?
Or can we do what we want with a relational database and avoid the need for partition tolerance altogether?

As you can see, these are ultimately user experience questions.
To answer them properly, you need to be sensitive to the overall goals of the project and, as said above, the context in which your database solution is operating. (For example, is it powering an internal analytics dashboard? Or is it supporting a widely used external-facing website or application?) And, as the final bullet point highlights, it's always worth considering whether the consistency vs. availability trade-off should matter at all. Avoid the temptation to think a complex database solution will always be better when a simple, more traditional solution will do the job. Of course, it's important to note that a system that isn't partition tolerant has a single point of failure, which introduces the potential for unreliability.

Prioritizing consistency in a distributed database

It's possible to get into a lot of technical detail when talking about consistency and availability, but at a fundamental level the principle is straightforward: you need consistency (what is called a CP database) if the data in the database must always be up to date and aligned, even in the instance of a network failure (for example, when the partitioned nodes are unable to communicate with one another for whatever reason). Particular use cases where you would prioritize consistency are those where multiple clients must have the same view of the data. For example, when you're dealing with financial or personal information, a database that guarantees consistency gives you confidence that the data you are looking at is up to date, even in a situation where the network is unreliable or fails.
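The CP vs. AP distinction can be made concrete with a toy model: two replicas that either refuse writes during a partition (staying consistent) or accept them on one side (staying available but diverging). This is purely an illustration of the idea; no real distributed database is implemented this way, and all names below are invented:

```python
# Toy model of the consistency vs. availability trade-off during a
# network partition. Illustrative only; not how real databases work.

class Replica:
    def __init__(self):
        self.value = None

class TwoNodeStore:
    def __init__(self, mode):
        self.mode = mode            # "CP" or "AP"
        self.a = Replica()
        self.b = Replica()
        self.partitioned = False    # can the replicas talk to each other?

    def write(self, value):
        if self.partitioned:
            if self.mode == "CP":
                # CP choice: refuse the write rather than let replicas diverge.
                raise RuntimeError("unavailable during partition")
            # AP choice: stay available, accept the write on one side only.
            self.a.value = value
            return
        # No partition: replicate to both nodes.
        self.a.value = value
        self.b.value = value

    def read_from_each(self):
        return (self.a.value, self.b.value)

# CP store: errors during a partition, but never shows diverging data.
cp = TwoNodeStore("CP")
cp.write("v1")
cp.partitioned = True
try:
    cp.write("v2")
except RuntimeError:
    pass
print(cp.read_from_each())   # ('v1', 'v1') - consistent, write rejected

# AP store: always answers, but replicas disagree until the partition heals.
ap = TwoNodeStore("AP")
ap.write("v1")
ap.partitioned = True
ap.write("v2")
print(ap.read_from_each())   # ('v2', 'v1') - available but inconsistent
```

The client-facing error raised by the CP store is exactly the "throwing up errors in the client" question from the bullet list above: consistency is preserved at the cost of a visible failure.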
Examples of CP databases

MongoDB
Learning MongoDB 4 [Video]
MongoDB 4 Quick Start Guide
MongoDB, Express, Angular, and Node.js Fundamentals

Redis
Build Complex Express Sites with Redis and Socket.io [Video]
Learning Redis

HBase
Learn by Example: HBase - The Hadoop Database [Video]
HBase Design Patterns

Prioritizing availability in a distributed database

Availability is essential when data accumulation is a priority. Think here of things like behavioral data or user preferences. In scenarios like these, you will want to capture as much information as possible about what a user or customer is doing, but it isn't critical that the database is constantly up to date. It simply needs to be accessible and available even when network connections aren't working. The growing demand for offline application use is another reason why you might choose a NoSQL database that prioritizes availability over consistency.

Examples of AP databases

Cassandra
Learn Apache Cassandra in Just 2 Hours [Video]
Mastering Apache Cassandra 3.x - Third Edition

DynamoDB
Managed NoSQL Database In The Cloud - Amazon AWS DynamoDB [Video]
Hands-On Amazon DynamoDB for Developers [Video]

Limitations and criticisms of the CAP Theorem

It's worth noting that the CAP Theorem can pose problems. As with most things, in truth, matters are a little more complicated. Even Eric Brewer is circumspect about the theorem, especially regarding what we now expect from distributed databases. Back in 2012, twelve years after he first put his theorem into the world, he wrote:

"Although designers still need to choose between consistency and availability when partitions are present, there is an incredible range of flexibility for handling partitions and recovering from them. The modern CAP goal should be to maximize combinations of consistency and availability that make sense for the specific application.
Such an approach incorporates plans for operation during a partition and for recovery afterward, thus helping designers think about CAP beyond its historically perceived limitations.”

So, this means we must think about the trade-off between consistency and availability as a balancing act, rather than a binary design decision. Elsewhere, there have been more robust criticisms of CAP Theorem. Software engineer Martin Kleppmann, for example, pleaded “Please stop calling databases CP or AP” in 2015. In a blog post he argues that CAP Theorem only works if you adhere to specific definitions of consistency, availability, and partition tolerance. “If your use of words matches the precise definitions of the proof, then the CAP theorem applies to you,” he writes. “But if you’re using some other notion of consistency or availability, you can’t expect the CAP theorem to still apply.” The consequences of this are much like those described in Brewer’s piece from 2012. You need to take a nuanced approach to database trade-offs in which you think them through on your own terms and up against your own needs.

The PACELC Theorem

One of the developments of this line of argument is an extension to the CAP Theorem: the PACELC Theorem. This moves beyond thinking about consistency and availability and instead places an emphasis on the trade-off between consistency and latency. The PACELC Theorem builds on the CAP Theorem (the ‘PAC’) and adds an else (the ‘E’). What this means is that while you need to choose between availability and consistency if communication between partitions has failed in a distributed system, even if things are running properly and there are no network issues, there is still going to be a trade-off between consistency and latency (the ‘LC’).

Conclusion: Learn to align context with technical specs

Although the CAP Theorem might seem somewhat outdated, it is valuable in providing a way to think about database architecture design.
It not only forces engineers and architects to ask questions about what they want from the technologies they use, but it also forces them to think carefully about the requirements of a given project. What are the business goals? What are user expectations? The PACELC Theorem builds on CAP in an effective way. However, the most important thing about these frameworks is how they help you to think about your problems. Of course, the CAP Theorem has limitations. Because it abstracts a problem, it is necessarily going to lack nuance. There are going to be things it simplifies. It's important, as Kleppmann reminds us, to be mindful of these nuances. But at the same time, we shouldn't let an obsession with nuance and detail cause us to miss the bigger picture.
Using Handlebars with Express
Packt
10 Aug 2015
17 min read
In this article written by Paul Wellens, author of the book Practical Web Development, we cover a brief description of the following topics:

- Templates
- Node.js
- Express 4

Templates

Templates come in many shapes or forms. Traditionally, they are non-executable files with some pre-formatted text, used as the basis of a bazillion documents that can be generated with a computer program. I worked on a project where I had to write a program that would take a Microsoft Word template, containing parameters like $first, $name, $phone, and so on, and generate a specific Word document for every student in a school. Web templating does something very similar. It uses a templating processor that takes data from a source of information, typically a database, and a template, a generic HTML file with some kind of parameters inside. The processor then merges the data and template to generate a bunch of static web pages or dynamically generates HTML on the fly. If you have been using PHP to create dynamic webpages, you have already been busy with web templating. Why? Because you have been inserting PHP code inside HTML files in between the <?php and ?> strings. Your templating processor was the Apache web server, which has many additional roles. By the time your browser gets to see the result of your code, it is pure HTML. This makes this an example of server-side templating. You could also use Ajax and PHP to transfer data in the JSON format and then have the browser process that data using JavaScript to create the HTML you need. Combine this with using templates and you will have client-side templating.

Node.js

What Le Sacre du Printemps by Stravinsky did to the world of classical music, Node.js may have done to the world of web development. At its introduction, it shocked the world. By now, Node.js is considered by many as the coolest thing.
Just like Le Sacre is a totally different kind of music—but by now every child who has seen Fantasia has heard it—Node.js is a different way of doing web development. Rather than writing an application and using a web server to serve up your code to a browser, the application and the web server are one and the same. This may sound scary, but you should not worry, as there is an entire community that has developed modules you can obtain using the npm tool. Before showing you an example, I need to point out an extremely important feature of Node.js: the language in which you will write your web server and application is JavaScript. So Node.js gives you server side JavaScript.

Installing Node.js

How to install Node.js will be different depending on your OS, but the result is the same everywhere. It gives you two programs: node and npm.

npm

The node package manager (npm) is the tool that you use to look for and install modules. Each time you write code that needs a module, you will have to add a line like this:

var module = require('module');

The module will have to be installed first, or the code will fail. This is how it is done:

npm install module

or

npm -g install module

The latter will attempt to install the module globally, the former in the directory where the command is issued. It will typically install the module in a folder called node_modules.

node

The node program is the command to use to start your Node.js program, for example:

node myprogram.js

node will start and interpret your code. Type Ctrl-C to stop node. Now create a file myprogram.js containing the following text:

var http = require('http');

http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World\n');
}).listen(8080, 'localhost');

console.log('Server running at http://localhost:8080');

So, if you have installed Node.js, typing node myprogram.js in a terminal window will start up a web server.
And, when you type http://localhost:8080 in a browser, you will see the world famous two-word program example on your screen. This is the equivalent of getting the It works! message after testing your Apache web server installation. As a matter of fact, if you go to http://localhost:8080/it/does/not/matterwhat, the same will appear. Not very useful maybe, but it is a web server.

Serving up static content

This does not work in a way we are used to. URLs typically point to a file (or a folder, in which case the server looks for an index.html file), foo.html, or bar.php, and when present, it is served up to the client. So what if we want to do this with Node.js? We will need a module. Several exist to do the job. We will use node-static in our example. But first we need to install it:

npm install node-static

In our app, we will now create not only a web server, but a file server as well. It will serve all the files in the local directory public. It is good to have all the so-called static content together in a separate folder. These are basically all the files that will be served up to and interpreted by the client. As we will now end up with a mix of client code and server code, it is a good practice to separate them. When you use the Express framework, you have an option to have Express create these things for you. So, here is a second, more complete, Node.js example, including all its static content.

hello.js, our Node.js app:

var http = require('http');
var static = require('node-static');
var fileServer = new static.Server('./public');

http.createServer(function (req, res) {
  fileServer.serve(req, res);
}).listen(8080, 'localhost');

console.log('Server running at http://localhost:8080');

hello.html is stored in ./public:

<!DOCTYPE html>
<html>
<head>
  <meta charset="UTF-8" />
  <title>Hello world document</title>
  <link href="./styles/hello.css" rel="stylesheet">
</head>
<body>
  <h1>Hello, World</h1>
</body>
</html>

hello.css is stored in public/styles.
body {
  background-color: #FFDEAD;
}

h1 {
  color: teal;
  margin-left: 30px;
}

.bigbutton {
  height: 40px;
  color: white;
  background-color: teal;
  margin-left: 150px;
  margin-top: 50px;
  padding: 15px 15px 25px 15px;
  font-size: 18px;
}

So, if we now visit http://localhost:8080/hello, we will see our, by now too familiar, Hello World message with some basic styling, proving that our file server also delivered the CSS file. You can easily take it one step further and add JavaScript and the jQuery library, and put them in, for example, public/js/hello.js and public/js/jquery.js respectively.

Too many notes

With Node.js, you only install the modules that you need, so it does not include the kitchen sink by default! You will benefit from that as far as performance goes. Back in California, I was a proud product manager of a PC UNIX product, and one of our coolest value-adds was a tool, called kconfig, that would allow people to customize what would be inside the UNIX kernel, so that it would only contain what was needed. This is what Node.js reminds me of. And it is written in C, as was UNIX. Deja vu. However, if we wanted our Node.js web server to do everything the Apache Web Server does, we would need a lot of modules. Our application code needs to be added to that as well. That means a lot of modules. Like the critics in the movie Amadeus said: Too many notes.

Express 4

A good way to get the job done with fewer notes is by using the Express framework. On the expressjs.com website, it is called a minimal and flexible Node.js web application framework, providing a robust set of features for building web applications. This is a good way to describe what Express can do for you. It is minimal, so there is little overhead for the framework itself. It is flexible, so you can add just what you need. It gives a robust set of features, which means you do not have to create them yourselves, and they have been tested by an ever-growing community.
But we need to get back to templating, so all we are going to do here is explain how to get Express, and give one example.

Installing Express

As Express is also a node module, we install it as such. In your project directory for your application, type:

npm install express

You will notice that a folder called express has been created inside node_modules, and inside that one, there is another collection of node modules. These are examples of what is called middleware. In the code example that follows, we assume app.js as the name of our JavaScript file, and app for the variable that you will use in that file for your instance of Express. This is for the sake of brevity. It would be better to use a string that matches your project name. We will now use Express to rewrite the hello.js example. All static resources in the public directory can remain untouched. The only change is in the node app itself:

var express = require('express');
var path = require('path');
var app = express();

app.set('port', process.env.PORT || 3000);

var options = {
  dotfiles: 'ignore',
  extensions: ['htm', 'html'],
  index: false
};

app.use(express.static(path.join(__dirname, 'public'), options));

app.listen(app.get('port'), function () {
  console.log('Hello express started on http://localhost:' +
    app.get('port') + '; press Ctrl-C to terminate.');
});

This code uses so-called middleware (static) that is included with Express. There is a lot more available from third parties. Well, compared to our Node.js example, it is about the same number of lines. But it looks a lot cleaner and it does more for us. You no longer need to explicitly include the HTTP module and other such things.

Templating and Express

We need to get back to templating now. Imagine all the JavaScript ecosystem we just described. Yes, we could still put our client JavaScript code in between the <script> tags, but what about the server JavaScript code? There is no such thing as <?javascript ?> !
Node.js and Express support several templating languages that allow you to separate layout and content, and which have the template system do the work of fetching the content and injecting it into the HTML. The default templating processor for Express appears to be Jade, which uses its own, albeit more compact than HTML, language. Unfortunately, that would mean that you have to learn yet another syntax to produce something. We propose to use handlebars.js. There are two reasons why we have chosen handlebars.js:

- It uses HTML as the language
- It is available on both the client and server side

Getting the handlebars module for Express

Several Express modules exist for handlebars. We happen to like the one with the surprising name express-handlebars. So, we install it, as follows:

npm install express-handlebars

Layouts

I almost called this section templating without templates, as our first example will not use a parameter inside the templates. Most websites will consist of several pages, either static or dynamically generated ones. All these pages usually have common parts; a header and footer part, a navigation part or menu, and so on. This is the layout of our site. What distinguishes one page from another, usually, is some part in the body of the page, where the home page has different information than the other pages. With express-handlebars, you can separate layout and content. We will start with a very simple example. Inside your project folder that contains public, create a folder, views, with a subdirectory layouts. Inside the layouts subfolder, create a file called main.handlebars. This is your default layout. Building on top of the previous example, have it say:

<!doctype html>
<html>
<head>
  <title>Handlebars demo</title>
  <link href="./styles/hello.css" rel="stylesheet">
</head>
<body>
  {{{body}}}
</body>
</html>

Notice the {{{body}}} part. This token will be replaced by HTML. Handlebars escapes HTML.
If we want our HTML to stay intact, we use {{{ }}}, instead of {{ }}. Body is a reserved word for handlebars. Create, in the folder views, a file called hello.handlebars with the following content. This will be one (of many) examples of the HTML that {{{body}}} will be replaced by:

<h1>Hello, World</h1>

Let's create a few more: june.handlebars with:

<h1>Hello, June Lake</h1>

And bodie.handlebars containing:

<h1>Hello, Bodie</h1>

Our first handlebars example

Now, create a file, handlehello.js, in the project folder. For convenience, we will keep the relevant code of the previous Express example:

var express = require('express');
var path = require('path');
var app = express();
var exphbs = require('express-handlebars');

app.engine('handlebars', exphbs({defaultLayout: 'main'}));
app.set('view engine', 'handlebars');
app.set('port', process.env.PORT || 3000);

var options = {
  dotfiles: 'ignore',
  etag: false,
  extensions: ['htm', 'html'],
  index: false
};

app.use(express.static(path.join(__dirname, 'public'), options));

app.get('/', function(req, res) {
  res.render('hello');   // this is the important part
});

app.get('/bodie', function(req, res) {
  res.render('bodie');
});

app.get('/june', function(req, res) {
  res.render('june');
});

app.listen(app.get('port'), function () {
  console.log('Hello express started on http://localhost:' +
    app.get('port') + '; press Ctrl-C to terminate.');
});

Everything that worked before still works, but if you type http://localhost:3000/, you will see a page with the layout from main.handlebars and {{{body}}} replaced by, you guessed it, the same Hello World with basic markup that looks the same as our hello.html example. Let's look at the new code. First, of course, we need to add a require statement for our express-handlebars module, giving us an instance of express-handlebars. The next two lines specify what the view engine is for this app and what the extension is that is used for the templates and layouts.
We pass one option to express-handlebars, defaultLayout, setting the default layout to be main. This way, we could have different versions of our app with different layouts, for example, one using Bootstrap and another using Foundation. The res.render calls determine which views need to be rendered, so if you type http://localhost:3000/june, you will get Hello, June Lake, rather than Hello World. But this is not at all useful, as in this implementation, you still have a separate file for each Hello flavor. Let's create a true template instead.

Templates

In the views folder, create a file, town.handlebars, with the following content:

{{!-- Our first template with tokens --}}
<h1>Hello, {{town}}</h1>

Please note the comment line. This is the syntax for a handlebars comment. You could use HTML comments as well, of course, but the advantage of using handlebars comments is that they will not show up in the final output. Next, add this to your JavaScript file:

app.get('/lee', function(req, res) {
  res.render('town', { town: "Lee Vining" });
});

Now, we have a template that we can use over and over again with a different context, in this example, a different town name. All you have to do is pass a different second argument to the res.render call, and {{town}} in the template will be replaced by the value of town in the object. In general, what is passed as the second argument is referred to as the context.

Helpers

The token can also be replaced by the output of a function. After all, this is JavaScript. In the context of handlebars, we call those helpers. You can write your own, or use some of the cool built-in ones, such as #if and #each.

#if/else

Let us update town.handlebars as follows:

{{#if town}}
<h1>Hello, {{town}}</h1>
{{else}}
<h1>Hello, World</h1>
{{/if}}

This should be self-explanatory. If the variable town has a value, use it; if not, then show the world. Note that what comes after #if can only be something that is either true or false, zero or not.
The helper does not support a construct such as #if x < y.

#each

A very useful built-in helper is #each, which allows you to walk through an array of things and generate HTML accordingly. This is an example of the code that could be inside your app and the template you could use in your views folder:

app.js code snippet:

var californiapeople = {
  people: [
    {"name": "Adams", "first": "Ansel", "profession": "photographer", "born": "San Francisco"},
    {"name": "Muir", "first": "John", "profession": "naturalist", "born": "Scotland"},
    {"name": "Schwarzenegger", "first": "Arnold", "profession": "governator", "born": "Germany"},
    {"name": "Wellens", "first": "Paul", "profession": "author", "born": "Belgium"}
  ]
};

app.get('/californiapeople', function(req, res) {
  res.render('californiapeople', californiapeople);
});

template (californiapeople.handlebars):

<table class="cooltable">
{{#each people}}
  <tr><td>{{first}}</td><td>{{name}}</td><td>{{profession}}</td></tr>
{{/each}}
</table>

Now we are well on our way to doing some true templating. You can also write your own helpers, which is beyond the scope of an introductory article. However, before we leave you, there is one cool feature of handlebars you need to know about: partials.

Partials

In web development, where you dynamically generate HTML to be part of a web page, it is often the case that you repetitively need to do the same thing, albeit on a different page. There is a cool feature in express-handlebars that allows you to do that very same thing: partials. Partials are templates you can refer to inside a template, using a special syntax, drastically shortening your code that way. The partials are stored in a separate folder. By default, that would be views/partials, but you can even use subfolders. Let's redo the previous example but with a partial. So, our template is going to be extremely petite:

{{!-- people.handlebars inside views --}}
{{> peoplepartial }}

Notice the > sign; this is what indicates a partial.
Now, here is the familiar looking partial template:

{{!-- peoplepartial.handlebars inside views/partials --}}
<h1>Famous California people</h1>
<table>
{{#each people}}
  <tr><td>{{first}}</td><td>{{name}}</td><td>{{profession}}</td></tr>
{{/each}}
</table>

And, following is the JavaScript code that triggers it:

app.get('/people', function(req, res) {
  res.render('people', californiapeople);
});

So, we give it the same context, but the view that is rendered is ridiculously simplistic, as there is a partial underneath that will take care of everything. Of course, these were all examples to demonstrate how handlebars and Express can work together really well, nothing more than that.

Summary

In this article, we talked about using templates in web development. Then, we zoomed in on using Node.js and Express, and introduced Handlebars.js. Handlebars.js is cool, as it lets you separate logic from layout and you can use it server-side (which is what we focused on), as well as client-side. Moreover, you will still be able to use HTML for your views and layouts, unlike with other templating processors. For those of you new to Node.js, I compared it to what Le Sacre du Printemps was to music. To all of you, I recommend the recording by the Los Angeles Philharmonic and Esa-Pekka Salonen. I had season tickets for this guy and went to his inaugural concert with Mahler's third symphony. PHP had not been written yet, but this particular performance I had heard on the radio while on the road in California, and it was magnificent. Check it out. And, also check out Express and handlebars.

Resources for Article:

- Let's Build with AngularJS and Bootstrap
- The Bootstrap grid system
- MODx Web Development: Creating Lists
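The escaping behaviour described above ({{ }} escapes HTML, while {{{ }}} leaves it intact) can be sketched with a few lines of dependency-free JavaScript. This render function is purely illustrative, not the express-handlebars implementation:

```javascript
// A minimal, illustrative token replacer mimicking Handlebars' escaping rules:
// {{token}} escapes HTML special characters, {{{token}}} inserts raw HTML.
function escapeHtml(s) {
  return String(s).replace(/&/g, "&amp;").replace(/</g, "&lt;")
                  .replace(/>/g, "&gt;").replace(/"/g, "&quot;");
}

function render(template, context) {
  return template
    .replace(/\{\{\{(\w+)\}\}\}/g, (_, key) => String(context[key]))  // raw
    .replace(/\{\{(\w+)\}\}/g, (_, key) => escapeHtml(context[key])); // escaped
}

console.log(render("<h1>{{title}}</h1>", { title: "Guns & Roses" }));
// → "<h1>Guns &amp; Roses</h1>"
console.log(render("<div>{{{body}}}</div>", { body: "<p>raw</p>" }));
// → "<div><p>raw</p></div>"
```

Note that the triple-brace pass runs first, so a {{{token}}} is never picked up (and escaped) by the double-brace pass.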
A really basic guide to batch file programming
Richard Gall
31 May 2018
3 min read
Batch file programming is a way of making a computer do things simply by creating, yes, you guessed it, a batch file. It's a way of doing things you might ordinarily do in the command prompt, but automates some tasks, which means you don't have to write so much code. If it sounds straightforward, that's because it is, generally. Which is why it's worth learning...

Batch file programming is a good place to start learning how computers work

Of course, if you already know your way around batch files, I'm sure you'll agree it's a good way for someone relatively experienced in software to get to know their machine a little better. If you know someone that you think would get a lot from learning batch file programming, share this short guide with them!

Why would I write a batch script?

There are a number of reasons you might write batch scripts. It's particularly useful for resolving network issues, installing a number of programs on different machines, even organizing files and folders on your computer. Imagine you have a recurring issue - with a batch file you can solve it quickly and easily wherever you are without having to write copious lines of code in the command line. Or maybe your desktop simply looks like a mess; with a little knowledge of batch file programming you can clean things up without too much effort.

How to write a batch file

Clearly, batch file programming can make your life a lot easier. Let's take a look at the key steps to begin writing batch scripts.

Step 1: Open your text editor

Batch file programming is really about writing commands - so you'll need your text editor open to begin. Notepad, WordPad, it doesn't matter!

Step 2: Begin writing code

As we've already seen, batch file programming is really about writing commands for your computer. The code is essentially the same as what you would write in the command prompt.
Here are a few batch file commands you might want to know to get started:

- ipconfig - presents network information like your IP and MAC address.
- start "" [website] - opens a specified website in your browser.
- rem - used if you want to make a comment or remark in your code (i.e. for documentation purposes).
- pause - this, as you'd expect, pauses the script so it can be read before it continues.
- echo - displays text in the command prompt.
- %%a - refers to every file in a given folder (as a for-loop variable).
- if - a conditional command.

The list of batch file commands is pretty long. There are plenty of other resources with an exhaustive list of commands you can use, but a good place to begin is this page on Wikipedia.

Step 3: Save your batch file

Once you've written your commands in the text editor, you'll then need to save your document as a batch file. Title it, and suffix it with the .bat extension. You'll also need to make sure save as type is set to 'All files'. That's basically it when it comes to batch file programming. Of course, there are some complex things you can do, but once you know the basics, getting into the code is where you can start to experiment.

Read next: Jupyter and Python scripting; Python Scripting Essentials
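The steps above can be pulled together into a small example batch file. This is an illustrative sketch only (the file name cleanup.bat and the folder paths are hypothetical, not from the article):

```batch
@echo off
rem cleanup.bat - a minimal example batch file tidying up the desktop
echo Starting desktop cleanup...

rem Make a folder for stray text files, if it does not exist yet
if not exist "%USERPROFILE%\Desktop\TextFiles" mkdir "%USERPROFILE%\Desktop\TextFiles"

rem Move every .txt file on the desktop into that folder
for %%a in ("%USERPROFILE%\Desktop\*.txt") do move "%%a" "%USERPROFILE%\Desktop\TextFiles"

echo Done.
pause
```

Save it as cleanup.bat (save as type 'All files'), then double-click it or run it from the command prompt to execute.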
“Rust is the future of systems programming, C is the new Assembly”: Intel principal engineer, Josh Triplett
Bhagyashree R
27 Aug 2019
10 min read
At Open Source Technology Summit (OSTS) 2019, Josh Triplett, a Principal Engineer at Intel, gave an insight into what Intel is contributing to bring the most loved language, Rust, to full parity with C. In his talk titled Intel and Rust: the Future of Systems Programming, he also spoke about the history of systems programming, how C became the “default” systems programming language, what features of Rust give it an edge over C, and much more. Until now, OSTS was Intel's closed event where the company's business and tech leaders came together to discuss the various trends, technologies, and innovations that will help shape the open-source ecosystem. However, this year was different, as the company welcomed non-Intel attendees, including media, partners, and developers, for the first time. The event hosted keynotes, more than 50 technical sessions, panels, and demos covering all the open source technologies Intel is involved in. These include integrated software stacks (edge, AI, infrastructure), firmware, embedded and IoT projects, and cloud system software. This year the event took place from May 14-16 in Stevenson, Washington.

What is systems programming?

Systems programming is the development and management of software that serves as a platform for other software to be built upon. System software also directly or closely interfaces with computer hardware in order to gain necessary performance and expose abstractions. Unlike application programming, where software is created to provide services to the user, systems programming aims to produce software that provides services to the computer hardware. Triplett broadly defines systems programming as “anything that isn't an app.” It includes things like BIOS, firmware, boot loaders, operating system kernels, embedded and similar types of low-level code, and virtual machine implementations. Triplett also counts a web browser as system software, as it is more than “just an app”: browsers are actually “platforms for websites and web apps,” he says.
How C became the “default” systems programming language

Previously, most system software, including BIOS, boot loaders, and firmware, was written in Assembly. In the 1960s, experiments to bring hardware support into high-level languages started, which resulted in the creation of languages such as PL/S, BLISS, BCPL, and extended ALGOL. Then in the 1970s, Dennis Ritchie created the C programming language for the Unix operating system. Derived from the typeless B programming language, C was packed with powerful high-level functionality and detailed features that were best suited for writing an operating system. Several UNIX components, including its kernel, were eventually rewritten in C. Much other system software, including the Oracle database, a large portion of the Windows source code, and the Linux operating system, was written in C. C was seeing huge adoption at this point. But what exactly made developers comfortable moving to C?

Triplett believes that in order to make this move from one language to another, developers have to be comfortable in terms of two things: features and parity. First, the language should offer “sufficiently compelling” features. “It can’t just be a little bit better. It has to be substantially better to warrant the effort and engineering time needed to move,” he adds. As compared to Assembly, C had a lot to offer. It had some degree of type safety, provided portability, better productivity with high-level constructs, and much more readable code. Second, the language has to provide parity, which means developers had to be confident that it is no less capable than Assembly. He states, “It can’t just be better, it also has to be no worse.” In addition to being faster and able to express any type of data that Assembly could, it also had what Triplett calls an “escape hatch.” This means you are allowed to make the move incrementally and also combine it with Assembly if required. Triplett believes that C is now becoming what Assembly was years ago.
“C is the new Assembly,” he concludes. Developers are looking for a high-level language that not only addresses the problems in C that can’t be fixed, but also leverages other exciting features that these languages provide. A language that aims to be compelling enough to make developers move away from C should be memory safe, provide automatic memory management and security, and much more.

“Any language that wants to be better than C has to offer a lot more than just protection from buffer overflows if it's actually going to be a compelling alternative. People care about usability and productivity. They care about writing code that is self-explanatory, which accomplishes more work in less code. It also needs to address security issues. Usability and productivity go hand in hand with security. The less code you need to write to accomplish something, the less chance you have of introducing bugs, security bugs or otherwise,” he explains.

Comparing Rust with C

Back in 2006, Graydon Hoare, a Mozilla employee, started writing Rust as a personal project. In 2009, Mozilla started sponsoring the project and also expanded the team to drive further development of the language. One of the reasons why Mozilla got interested is that Firefox was written in more than 4 million lines of C++ code and had quite a few highly critical vulnerabilities. Rust was built with safety and concurrency in mind, making it the perfect choice for rewriting many components of Firefox under Project Quantum. Mozilla is also using Rust to develop Servo, an HTML rendering engine that will eventually replace Firefox’s rendering engine. Many other companies have also started using Rust for their projects, including Microsoft, Google, Facebook, Amazon, Dropbox, Fastly, Chef, Baidu, and more. Rust addresses the memory management problem in C. It offers automatic memory management so that developers do not have to manually call free on every object.
What sets it apart from other modern languages is that it does not have a garbage collector or runtime system of any kind. Rust instead has the concepts of ownership, borrowing, references, and lifetimes. “Rust has a system of declaring whether any given use of an object is the owner of that object or whether it's just borrowing that object temporarily. If you're just borrowing an object the compiler will keep track of that. It'll make sure that the original sticks around as long as you reference it. Rust makes sure that the owner of the object frees it when it's done and it inserts the call to free at compile time with no extra runtime overhead,” Triplett explains.

Not having a runtime is also a plus for Rust. Triplett believes that languages that have a runtime are difficult to use as systems programming languages. He adds, “You have to initialize that runtime before you can call any code, you have to use that runtime to call functions, and the runtime itself might run extra code behind your back at unexpected times.”

Rust also aims to provide safe concurrent programming. The same features that make it memory safe keep track of things like which thread owns which object, which objects can be passed between threads, and which objects require acquiring locks. These features make Rust compelling enough for developers to choose for systems programming. However, talking about the second criterion, Rust does not have enough parity with C yet. “Achieving parity with C is exactly what got me involved in Rust,” says Triplett.

Teaching Rust about C-compatible unions

Triplett's first contribution to the Rust programming language was in the form of RFC 1444, which was started in 2015 and accepted in 2016. This RFC proposed bringing native support for C-compatible unions to Rust, defined via a new "contextual keyword" union.
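A hedged sketch of what this feature enables (the type and field names here are illustrative):

```rust
// A C-compatible union as enabled by RFC 1444. #[repr(C)] gives it the
// same layout a C compiler would use, so it can mirror C ABI structures
// across an FFI boundary.
#[repr(C)]
union WordOrBytes {
    word: u32,
    bytes: [u8; 4],
}

fn main() {
    let u = WordOrBytes { word: 0x0102_0304 };
    // All fields share the same memory; the compiler cannot know which
    // field is currently valid, so reading one is unsafe.
    let first_byte = unsafe { u.bytes[0] };
    println!("first byte: {:#04x}", first_byte); // value depends on endianness
}
```

Unlike a Rust enum, a union carries no tag, which is exactly what makes it match the C memory layout that kernel interfaces expect.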
Triplett understood the need for this proposal when he wanted to build a virtual machine in Rust, and the Linux kernel interface for that, /dev/kvm, required unions. "I worked with the Rust community and with the language team to get unions into Rust and because of that work I'm actually now part of the Rust language governance team helping to evaluate and guide other changes into the language," he adds. He talked about this RFC in much detail at the very first RustConf in 2016: https://www.youtube.com/watch?v=U8Gl3RTXf88

Support for unnamed struct and union types

Another feature that Triplett worked on was support for unnamed struct and union types in Rust. This has been a widespread C compiler extension for decades and was also included in the C11 standard. It allows developers to group and lay out fields in arbitrary ways to match C data structures used in the Foreign Function Interface (FFI). With this proposal implemented, Rust will be able to represent such types using the same names as the structures, without interposing artificial field names that would confuse users of well-established interfaces from existing platforms.

Stabilized support for inline Assembly in Rust

Systems programming often involves low-level manipulations and requires low-level details of the processors, such as privileged instructions. For this, Rust supports inline Assembly via the ‘asm!’ macro. However, it is only present in the nightly compiler and not yet stabilized. Triplett, in collaboration with other Rust developers, is writing a proposal to introduce a more robust syntax for inline Assembly. To know more about support for inline Assembly, check out this pre-RFC.

BFLOAT16 support in Rust

Many Intel processors, including Xeon Scalable ‘Cooper Lake-SP’, now support BFLOAT16, a new floating-point format. This truncated 16-bit version of the 32-bit IEEE 754 single-precision floating-point format was mainly designed for deep learning.
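Because bfloat16 is just the upper half of an IEEE 754 single, conversion can be sketched in a few lines (this is an illustrative round-toward-zero truncation; hardware implementations typically round to nearest even):

```rust
// BFLOAT16 keeps the sign bit, the full 8-bit exponent, and the top 7
// bits of the f32 mantissa -- i.e. the upper 16 bits of an f32.
fn f32_to_bf16_bits(x: f32) -> u16 {
    (x.to_bits() >> 16) as u16
}

fn bf16_bits_to_f32(bits: u16) -> f32 {
    f32::from_bits((bits as u32) << 16)
}

fn main() {
    let x = 3.14159_f32;
    let roundtrip = bf16_bits_to_f32(f32_to_bf16_bits(x));
    // Precision drops to roughly 2-3 decimal digits, but the dynamic
    // range of f32 is preserved -- the property deep learning wants.
    println!("{} -> {}", x, roundtrip);
}
```

Keeping the full exponent is why conversion between bf16 and f32 is a cheap bit shift rather than the renormalization that IEEE 754 half-precision requires.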
This format is also used in machine learning libraries like TensorFlow that work with huge datasets. It also makes interoperating with existing systems, functions, and storage much easier. This is why Triplett is working on adding support for BFLOAT16 in Rust, so that developers will be able to use the full capabilities of their hardware.

FFI/C Parity Working Group

This was one of the important announcements that Triplett made. He is starting a working group that will focus on achieving full parity with C. Under this group, he aims to collaborate with both the Rust community and other Intel developers to develop the specifications for the remaining features that need to be implemented in Rust for systems programming. This group will also focus on bringing support for systems programming using the stable releases of Rust, not just experimental nightly releases of the compiler. In last week’s Reddit discussion, Triplett shared the current status of the working group: “To pre-answer one question: the FFI / C Parity working group is in the process of being launched, and hasn't quite kicked off yet. I'll be posting about it here and elsewhere when it is, along with the initial goals.”

Watch Josh Triplett’s full OSTS talk to know more about Intel’s contribution to Rust: https://www.youtube.com/watch?v=l9hM0h6IQDo

Update: We have made the following corrections based on feedback from Josh Triplett: This year OSTS was open to Intel's partners and press. Previously, the article read 'escape patch', but it is 'escape hatch.' RFC 1444 wasn't last year; it was started in 2015 and accepted in 2016. 'dev KVM' is now corrected to '/dev/kvm'.
Packt
04 Jun 2015
27 min read

Getting started with OpenGL ES 3.0 Using GLSL 3.0

In this article by Parminder Singh, author of OpenGL ES 3.0 Cookbook, we will program shaders in OpenGL ES shading language 3.0, load and compile a shader program, link a shader program, check errors in OpenGL ES 3.0, use the per-vertex attribute to send data to a shader, use uniform variables to send data to a shader, and program an OpenGL ES 3.0 Hello World Triangle. (For more resources related to this topic, see here.)

OpenGL ES 3.0 stands for Open Graphics Library for embedded systems version 3.0. It is a set of standard API specifications established by the Khronos Group. The Khronos Group is an association of members and organizations that are focused on producing open standards for royalty-free APIs. The OpenGL ES 3.0 specifications were publicly released in August 2012. These specifications are backward compatible with OpenGL ES 2.0, which is a well-known de facto standard for embedded systems to render 2D and 3D graphics. Embedded operating systems such as Android, iOS, BlackBerry, Bada, Windows, and many others support OpenGL ES.

OpenGL ES 3.0 is a programmable pipeline. A pipeline is a set of events that occur in a predefined fixed sequence, from the moment input data is given to the graphics engine to the moment output data is generated for rendering the frame. A frame refers to an image produced as an output on the screen by the graphics engine.

This article covers OpenGL ES 3.0 development using C/C++; you can refer to the book OpenGL ES 3.0 Cookbook for more information on building OpenGL ES 3.0 applications on the Android and iOS platforms. We will begin this article by understanding the basics of OpenGL ES 3.0 programming with the help of a simple example to render a triangle on the screen. You will learn how to set up and create your first application on both platforms step by step.

Understanding EGL: The OpenGL ES APIs require EGL as a prerequisite before they can effectively be used on hardware devices.
EGL provides an interface between the OpenGL ES APIs and the underlying native windowing system. Different OS vendors have their own ways to manage the creation of drawing surfaces, communication with hardware devices, and other configurations to manage the rendering context. EGL provides an abstraction over how the underlying system implements these, in a platform-independent way.

EGL provides two important things to the OpenGL ES APIs:

Rendering context: This stores the data structures and important OpenGL ES states that are essentially required for rendering purposes
Drawing surface: This provides the surface on which primitives are rendered

The following screenshot shows the OpenGL ES 3.0 programmable pipeline architecture.

EGL has the following responsibilities:

Checking the available configurations to create a rendering context for the device windowing system
Creating the OpenGL rendering surface for drawing
Providing compatibility and interfacing with other graphics APIs such as OpenVG, OpenAL, and so on
Managing resources such as texture maps

Programming shaders in OpenGL ES shading language 3.0

OpenGL ES shading language 3.0 (also called GLSL) is a C-like language that allows us to write shaders for programmable processors in the OpenGL ES processing pipeline. Shaders are small programs that run on the GPU in parallel. OpenGL ES 3.0 supports two types of shaders: the vertex shader and the fragment shader. Each shader has specific responsibilities. For example, the vertex shader is used to process geometric vertices, whereas the fragment shader processes pixel or fragment color information. More specifically, the vertex shader processes vertex information by applying 2D/3D transformations. The output of the vertex shader goes to the rasterizer, where the fragments are produced. The fragments are processed by the fragment shader, which is responsible for coloring them.
The order of execution of the shaders is fixed; the vertex shader is always executed first, followed by the fragment shader. Each shader can share its processed data with the next stage in the pipeline.

Getting ready

There are two types of processors in the OpenGL ES 3.0 processing pipeline to execute the vertex shader and fragment shader executables; these are called programmable processing units:

Vertex processor: The vertex processor is a programmable unit that operates on the incoming vertices and related data. It runs the vertex shader executable. The vertex shader needs to be programmed, compiled, and linked first in order to generate an executable, which can then be run on the vertex processor.

Fragment processor: The fragment processor uses the fragment shader executable to process fragment or pixel data. The fragment processor is responsible for calculating the colors of fragments. It cannot change the position of fragments, nor can it access neighboring fragments. However, it can discard pixels. The computed color values from this shader are used to update the framebuffer memory and texture memory.

How to do it...

Here are the sample codes for the vertex and fragment shaders. Program the following vertex shader and store it in the vertexShader character type array variable:

#version 300 es
in vec4 VertexPosition, VertexColor;
uniform float RadianAngle;
out vec4 TriangleColor;
mat2 rotation = mat2(cos(RadianAngle), sin(RadianAngle),
                     -sin(RadianAngle), cos(RadianAngle));
void main() {
  gl_Position = mat4(rotation)*VertexPosition;
  TriangleColor = VertexColor;
}

Program the following fragment shader and store it in another character array type variable called fragmentShader:

#version 300 es
precision mediump float;
in vec4 TriangleColor;
out vec4 FragColor;
void main() {
  FragColor = TriangleColor;
}

How it works...
Like most languages, a shader program starts execution from the main() function. In both shader programs, the first line, #version 300 es, specifies the GLES shading language version number, which is 3.0 in the present case.

The vertex shader receives a per-vertex input variable, VertexPosition. The data type of this variable is vec4, which is one of the built-in data types provided by the OpenGL ES Shading Language. The in keyword at the beginning of the variable specifies that it is an incoming variable that receives data from outside the scope of the current shader program. Similarly, the out keyword specifies that the variable is used to send data to the next stage of the shader pipeline. The color information is received in VertexColor and passed on to TriangleColor, which sends it to the fragment shader, the next stage of the processing pipeline.

The RadianAngle is a uniform variable that contains the rotation angle. This angle is used to calculate the rotation matrix that makes the rendered triangle revolve. The input values received by VertexPosition are multiplied by the rotation matrix, which rotates the geometry of our triangle. The result is assigned to gl_Position. The gl_Position is a built-in variable of the vertex shader. It is meant to hold the vertex position in homogeneous form. This value can be used by any of the fixed-functionality stages, such as primitive assembly, rasterization, culling, and so on.

In the fragment shader, the precision keyword specifies the default precision of all floating types (and aggregates, such as mat4 and vec4) to be mediump. The values of such declared types need to fall within the range specified by the declared precision. OpenGL ES Shading Language supports three precision qualifiers: lowp, mediump, and highp. Specifying the precision in the fragment shader is compulsory.
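To get an intuition for what mediump precision means numerically, we can mimic its rounding on the CPU (a sketch only; this is not how a GPU implements precision qualifiers, and the function name is illustrative):

```rust
// GLSL's mediump guarantees roughly 10 mantissa bits of relative
// precision, so we can approximate its effect by zeroing the low 13
// of an f32's 23 mantissa bits.
fn quantize_mediump(x: f32) -> f32 {
    f32::from_bits(x.to_bits() & !((1u32 << 13) - 1))
}

fn main() {
    let x = 1.2345678_f32;
    let q = quantize_mediump(x);
    // The relative error stays within about 2^-10 (~0.1%).
    println!("{} -> {} (rel err {:.5})", x, q, ((x - q) / x).abs());
}
```

This is why mediump is usually fine for color values (8 bits per channel) but can visibly distort positions or texture coordinates in large scenes.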
However, in the vertex shader, if the precision is not specified, it is considered to be the highest (highp). The FragColor is an out variable, which sends the calculated color values for each fragment to the next stage. It accepts the value in the RGBA color format.

There's more…

As mentioned, there are three precision qualifiers. As per the GLSL ES specification, their minimum guaranteed floating-point range and precision are as follows:

lowp: range (−2, 2), absolute precision 2^−8
mediump: range (−2^14, 2^14), relative precision 2^−10
highp: range (−2^126, 2^126), relative precision 2^−24

Loading and compiling a shader program

The shader program created needs to be loaded and compiled into a binary form. This article will be helpful in understanding the procedure of loading and compiling a shader program.

Getting ready

Compiling and linking a shader is necessary so that these programs are understandable and executable by the underlying graphics hardware/platform (that is, the vertex and fragment processors).

How to do it...

In order to load and compile the shader source, use the following steps: Create a NativeTemplate.h/NativeTemplate.cpp and define a function named loadAndCompileShader in it.
Use the following code, and proceed to the next step for detailed information about this function:

GLuint loadAndCompileShader(GLenum shaderType, const char* sourceCode) {
  GLuint shader = glCreateShader(shaderType); // Create the shader
  if ( shader ) {
     // Pass the shader source code
     glShaderSource(shader, 1, &sourceCode, NULL);
     glCompileShader(shader); // Compile the shader source code

     // Check the status of compilation
     GLint compiled = 0;
     glGetShaderiv(shader, GL_COMPILE_STATUS, &compiled);
     if (!compiled) {
       GLint infoLen = 0;
       glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &infoLen);
       if (infoLen) {
         char* buf = (char*) malloc(infoLen);
         if (buf) {
           glGetShaderInfoLog(shader, infoLen, NULL, buf);
           printf("Could not compile shader: %s", buf);
           free(buf);
         }
         glDeleteShader(shader); // Delete the shader object
         shader = 0;
       }
     }
  }
  return shader;
}

This function is responsible for loading and compiling a shader source. The argument shaderType accepts the type of shader that needs to be loaded and compiled; it can be GL_VERTEX_SHADER or GL_FRAGMENT_SHADER. The sourceCode specifies the source program of the corresponding shader.

Create an empty shader object using the glCreateShader OpenGL ES 3.0 API. This API returns a non-zero value if the object is successfully created. This value is used as a handle to reference this object. On failure, this function returns 0. The shaderType argument specifies the type of the shader to be created. It must be either GL_VERTEX_SHADER or GL_FRAGMENT_SHADER:

GLuint shader = glCreateShader(shaderType);

Unlike in C++, where object creation is transparent, in OpenGL ES, objects are created behind the scenes. You can access, use, and delete the objects as and when required. All the objects are identified by a unique identifier, which can be used for programming purposes.
The created empty shader object (shader) needs to be bound first with the shader source in order to compile it. This binding is performed using the glShaderSource API:

// Load the shader source code
glShaderSource(shader, 1, &sourceCode, NULL);

This API sets the shader code string in the shader object, shader. The source string is simply copied into the shader object; it is not parsed or scanned.

Compile the shader using the glCompileShader API. It accepts a shader object handle, shader:

glCompileShader(shader); // Compile the shader

The compilation status of the shader is stored as a state of the shader object. This state can be retrieved using the glGetShaderiv OpenGL ES API:

GLint compiled = 0; // Check compilation status
glGetShaderiv(shader, GL_COMPILE_STATUS, &compiled);

The glGetShaderiv API accepts the handle of the shader and GL_COMPILE_STATUS as an argument to check the status of the compilation. It retrieves the status in the compiled variable. The compiled variable is set to GL_TRUE if the last compilation was successful. Otherwise, it is set to GL_FALSE. Use glGetShaderInfoLog to get the error report. The shader is deleted if the shader source cannot be compiled. Delete the shader object using the glDeleteShader API. Return the shader object ID if the shader is compiled successfully:

return shader; // Return the shader object ID

How it works...

The loadAndCompileShader function first creates an empty shader object. This empty object is referenced by the shader variable. This object is bound with the source code of the corresponding shader. The source code is compiled through the shader object using the glCompileShader API. If the compilation is successful, the shader object handle is returned. Otherwise, the function returns 0 and the shader object needs to be deleted explicitly using glDeleteShader. The status of the compilation can be checked using glGetShaderiv with GL_COMPILE_STATUS.

There's more...
In order to differentiate among various versions of OpenGL ES and the GL shading language, it is useful to get this information from the current driver of your device. This helps make the program robust and manageable by avoiding errors caused by version upgrades, or by the application being installed on older versions of OpenGL ES and GLSL. Other vital information can also be queried from the current driver, such as the vendor, the renderer, and the available extensions supported by the device driver. This information can be queried using the glGetString API. This API accepts a symbolic constant and returns the queried system metric as a string. The printGLString wrapper function in our program helps in printing device metrics:

static void printGLString(const char *name, GLenum s) {
   printf("GL %s = %s\n", name, (const char *) glGetString(s));
}

Linking a shader program

Linking is a process of aggregating a set of shaders (vertex and fragment) into one program that maps to the entirety of the programmable phases of the OpenGL ES 3.0 graphics pipeline. The shaders are compiled using shader objects. These objects are then used to create a special object called a program object, which links them into the OpenGL ES 3.0 pipeline.

How to do it...

The following instructions provide a step-by-step procedure to link a shader: Create a new function, linkShader, in NativeTemplate.cpp. This will be the wrapper function to link a shader program to the OpenGL ES 3.0 pipeline. Follow these steps to understand this program in detail:

GLuint linkShader(GLuint vertShaderID, GLuint fragShaderID){
  if (!vertShaderID || !fragShaderID){ // Fails!
    return 0;
  }

  // Create an empty program object
  GLuint program = glCreateProgram();

  if (program) {
    // Attach vertex and fragment shaders to it
    glAttachShader(program, vertShaderID);
    glAttachShader(program, fragShaderID);

    // Link the program
    glLinkProgram(program);
    GLint linkStatus = GL_FALSE;
    glGetProgramiv(program, GL_LINK_STATUS, &linkStatus);

    if (linkStatus != GL_TRUE) {
      GLint bufLength = 0;
      glGetProgramiv(program, GL_INFO_LOG_LENGTH, &bufLength);
      if (bufLength) {
        char* buf = (char*) malloc(bufLength);
        if (buf) {
          glGetProgramInfoLog(program, bufLength, NULL, buf);
          printf("Could not link program:\n%s\n", buf);
          free(buf);
        }
      }
      glDeleteProgram(program);
      program = 0;
    }
  }
  return program;
}

Create a program object with glCreateProgram. This API creates an empty program object, to which the shader objects will be linked:

GLuint program = glCreateProgram(); // Create shader program

Attach shader objects to the program object using the glAttachShader API. It is necessary to attach the shaders to the program object in order to create the program executable:

glAttachShader(program, vertShaderID);
glAttachShader(program, fragShaderID);

How it works...

The linkShader wrapper function links the shader. It accepts two parameters: vertShaderID and fragShaderID. They are identifiers of the compiled shader objects. The glCreateProgram function creates a program object. It is another OpenGL ES object to which shader objects are attached using glAttachShader. The shader objects can be detached from the program object if they are no longer needed. The program object is responsible for creating the executable program that runs on the programmable processor. A program in OpenGL ES is an executable in the OpenGL ES 3.0 pipeline that runs on the vertex and fragment processors. The program object is linked using glLinkProgram. If the linking fails, the program object must be deleted using glDeleteProgram.
When a program object is deleted, it automatically detaches the shader objects associated with it. The shader objects themselves still need to be deleted explicitly. If a program object is requested for deletion, it will not actually be deleted while it is still in use by some other rendering context in the current OpenGL ES state. If the program object links successfully, one or more executables are created, depending on the number of shaders attached to the program. The executable can be used at runtime with the help of the glUseProgram API. It makes the executable a part of the current OpenGL ES state.

Checking errors in OpenGL ES 3.0

While programming, it is very common to get unexpected results or errors in the programmed source code. It's important to make sure that the program does not generate any errors, and to handle any errors gracefully. OpenGL ES 3.0 allows us to check for errors using a simple routine called glGetError. The following wrapper function prints all the error messages that occurred in the program:

static void checkGlError(const char* op) {
  for (GLint error = glGetError(); error; error = glGetError()) {
    printf("after %s() glError (0x%x)\n", op, error);
  }
}

Here are a few examples of code that produce OpenGL ES errors:

glEnable(GL_TRIANGLES); // Gives a GL_INVALID_ENUM error

// Gives a GL_INVALID_VALUE when attribID >= GL_MAX_VERTEX_ATTRIBS
glEnableVertexAttribArray(attribID);

How it works...

When OpenGL ES detects an error, it records the error in an error flag. Each error has a unique numeric code and symbolic name. For performance reasons, OpenGL ES does not report errors as they occur; checking after every operation would degrade rendering performance, so an error flag simply remains set until the glGetError routine is called. If no error is detected, this routine returns GL_NO_ERROR.
In a distributed environment, there may be several error flags; therefore, it is advisable to call the glGetError routine in a loop, as this routine can record multiple error flags.

Using the per-vertex attribute to send data to a shader

Per-vertex attributes in shader programming help the vertex shader receive data from the OpenGL ES program for each vertex. The received data value is not shared among the vertices. The vertex coordinates, normal coordinates, texture coordinates, color information, and so on are examples of per-vertex attributes. Per-vertex attributes are meant for vertex shaders only; they are not directly available to the fragment shader. Instead, they are shared via the vertex shader through out variables.

Typically, shaders are executed on the GPU, which allows parallel processing of several vertices at the same time using multicore processors. In order to process the vertex information in the vertex shader, we need some mechanism that sends the data residing on the client side (CPU) to the shader on the server side (GPU). This article will be helpful in understanding the use of per-vertex attributes to communicate with shaders.

Getting ready

The vertex shader contains two per-vertex attributes named VertexPosition and VertexColor:

// Incoming vertex info from program to vertex shader
in vec4 VertexPosition;
in vec4 VertexColor;

The VertexPosition contains the 3D coordinates of the triangle that defines the shape of the object that we intend to draw on the screen. The VertexColor contains the color information for each vertex of this geometry.

In the vertex shader, a non-negative attribute location ID uniquely identifies each vertex attribute. This attribute location is assigned at compile time if not specified in the vertex shader program. Basically, the logic of sending data to the shader is very simple. It's a two-step process:

Query attribute: Query the vertex attribute location ID from the shader.
Attach data to the attribute: Attach this ID to the data. This creates a bridge between the data and the per-vertex attribute specified by the ID. The OpenGL ES processing pipeline takes care of sending the data.

How to do it...

Follow this procedure to send data to a shader using per-vertex attributes:

Declare two global variables in NativeTemplate.cpp to store the queried attribute location IDs of VertexPosition and VertexColor:

GLuint positionAttribHandle;
GLuint colorAttribHandle;

Query the vertex attribute locations using the glGetAttribLocation API:

positionAttribHandle = glGetAttribLocation(programID, "VertexPosition");
colorAttribHandle    = glGetAttribLocation(programID, "VertexColor");

This API provides a convenient way to query an attribute location from a shader. The return value must be greater than or equal to 0 in order to ensure that an attribute with the given name exists.

Send the data to the shader using the glVertexAttribPointer OpenGL ES API:

// Send data to shader using queried attrib location
glVertexAttribPointer(positionAttribHandle, 2, GL_FLOAT, GL_FALSE, 0, gTriangleVertices);
glVertexAttribPointer(colorAttribHandle, 3, GL_FLOAT, GL_FALSE, 0, gTriangleColors);

The data associated with the geometry is passed in the form of an array using the generic vertex attribute with the help of the glVertexAttribPointer API. It's important to enable the attribute location; this allows us to access the data on the shader side. By default, vertex attributes are disabled. Similarly, an attribute can be disabled using glDisableVertexAttribArray. This API has the same syntax as glEnableVertexAttribArray.

Store the incoming per-vertex attribute color VertexColor into the outgoing attribute TriangleColor in order to send it to the next stage (the fragment shader):

in vec4 VertexColor; // Incoming data from CPU
out vec4 TriangleColor; // Outgoing to next stage
void main() {
. . .
  TriangleColor = VertexColor;
}

Receive the color information from the vertex shader and set the fragment color:

in vec4 TriangleColor; // Incoming from vertex shader
out vec4 FragColor; // The fragment color
void main() {
  FragColor = TriangleColor;
}

How it works...

The per-vertex attribute variables VertexPosition and VertexColor defined in the vertex shader are the lifelines of the vertex shader. These lifelines constantly provide data from the client side (the OpenGL ES program on the CPU) to the server side (the GPU). Each per-vertex attribute has a unique attribute location available in the shader that can be queried using glGetAttribLocation. The queried per-vertex attribute locations are stored in positionAttribHandle and colorAttribHandle; they must be bound to the data using glVertexAttribPointer. This API establishes a logical connection between the client and server side. Now, the data is ready to flow from our data structures to the shader.

The last important thing is enabling the attribute on the shader side. By default, all attributes are disabled; therefore, even if the data is supplied from the client side, it is not visible on the server side. The glEnableVertexAttribArray API allows us to enable the per-vertex attributes on the shader side.

Using uniform variables to send data to a shader

Uniform variables contain data values that are global. They are shared by all vertices and fragments in the vertex and fragment shaders. Generally, information that is not specific to a vertex is treated in the form of uniform variables. A uniform variable can exist in both the vertex and fragment shaders.

Getting ready

The vertex shader we programmed in the Programming shaders in OpenGL ES shading language 3.0 section contains a uniform variable, RadianAngle.
This variable is used to rotate the rendered triangle:

// Uniform variable for rotating triangle
uniform float RadianAngle;

This variable will be updated on the client side (CPU) and sent to the shader on the server side (GPU) using special OpenGL ES 3.0 APIs. As with per-vertex attributes, we need to query and bind uniform variables in order to make their data available in the shader.

How to do it...

Follow these steps to send data to a shader using uniform variables:

Declare a global variable in NativeTemplate.cpp to store the queried uniform location ID of RadianAngle:

GLuint radianAngle;

Query the uniform variable location using the glGetUniformLocation API:

radianAngle = glGetUniformLocation(programID, "RadianAngle");

Send the updated radian value to the shader using the glUniform1f API:

float degree = 0; // Global degree variable
float radian; // Global radian variable

// Update angle and convert it into radian
radian = degree++/57.2957795;

// Send updated data in the vertex shader uniform
glUniform1f(radianAngle, radian);

Use a general form of 2D rotation to apply to the entire set of incoming vertex coordinates:

. . . .
uniform float RadianAngle;
mat2 rotation = mat2(cos(RadianAngle), sin(RadianAngle),
                     -sin(RadianAngle), cos(RadianAngle));
void main() {
  gl_Position = mat4(rotation)*VertexPosition;
. . . . .
}

How it works...

The uniform variable RadianAngle defined in the vertex shader is used to apply the rotation transformation to the incoming per-vertex attribute VertexPosition. On the client side, this uniform variable is queried using glGetUniformLocation. This API returns the index of the uniform variable and stores it in radianAngle. This index is used to bind the updated data stored in radian using the glUniform1f OpenGL ES 3.0 API.
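The degree-to-radian conversion and the 2D rotation performed in the shader can be sanity-checked on the CPU side; here is a language-neutral sketch (written in Rust purely for illustration):

```rust
// The same math the vertex shader performs: rotate (x, y) by `radians`.
// GLSL's mat2(cos, sin, -sin, cos) is column-major, i.e. the matrix
// [ cos -sin ]
// [ sin  cos ]
fn rotate(x: f32, y: f32, radians: f32) -> (f32, f32) {
    let (s, c) = radians.sin_cos();
    (c * x - s * y, s * x + c * y)
}

fn main() {
    let radian = 90.0_f32 / 57.2957795; // same conversion the article uses
    let (x, y) = rotate(1.0, 0.0, radian);
    // A quarter turn maps (1, 0) to approximately (0, 1).
    println!("({:.3}, {:.3})", x, y);
}
```

Checking the math this way makes it easy to confirm the matrix ordering before debugging it inside a shader, where printing values is not an option.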
Finally, the updated data reaches the vertex shader executable, where the general form of the Euler rotation is calculated:

mat2 rotation = mat2(cos(RadianAngle), sin(RadianAngle),
                     -sin(RadianAngle), cos(RadianAngle));

The rotation transformation is calculated in the form of a 2 x 2 matrix, rotation, which is later promoted to a 4 x 4 matrix when multiplied by VertexPosition. The resultant vertices rotate the triangle in 2D space.

Programming OpenGL ES 3.0 Hello World Triangle

The NativeTemplate.h/cpp file contains the OpenGL ES 3.0 code, which demonstrates a rotating colored triangle. The output of this file is not an executable on its own. It needs a host application that provides the necessary OpenGL ES 3.0 prerequisites to render this program on a device screen:

Developing Android OpenGL ES 3.0 application
Developing iOS OpenGL ES 3.0 application

These will provide all the necessary prerequisites required to set up OpenGL ES, render, and query the necessary attributes from shaders to render our OpenGL ES 3.0 "Hello World Triangle" program. In this program, we will render a simple colored triangle on the screen.

Getting ready

OpenGL ES requires a physical size (in pixels) to define a 2D rendering surface called a viewport. This is used to define the OpenGL ES framebuffer size. A buffer in OpenGL ES is a 2D array in memory that represents pixels in the viewport region. OpenGL ES has three types of buffers: the color buffer, depth buffer, and stencil buffer. These buffers are collectively known as the framebuffer. All drawing commands affect the information in the framebuffer.

The life cycle of an OpenGL ES application is broadly divided into three states:

Initialization: Shaders are compiled and linked to create program objects
Resizing: This state defines the viewport size of the rendering surface
Rendering: This state uses the shader program object to render geometry on the screen

How to do it...
Follow these steps to program this:

Use the NativeTemplate.cpp file and create a createProgramExec function. This is a high-level function to load, compile, and link a shader program. This function will return the program object ID after successful execution:

GLuint createProgramExec(const char* VS, const char* FS) {
   GLuint vsID = loadAndCompileShader(GL_VERTEX_SHADER, VS);
   GLuint fsID = loadAndCompileShader(GL_FRAGMENT_SHADER, FS);
   return linkShader(vsID, fsID);
}

See the Loading and compiling a shader program and Linking a shader program recipes for more information on the workings of loadAndCompileShader and linkShader.

In NativeTemplate.cpp, create a function GraphicsInit and create the shader program object by calling createProgramExec:

GLuint programID; // Global shader program handler
bool GraphicsInit(){
   printOpenGLESInfo(); // Print GLES3.0 system metrics
   // Create program object and cache the ID
   programID = createProgramExec(vertexShader, fragmentShader);
   if (!programID) { // Failure !!!
      printf("Could not create program.");
      return false;
   }
   checkGlError("GraphicsInit"); // Check for errors
   return true;
}

Create a new function GraphicsResize. This will set the viewport region:

bool GraphicsResize( int width, int height ){
   glViewport(0, 0, width, height);
   return true;
}

The viewport determines the portion of the OpenGL ES surface window on which the rendering of the primitives will be performed. The viewport in OpenGL ES is set using the glViewport API.

Create the gTriangleVertices global variable that contains the vertices of the triangle:

GLfloat gTriangleVertices[] = {  0.0f,  0.5f,
                                -0.5f, -0.5f,
                                 0.5f, -0.5f };

Create the GraphicsRender renderer function. This function is responsible for rendering the scene. Add the following code to it and perform the following steps to understand this function:

bool GraphicsRender(){
   glClearColor(0.0f, 0.0f, 0.0f, 1.0f); // Set the clear color to black
   glClear( GL_COLOR_BUFFER_BIT );       // Which buffer to clear? – color buffer

   glUseProgram( programID );            // Use shader program and apply

   radian = degree++/57.2957795;         // Update angle and convert to radian
   // Query and send the uniform variable
   radianAngle = glGetUniformLocation(programID, "RadianAngle");
   glUniform1f(radianAngle, radian);

   // Query 'VertexPosition' and 'VertexColor' from the vertex shader
   positionAttribHandle = glGetAttribLocation(programID, "VertexPosition");
   colorAttribHandle    = glGetAttribLocation(programID, "VertexColor");

   // Send data to shader using queried attributes
   glVertexAttribPointer(positionAttribHandle, 2, GL_FLOAT, GL_FALSE, 0, gTriangleVertices);
   glVertexAttribPointer(colorAttribHandle, 3, GL_FLOAT, GL_FALSE, 0, gTriangleColors);

   glEnableVertexAttribArray(positionAttribHandle); // Enable vertex position attribute
   glEnableVertexAttribArray(colorAttribHandle);

   glDrawArrays(GL_TRIANGLES, 0, 3); // Draw 3 triangle vertices from 0th index
   return true;
}

Choose the appropriate buffers from the framebuffer (color, depth, and stencil) that we want to clear each time the frame is rendered using the glClear API. Here, we want to clear the color buffer. The glClear API can be used to select the buffers that need to be cleared; it accepts a bitwise OR mask that can combine any set of buffers. Note that glClearColor only sets the clear-color state, so it should be in effect before glClear uses it.

Query the VertexPosition generic vertex attribute location ID from the vertex shader into positionAttribHandle using glGetAttribLocation. This location will be used to send the triangle vertex data stored in gTriangleVertices to the shader using glVertexAttribPointer.
Follow the same instructions to get the handle of VertexColor into colorAttribHandle:

positionAttribHandle = glGetAttribLocation(programID, "VertexPosition");
colorAttribHandle    = glGetAttribLocation(programID, "VertexColor");
glVertexAttribPointer(positionAttribHandle, 2, GL_FLOAT, GL_FALSE, 0, gTriangleVertices);
glVertexAttribPointer(colorAttribHandle, 3, GL_FLOAT, GL_FALSE, 0, gTriangleColors);

Enable the generic vertex attribute location using positionAttribHandle before the rendering call and render the triangle geometry. Similarly, for the per-vertex color information, use colorAttribHandle:

glEnableVertexAttribArray(positionAttribHandle);
glEnableVertexAttribArray(colorAttribHandle);
glDrawArrays(GL_TRIANGLES, 0, 3);

How it works...

When the application starts, control begins in GraphicsInit, where the system metrics are printed out to make sure that the device supports OpenGL ES 3.0. The OpenGL ES programmable pipeline requires vertex shader and fragment shader program executables in the rendering pipeline. The program object contains one or more executables after attaching the compiled shader objects and linking them to the program. In the createProgramExec function, the vertex and fragment shaders are compiled and linked in order to generate the program object.

The GraphicsResize function generates the viewport of the given dimensions. This is used internally by OpenGL ES 3.0 to maintain the framebuffer. In our current application, it is used to manage the color buffer.

Finally, the rendering of the scene is performed by GraphicsRender. This function clears the color buffer with a black background and renders the triangle on the screen. It uses the shader program object and sets it as the current rendering state using the glUseProgram API. Each time a frame is rendered, data is sent from the client side (CPU) to the shader executable on the server side (GPU) using glVertexAttribPointer. This function uses the queried generic vertex attribute to bind the data with the OpenGL ES pipeline.
There's more...

There are other buffers also available in OpenGL ES 3.0:

Depth buffer: This is used to prevent background pixels from being rendered when a closer pixel is available. The rule that rejects such pixels can be controlled using the depth-test settings provided by OpenGL ES 3.0.
Stencil buffer: The stencil buffer stores per-pixel information and is used to limit the area of rendering.

The OpenGL ES API allows us to control each buffer separately. These buffers can be enabled and disabled as per the requirements of the rendering, and each (including the color buffer) can be cleared to preset values using OpenGL ES APIs such as glClearColor, glClearDepthf, and glClearStencil.

Summary

This article covered different aspects of OpenGL ES 3.0.

Resources for Article:

Further resources on this subject:
OpenGL 4.0: Using Uniform Blocks and Uniform Buffer Objects [article]
OpenGL 4.0: Building a C++ Shader Program Class [article]
Introduction to Modern OpenGL [article]
Sugandha Lahoti
04 Feb 2019
11 min read

How to recover deleted data from an Android device [Tutorial]

In this tutorial, we are going to learn about data recovery techniques that enable us to view data that has been deleted from a device. Deleted data could contain highly sensitive information, and thus data recovery is a crucial aspect of mobile forensics. This article will cover the following topics:

Data recovery overview
Recovering data deleted from an SD card
Recovering data deleted from a phone's internal storage

This article is taken from the book Learning Android Forensics by Oleg Skulkin, Donnie Tindall, and Rohit Tamma. This book is a comprehensive guide to Android forensics, from setting up the workstation to analyzing key artifacts.

Data recovery overview

Data recovery is a powerful concept within digital forensics. It is the process of retrieving deleted data from a device or an SD card when it cannot be accessed normally. Being able to recover data that has been deleted by a user could help solve civil or criminal cases. This is because many accused persons simply delete data from their device hoping that the evidence will be destroyed. Thus, in most criminal cases, deleted data could be crucial because it may contain information the user wanted to erase from their Android device. For example, consider the scenario where a mobile phone has been seized from a terrorist. Wouldn't it be of the greatest importance to know which items were deleted by them? Access to any deleted SMS messages, pictures, dialed numbers, and so on could be of critical importance as they may reveal a lot of sensitive information.

From a normal user's point of view, recovering data that has been deleted would usually mean referring to the operating system's built-in solutions, such as the Recycle Bin in Windows. While it's true that data can be recovered from these locations, due to an increase in user awareness, these options often don't work. For instance, on a desktop computer, people now use Shift + Del whenever they want to delete a file completely from their desktop.
Similarly, in mobile environments, users are aware of the restore operations provided by apps and so on. In spite of these situations, data recovery techniques allow a forensic investigator to access the data that has been deleted from the device. With respect to Android, it is possible to recover most of the deleted data, including SMS, pictures, application data, and so on. But it is important to seize the device in a proper manner and follow certain procedures, otherwise, data might be deleted permanently. To ensure that the deleted data is not lost forever, it is recommended to keep the following points in mind: Do not use the phone for any activity after seizing it. The deleted text message exists on the device until space is needed by some other incoming data, so the phone must not be used for any sort of activity to prevent the data from being overwritten. Even when the phone is not used, without any intervention from our end, data can be overwritten. For instance, an incoming SMS would automatically occupy the space, which overwrites the deleted data. Also, remote wipe commands can wipe the content present on the device. To prevent such events, you can consider the option of placing the device in a Faraday bag. Thus, care should be taken to prevent delivery of any new messages or data through any means of communication. How can deleted files be recovered? When a user deletes any data from a device, the data is not actually erased from the device and continues to exist on it. What gets deleted is the pointer to that data. All filesystems contain metadata, which maintains information about the hierarchy of files, filenames, and so on. Deletion will not really erase the data but instead removes the file system metadata. Thus, when text messages or any other files are deleted from a device, they are just made invisible to the user, but the files are still present on the device as long as they are not overwritten by some other data. 
Hence, there is the possibility of recovering them before new data is added and occupies the space. Deleting the pointer and marking the space as available is an extremely fast operation compared to actually erasing all the data from the device. Thus, to increase performance, operating systems just delete the metadata.

Recovering deleted data on an Android device involves three scenarios:

Recovering data that is deleted from the SD card, such as pictures, videos, and so on
Recovering data that is deleted from SQLite databases, such as SMS, chats, web history, and so on
Recovering data that is deleted from the device's internal storage

The following sections cover the techniques that can be used to recover deleted data from SD cards and the internal storage of the Android device.

Recovering deleted data from SD cards

Data present on an SD card can reveal lots of information that is useful during a forensic investigation. The fact that pictures, videos, voice recordings, and application data are stored on the SD card adds weight to this. As mentioned in the previous chapters, Android devices often use FAT32 or exFAT file systems on their SD card. The main reason for this is that these file systems are widely supported by most operating systems, including Windows, Linux, and macOS. The maximum file size on a FAT32-formatted drive is just under 4 GB. With increasingly high-resolution formats now available, this limit is commonly reached, which is why newer devices support exFAT, a file system that doesn't have this limitation.

Recovering the data deleted from an external SD card is pretty easy if it can be mounted as a drive. If the SD card is removable, it can be mounted as a drive by connecting it to a computer using a card reader. Any files can be transferred to the SD card while it's mounted. Some of the older devices that use USB mass storage also mount the device to a drive when connected through a USB cable.
In order to make sure that the original evidence is not modified, a physical image of the disk is taken and all further experimentation is done on the image itself. Similarly, in the case of SD card analysis, an image of the SD card needs to be taken. Once the imaging is done, we have a raw image file. In our example, we will use FTK Imager by AccessData, which is an imaging utility. In addition to creating disk images, it can also be used to explore the contents of a disk image. The following are the steps that can be followed to recover the contents of an SD card using this tool: Start FTK Imager and click on File and then Add Evidence Item... in the menu, as shown in the following screenshot: Adding evidence source to FTK Imager Select Image File in the Select Source dialog and click on Next. In the Select File dialog, browse to the location where you downloaded the sdcard.dd file, select it, and click on Finish, as shown in the following screenshot: Selecting the image file for analysis in FTK Imager FTK Imager's default display will appear with the contents of the SD card visible in the View pane at the lower right. You can also click on the Properties tab below the lower left pane to view the properties for the disk image. Now, on the left pane, the drive has opened. You can open folders by clicking on the + sign. When highlighting the folder, contents are shown on the right pane. When a file is selected, its contents can be seen on the bottom pane. As shown in the following screenshot, the deleted files will have a red X over the icon derived from their file extension: Deleted files shown with red X over the icons As shown in the following screenshot, to export the file, right-click on the file that contains the picture and select Export Files...: Sometimes, only a fragment of the file is recoverable, which cannot be read or viewed directly. In that case, we need to look through free or unallocated space for more data. 
Carving can be used to recover files from free and unallocated space. PhotoRec is one of the tools that can help you do that. You will learn more about file carving with PhotoRec in the following sections.

Recovering deleted data from internal memory

Recovering files deleted from Android's internal memory, such as app data and so on, is not as easy as recovering such data from SD cards and SQLite databases, but, of course, it's not impossible. Many commercial forensic tools are capable of recovering deleted data from Android devices, provided that a physical acquisition is possible and the user data partition isn't encrypted. But this is not very common for modern devices, especially those running the most recent versions of the operating system, such as Oreo and Pie.

Most Android devices, especially modern smartphones and tablets, use the EXT4 file system to organize data in their internal storage. This file system is very common for Linux-based devices. So, if we want to recover deleted data from the device's internal storage, we need a tool capable of recovering deleted files from the EXT4 file system. One such tool is extundelete. The tool is available for download here: http://extundelete.sourceforge.net/.

To recover the contents of an inode, extundelete searches a file system's journal for an old copy of that inode. Information contained in the inode helps the tool locate the file within the file system. To recover not only the file's contents but also its name, extundelete is able to search the deleted entries in a directory to match the inode number of a file to a file name. To use this tool, you will need a Linux workstation. Most forensic Linux distributions already include it.
For example, the following is a screenshot from SIFT Workstation, a popular digital forensics and incident response Linux distribution created by Rob Lee and his team from the SANS Institute (https://digital-forensics.sans.org/community/downloads):

extundelete command-line options

Before you can start the recovery process, you will need to mount a previously imaged userdata partition. In this example, we are going to use an Android device imaged via the chip-off technique. First of all, we need to determine the location of the userdata partition within the image. To do this, we can use mmls from the Sleuth Kit, as shown in the following screenshot:

Android device partitions

As you can see in the screenshot, the userdata partition is the last one and starts at sector 9199616. To make sure the userdata partition is EXT4 formatted, let's use fsstat, as shown in the following example:

A part of fsstat output

All you need now is to mount the userdata partition and run extundelete against it, as shown in the following example:

extundelete /userdata/partition/mount/point --restore-all

All recovered files will be saved to a subdirectory of the current directory named RECOVERED_FILES. If you are interested in recovering files before or after a specified date, you can use the --before and --after options. It's important to note that these dates must be in UNIX Epoch format. There are quite a lot of both online and offline tools capable of converting timestamps; for example, you can use https://www.epochconverter.com/.
As you can see, this method isn't very easy or fast, but there is a better way: using Autopsy, an open source digital forensic tool. In the following example, we used a built-in file extension filter to find all the images on the Android device, and found a lot of deleted artifacts:

Recovering deleted files from an EXT4 partition with Autopsy

Summary

Data recovery is the process of retrieving deleted data from the device and thus is a very important concept in forensics. In this article, we have seen various techniques to recover deleted data from both the SD card and the internal memory. While recovering data from a removable SD card is easy, recovering data from internal memory involves a few complications. SQLite file parsing and file carving techniques aid a forensic analyst in recovering the deleted items present in the internal memory of an Android device.

In order to understand the forensic perspective and the analysis of Android apps, read our book Learning Android Forensics.

What role does Linux play in securing Android devices?
How the Titan M chip will improve Android security
Getting your Android app ready for the Play Store [Tutorial]
Aaron Lazar
13 Aug 2018
28 min read

Building a Tic-tac-toe game in ASP.Net Core 2.0 [Tutorial]

Learning is more fun if we do it while making games. With this thought, let's continue our quest to learn .NET Core 2.0 by writing a Tic-tac-toe game in .NET Core 2.0. We will develop the game in the ASP.NET Core 2.0 web app, using SignalR Core. We will follow a step-by-step approach and use Visual Studio 2017 as the primary IDE, but will list the steps needed while using the Visual Studio Code editor as well. Let's do the project setup first and then we will dive into the coding. This tutorial has been extracted from the book .NET Core 2.0 By Example, by Rishabh Verma and Neha Shrivastava. Installing SignalR Core NuGet package Create a new ASP.NET Core 2.0 MVC app named TicTacToeGame. With this, we will have a basic working ASP.NET Core 2.0 MVC app in place. However, to leverage SignalR Core in our app, we need to install SignalR Core NuGet and the client packages. To install the SignalR Core NuGet package, we can perform one of the following two approaches in the Visual Studio IDE: In the context menu of the TicTacToeGame project, click on Manage NuGet Packages. It will open the NuGet Package Manager for the project. In the Browse section, search for the Microsoft.AspNetCore.SignalR package and click Install. This will install SignalR Core in the app. Please note that currently the package is in the preview stage and hence the pre-release checkbox has to be ticked: Edit the TicTacToeGame.csproj file, add the following code snippet in the ItemGroup code containing package references, and click Save. As soon as the file is saved, the tooling will take care of restoring the packages and in a while, the SignalR package will be installed. This approach can be used with Visual Studio Code as well. 
Although Visual Studio Code detects the unresolved dependencies and may prompt you to restore the package, it is recommended that immediately after editing and saving the file, you run the dotnet restore command in the terminal window at the location of the project: <ItemGroup> <PackageReference Include="Microsoft.AspNetCore.All" Version="2.0.0" /> <PackageReference Include="Microsoft.AspNetCore.SignalR" Version="1.0.0-alpha1-final" /> </ItemGroup> Now we have server-side packages installed. We still need to install the client-side package of SignalR, which is available through npm. To do so, we need to first ascertain whether we have npm installed on the machine or not. If not, we need to install it. npm is distributed with Node.js, so we need to download and install Node.js from https://nodejs.org/en/. The installation is quite straightforward. Once this installation is done, open a Command Prompt at the project location and run the following command: npm install @aspnet/signalr-client This will install the SignalR client package. Just go to the package location (npm creates a node_modules folder in the project directory). The relative path from the project directory would be \node_modules\@aspnet\signalr-client\dist\browser. From this location, copy the signalr-client-1.0.0-alpha1-final.js file into the wwwroot\js folder. In the current version, the name is signalr-client-1.0.0-alpha1-final.js. With this, we are done with the project setup and we are ready to use SignalR goodness as well. So let's dive into the coding. Coding the game In this section, we will implement our gaming solution. The end output will be the working two-player Tic-Tac-Toe game. We will do the coding in steps for ease of understanding:  In the Startup class, we modify the ConfigureServices method to add SignalR to the container, by writing the following code: //// Adds SignalR to the services container. 
services.AddSignalR();

In the Configure method of the same class, we configure the pipeline to use SignalR, and intercept and wire up any request containing gameHub to the SignalR hub that we will be creating:

//// Use - SignalR & let it know to intercept and map any request having gameHub.
app.UseSignalR(routes =>
{
    routes.MapHub<GameHub>("gameHub");
});

The following is the code for both methods, for the sake of clarity and completion. Other methods and properties are removed for brevity:

// This method gets called by the runtime. Use this method to add services to the container.
public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();

    //// Adds SignalR to the services container.
    services.AddSignalR();
}

// This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
        app.UseBrowserLink();
    }
    else
    {
        app.UseExceptionHandler("/Home/Error");
    }

    app.UseStaticFiles();

    app.UseMvc(routes =>
    {
        routes.MapRoute(
            name: "default",
            template: "{controller=Home}/{action=Index}/{id?}");
    });

    //// Use - SignalR & let it know to intercept and map any request having gameHub.
    app.UseSignalR(routes =>
    {
        routes.MapHub<GameHub>("gameHub");
    });
}

The previous two steps set up SignalR for us. Now, let's start with the coding of the player registration form. We want the player to be registered with a name and a display picture. Later, the server will also need to know whether the player is playing, waiting for a move, searching for an opponent, and so on. Let's create the Player model in the Models folder in the app. The code comments are self-explanatory:

/// <summary>
/// The player class. Each player of the Tic-Tac-Toe game would be an instance of this class.
/// </summary>
internal class Player
{
    /// <summary>
    /// Gets or sets the name of the player.
This would be set at the time user registers. /// </summary> public string Name { get; set; } /// <summary> /// Gets or sets the opponent player. The player against whom the player would be playing. /// This is determined/ set when the players click Find Opponent Button in the UI. /// </summary> public Player Opponent { get; set; } /// <summary> /// Gets or sets a value indicating whether the player is playing. /// This is set when the player starts a game. /// </summary> public bool IsPlaying { get; set; } /// <summary> /// Gets or sets a value indicating whether the player is waiting for opponent to make a move. /// </summary> public bool WaitingForMove { get; set; } /// <summary> /// Gets or sets a value indicating whether the player is searching for opponent. /// </summary> public bool IsSearchingOpponent { get; set; } /// <summary> /// Gets or sets the time when the player registered. /// </summary> public DateTime RegisterTime { get; set; } /// <summary> /// Gets or sets the image of the player. /// This would be set at the time of registration, if the user selects the image. /// </summary> public string Image { get; set; } /// <summary> /// Gets or sets the connection id of the player connection with the gameHub. /// </summary> public string ConnectionId { get; set; } } Now, we need to have a UI in place so that the player can fill in the form and register. We also need to show the image preview to the player when he/she browses the image. To do so, we will use the Index.cshtml view of the HomeController class that comes with the default MVC template. We will refer to the following two .js files in the _Layout.cshtml partial view so that they are available to all the views. Alternatively, you could add these in the Index.cshtml view as well, but its highly recommended that common scripts should be added in _Layout.cshtml. The version of the script file may be different in your case. These are the currently available latest versions. 
Although jQuery is not required to be the library of choice for us, we will use jQuery to keep the code clean, simple, and compact. With these references, we have jQuery and SignalR available to us on the client side: <script src="~/lib/jquery/dist/jquery.js"></script> <!-- jQuery--> <script src="~/js/signalr-client-1.0.0-alpha1-final.js"></script> <!-- SignalR--> After adding these references, create the simple HTML UI for the image preview and registration, as follows: <div id="divPreviewImage"> <!-- To display the browsed image--> <fieldset> <div class="form-group"> <div class="col-lg-2"> <image src="" id="previewImage" style="height:100px;width:100px;border:solid 2px dotted; float:left" /> </div> <div class="col-lg-10" id="divOpponentPlayer"> <!-- To display image of opponent player--> <image src="" id="opponentImage" style="height:100px;width:100px;border:solid 2px dotted; float:right;" /> </div> </div> </fieldset> </div> <div id="divRegister"> <!-- Our Registration form--> <fieldset> <legend>Register</legend> <div class="form-group"> <label for="name" class="col-lg-2 control- label">Name</label> <div class="col-lg-10"> <input type="text" class="form-control" id="name" placeholder="Name"> </div> </div> <div class="form-group"> <label for="image" class="col-lg-2 control- label">Avatar</label> <div class="col-lg-10"> <input type="file" class="form-control" id="image" /> </div> </div> <div class="form-group"> <div class="col-lg-10 col-lg-offset-2"> <button type="button" class="btn btn-primary" id="btnRegister">Register</button> </div> </div> </fieldset> </div> When the player registers by clicking the Register button, the player's details need to be sent to the server. 
To do this, we will write the JavaScript to send details to our gameHub: let hubUrl = '/gameHub'; let httpConnection = new signalR.HttpConnection(hubUrl); let hubConnection = new signalR.HubConnection(httpConnection); var playerName = ""; var playerImage = ""; var hash = "#"; hubConnection.start(); $("#btnRegister").click(function () { //// Fires on button click playerName = $('#name').val(); //// Sets the player name with the input name. playerImage = $('#previewImage').attr('src'); //// Sets the player image variable with specified image var data = playerName.concat(hash, playerImage); //// The registration data to be sent to server. hubConnection.invoke('RegisterPlayer', data); //// Invoke the "RegisterPlayer" method on gameHub. }); $("#image").change(function () { //// Fires when image is changed. readURL(this); //// HTML 5 way to read the image as data url. }); function readURL(input) { if (input.files && input.files[0]) { //// Go in only if image is specified. var reader = new FileReader(); reader.onload = imageIsLoaded; reader.readAsDataURL(input.files[0]); } } function imageIsLoaded(e) { if (e.target.result) { $('#previewImage').attr('src', e.target.result); //// Sets the image source for preview. $("#divPreviewImage").show(); } }; The player now has a UI to input the name and image, see the preview image, and click Register. On clicking the Register button, we are sending the concatenated name and image to the gameHub on the server through hubConnection.invoke('RegisterPlayer', data);  So, it's quite simple for the client to make a call to the server. Initialize the hubConnection by specifying hub name as we did in the first three lines of the preceding code snippet. Start the connection by hubConnection.start();, and then invoke the server hub method by calling the invoke method, specifying the hub method name and the parameter it expects. 
We have not yet created the hub, so let's create the GameHub class on the server: /// <summary> /// The Game Hub class derived from Hub /// </summary> public class GameHub : Hub { /// <summary> /// To keep the list of all the connected players registered with the game hub. We could have /// used normal list but used concurrent bag as its thread safe. /// </summary> private static readonly ConcurrentBag<Player> players = new ConcurrentBag<Player>(); /// <summary> /// Registers the player with name and image. /// </summary> /// <param name="nameAndImageData">The name and image data sent by the player.</param> public void RegisterPlayer(string nameAndImageData) { var splitData = nameAndImageData?.Split(new char[] { '#' }, StringSplitOptions.None); string name = splitData[0]; string image = splitData[1]; var player = players?.FirstOrDefault(x => x.ConnectionId == Context.ConnectionId); if (player == null) { player = new Player { ConnectionId = Context.ConnectionId, Name = name, IsPlaying = false, IsSearchingOpponent = false, RegisterTime = DateTime.UtcNow, Image = image }; if (!players.Any(j => j.Name == name)) { players.Add(player); } } this.OnRegisterationComplete(Context.ConnectionId); } /// <summary> /// Fires on completion of registration. /// </summary> /// <param name="connectionId">The connectionId of the player which registered</param> public void OnRegisterationComplete(string connectionId) { //// Notify this connection id that the registration is complete. this.Clients.Client(connectionId). InvokeAsync(Constants.RegistrationComplete); } } The code comments make it self-explanatory. The class should derive from the SignalR Hub class for it to be recognized as Hub. There are two methods of interest which can be overridden. Notice that both the methods follow the async pattern and hence return Task: Task OnConnectedAsync(): This method fires when a client/player connects to the hub. 
Task OnDisconnectedAsync(Exception exception): This method fires when a client/player disconnects or loses the connection. We will override this method to handle the scenario where the player disconnects.

There are also a few properties that the hub class exposes:

Context: This property is of type HubCallerContext and gives us access to the following:
Connection: Gives access to the current connection.
User: Gives access to the ClaimsPrincipal of the user who is currently connected.
ConnectionId: Gives the current connection ID string.
Clients: This property is of type IHubClients and gives us a way to communicate with all the clients via the client proxy.
Groups: This property is of type IGroupManager and provides a way to add and remove connections to groups asynchronously.

To keep things simple, we are not using a database to keep track of our registered players. Rather, we will use an in-memory collection. We could have used a normal list of players, such as List<Player>, but then we would have to handle thread safety ourselves with one of the synchronization primitives, such as lock or Monitor, so we go with ConcurrentBag<Player>, which is thread safe and reasonable for our game development. That explains the declaration of the players collection in the class. We will need to do some housekeeping to add players to this collection when they register and remove them when they disconnect. We saw in the previous step that the client invoked the RegisterPlayer method of the hub on the server, passing in the name and image data. So we defined a public method in our hub, named RegisterPlayer, accepting the name and image data as a single string concatenated with #. This is just one simple way of accepting client data for demonstration purposes; we can also use strongly typed parameters. In this method, we split the string on # and extract the name as the first part and the image as the second part.
We then check if a player with the current connection ID already exists in our players collection. If not, we create a Player object with default values and add it to the collection. We distinguish players by name for demonstration purposes; we could instead add an Id property to the Player class, which would allow different players to share the same name. After the registration is complete, the server needs to notify the player that registration is complete, so the player can then look for an opponent. To do so, we call the OnRegistrationComplete method, which invokes a method called registrationComplete on the client with the current connection ID. Let's understand the code that invokes the method on the client:

this.Clients.Client(connectionId).InvokeAsync(Constants.RegistrationComplete);

On the Clients property, we can choose a client with a specific connection ID (in this case, the current connection ID from the Context) and then call InvokeAsync to invoke a method on that client, specifying the method name and any parameters required. In the preceding call, the method name is registrationComplete, with no parameters. Now we know how to invoke a server method from the client and how to invoke a client method from the server, including how to select a specific client. We can invoke a client method from the server for all the clients, a group of clients, or a specific client, so the rest of the coding is just a repetition of these two concepts. Next, we need to implement the registrationComplete method on the client. On registration completion, the registration form should be hidden and the player should be able to find an opponent to play against. To do so, we will write JavaScript code to hide the registration form and show the UI for finding an opponent.
On clicking the Find Opponent button, we need the server to pair us with an opponent, so we invoke a hub method on the server to find one. The server can respond with two outcomes:

It finds an opponent player to play against. In this case, the game can start, so we need to simulate the coin toss, determine the player who makes the first move, and start the game. This would be a game board in the client user interface.
It doesn't find an opponent and asks the player to wait for another player to register and search for an opponent. This would be a "no opponent found" screen in the client.

In both cases, the server does some processing and invokes a method on the client. Since we need a number of different user interfaces for different scenarios, let's put the HTML markup inside div elements to make it easier to show and hide sections based on the server response. We will add the following code snippet to the body. The comments specify the purpose of each div element and the markup inside it:

<div id="divFindOpponentPlayer"> <!-- Section to display Find Opponent -->
    <fieldset>
        <legend>Find a player to play against!</legend>
        <div class="form-group">
            <input type="button" class="btn btn-primary" id="btnFindOpponentPlayer" value="Find Opponent Player" />
        </div>
    </fieldset>
</div>
<div id="divFindingOpponentPlayer"> <!-- Section to display opponent not found, wait -->
    <fieldset>
        <legend>Its lonely here!</legend>
        <div class="form-group">
            Looking for an opponent player. Waiting for someone to join!
        </div>
    </fieldset>
</div>
<div id="divGameInformation" class="form-group"> <!-- Section to display game information -->
    <div class="form-group" id="divGameInfo"></div>
    <div class="form-group" id="divInfo"></div>
</div>
<div id="divGame" style="clear:both"> <!-- Section where the game board would be displayed -->
    <fieldset>
        <legend>Game On</legend>
        <div id="divGameBoard" style="width:380px"></div>
    </fieldset>
</div>

The following client-side code takes care of Steps 7 and 8. Though the comments are self-explanatory, let's quickly review what is going on here. We handle the registrationComplete method and display the Find Opponent Player section. This section has a button to find an opponent player, called btnFindOpponentPlayer. We define the button's event handler to invoke the FindOpponent method on the hub. We will see the hub method implementation later, but we know that the hub method will either find an opponent or not, so we define the methods opponentFound and opponentNotFound, respectively, to handle these scenarios. In the opponentNotFound method, we just display a section saying that we do not have an opponent player yet. In the opponentFound method, we display the game section, game information section, and opponent display picture section, and draw the Tic-Tac-Toe game board as a 3×3 grid using CSS styling. All the other sections are hidden:

$("#btnFindOpponentPlayer").click(function () {
    hubConnection.invoke('FindOpponent');
});

hubConnection.on('registrationComplete', data => { //// Fires on registration complete. Invoked by server hub.
    $("#divRegister").hide(); //// Hide the registration div.
    $("#divFindOpponentPlayer").show(); //// Display the find opponent player div.
});

hubConnection.on('opponentNotFound', data => { //// Fires when no opponent is found.
    $('#divFindOpponentPlayer').hide(); //// Hide the find opponent player section.
    $('#divFindingOpponentPlayer').show(); //// Display the finding opponent player div.
});

hubConnection.on('opponentFound', (data, image) => { //// Fires when the opponent player is found.
    $('#divFindOpponentPlayer').hide();
    $('#divFindingOpponentPlayer').hide();
    $('#divGame').show(); //// Show game board section.
    $('#divGameInformation').show(); //// Show game information.
    $('#divOpponentPlayer').show(); //// Show opponent player image.
    opponentImage = image; //// Sets the opponent player image for display.
    $('#opponentImage').attr('src', opponentImage); //// Binds the opponent player image.
    $('#divGameInfo').html("<br/><span><strong> Hey " + playerName + "! You are playing against <i>" + data + "</i> </strong></span>"); //// Displays which opponent the player is playing against.
    //// Draw the Tic-Tac-Toe game board: a 3x3 grid, by proper styling. :)
    for (var i = 0; i < 9; i++) {
        $("#divGameBoard").append("<span class='marker' id=" + i + " style='display:block;border:2px solid black;height:100px;width:100px;float:left;margin:10px;'>" + i + "</span>");
    }
});

First, we need a Game object to track a game, the players involved, the moves left, and whether there is a winner. We will define the Game class as per the following code. The comments detail the purpose of the methods and properties defined:

internal class Game
{
    /// <summary>
    /// Gets or sets a value indicating whether the game is over.
    /// </summary>
    public bool IsOver { get; private set; }

    /// <summary>
    /// Gets or sets a value indicating whether the game is a draw.
    /// </summary>
    public bool IsDraw { get; private set; }

    /// <summary>
    /// Gets or sets Player 1 of the game.
    /// </summary>
    public Player Player1 { get; set; }

    /// <summary>
    /// Gets or sets Player 2 of the game.
    /// </summary>
    public Player Player2 { get; set; }

    /// <summary>
    /// For internal housekeeping: to keep track of the value in each box in the grid.
    /// </summary>
    private readonly int[] field = new int[9];

    /// <summary>
    /// The number of moves left. We start the game with 9 moves remaining in a 3x3 grid.
    /// </summary>
    private int movesLeft = 9;

    /// <summary>
    /// Initializes a new instance of the <see cref="Game"/> class.
    /// </summary>
    public Game()
    {
        //// Initialize the game.
        for (var i = 0; i < field.Length; i++)
        {
            field[i] = -1;
        }
    }

    /// <summary>
    /// Places the player number at a given position for a player.
    /// </summary>
    /// <param name="player">The player number; would be 0 or 1.</param>
    /// <param name="position">The position where the player number would be placed; should be between 0 and 8, both inclusive.</param>
    /// <returns>Boolean true if the game is over and we have a winner.</returns>
    public bool Play(int player, int position)
    {
        if (this.IsOver)
        {
            return false;
        }

        //// Place the player number at the given position.
        this.PlacePlayerNumber(player, position);

        //// Check if we have a winner. If this returns true,
        //// the game is over and has a winner, else the game continues.
        return this.CheckWinner();
    }
}

Now we have the entire game mystery solved with the Game class. We know when the game is over, we have the method to place the player marker, and we can check for the winner. The following server-side code on the GameHub handles Steps 7 and 8:

/// <summary>
/// The list of games going on.
/// </summary>
private static readonly ConcurrentBag<Game> games = new ConcurrentBag<Game>();

/// <summary>
/// To simulate the coin toss. Like heads and tails, 0 belongs to one player and 1 to the opponent.
/// </summary>
private static readonly Random toss = new Random();

/// <summary>
/// Finds the opponent for the player and sets the Searching for Opponent property of the player to true.
/// We will use the connection ID from the context to identify the current player.
/// Once we have 2 players looking to play, we can pair them and simulate the coin toss to start the game.
/// </summary>
public void FindOpponent()
{
    //// First fetch the player from our players collection having the current connection ID.
    var player = players.FirstOrDefault(x => x.ConnectionId == Context.ConnectionId);
    if (player == null)
    {
        //// Since the player would be registered before making this call,
        //// we should not reach here. If we are here, something somewhere in the flow above is broken.
        return;
    }

    //// Set that the player is searching for an opponent.
    player.IsSearchingOpponent = true;

    //// We will follow a queue, so find a player who registered earlier as the opponent.
    //// This would only be the case if more than 2 players are looking for an opponent.
    var opponent = players.Where(x => x.ConnectionId != Context.ConnectionId && x.IsSearchingOpponent && !x.IsPlaying).OrderBy(x => x.RegisterTime).FirstOrDefault();

    if (opponent == null)
    {
        //// Could not find any opponent; invoke the opponentNotFound method on the client.
        Clients.Client(Context.ConnectionId).InvokeAsync(Constants.OpponentNotFound);
        return;
    }

    //// Set both players as playing.
    player.IsPlaying = true;
    player.IsSearchingOpponent = false; //// Make him unsearchable for opponent search.
    opponent.IsPlaying = true;
    opponent.IsSearchingOpponent = false;

    //// Set each other as opponents.
    player.Opponent = opponent;
    opponent.Opponent = player;

    //// Notify both players that they can play by invoking the opponentFound method for both players.
    //// Also pass the opponent name and opponent image, so that they can visualize it.
    //// Here we are directly using the connection ID, but a group is a good candidate to use here.
    Clients.Client(Context.ConnectionId).InvokeAsync(Constants.OpponentFound, opponent.Name, opponent.Image);
    Clients.Client(opponent.ConnectionId).InvokeAsync(Constants.OpponentFound, player.Name, player.Image);

    //// Create a new game with these 2 players and add it to the games collection.
    games.Add(new Game { Player1 = player, Player2 = opponent });
}

Here, we have created a games collection to keep track of ongoing games and a Random field named toss to simulate the coin toss. How FindOpponent works is documented in the comments and is intuitive to understand. Once the game starts, each player has to make a move and then wait for the opponent to make a move, until the game ends. A move is made by clicking on an available grid cell. Here, we need to ensure that a cell position already marked by one of the players cannot be changed or marked again. So, as soon as a valid cell is marked, we set its CSS class to notAvailable so we know that the cell is taken. When clicking on a cell, we check whether the cell has the notAvailable style. If yes, it cannot be marked. If not, the cell can be marked and we then send the marked position to the server hub. We also see the waitingForMove, moveMade, gameOver, and opponentDisconnected events invoked by the server based on the game state. The code is commented and pretty straightforward. The moveMade method in the following code makes use of the MoveInformation class, which we will define on the server for sharing move information with both players:

//// Triggers on clicking the grid cell.
$(document).on('click', '.marker', function () {
    if ($(this).hasClass("notAvailable")) { //// Cell is already taken.
        return;
    }
    hubConnection.invoke('MakeAMove', $(this)[0].id); //// Cell is valid; send details to hub.
});

//// Fires when the player has to make a move.
hubConnection.on('waitingForMove', data => {
    $('#divInfo').html("<br/><span><strong> Your turn <i>" + playerName + "</i>! Make a winning move! </strong></span>");
});

//// Fires when a move is made by either player.
hubConnection.on('moveMade', data => {
    if (data.Image == playerImage) { //// Move made by the player.
        $("#" + data.ImagePosition).addClass("notAvailable");
        $("#" + data.ImagePosition).css('background-image', 'url(' + data.Image + ')');
        $('#divInfo').html("<br/><strong>Waiting for <i>" + data.OpponentName + "</i> to make a move. </strong>");
    } else {
        $("#" + data.ImagePosition).addClass("notAvailable");
        $("#" + data.ImagePosition).css('background-image', 'url(' + data.Image + ')');
        $('#divInfo').html("<br/><strong>Waiting for <i>" + data.OpponentName + "</i> to make a move. </strong>");
    }
});

//// Fires when the game ends.
hubConnection.on('gameOver', data => {
    $('#divGame').hide();
    $('#divInfo').html("<br/><span><strong>Hey " + playerName + "! " + data + " </strong></span>");
    $('#divGameBoard').html(" ");
    $('#divGameInfo').html(" ");
    $('#divOpponentPlayer').hide();
});

//// Fires when the opponent disconnects.
hubConnection.on('opponentDisconnected', data => {
    $("#divRegister").hide();
    $('#divGame').hide();
    $('#divGameInfo').html(" ");
    $('#divInfo').html("<br/><span><strong>Hey " + playerName + "! Your opponent disconnected or left the battle! You are the winner! Hip Hip Hurray!!!</strong></span>");
});

After every move, both players need to be updated by the server about the move made, so that both players' game boards stay in sync. So, on the server side we need an additional model called MoveInformation, which contains information on the latest move made by a player; the server sends this model to both clients to keep them in sync:

/// <summary>
/// While playing the game, players make moves. This class contains the information of those moves.
/// </summary>
internal class MoveInformation
{
    /// <summary>
    /// Gets or sets the opponent name.
    /// </summary>
    public string OpponentName { get; set; }

    /// <summary>
    /// Gets or sets the player who made the move.
    /// </summary>
    public string MoveMadeBy { get; set; }

    /// <summary>
    /// Gets or sets the image position. The position in the game board (0-8) where the player placed his
    /// image.
    /// </summary>
    public int ImagePosition { get; set; }

    /// <summary>
    /// Gets or sets the image. The image of the player that he placed on the board (0-8).
    /// </summary>
    public string Image { get; set; }
}

Finally, we wire up the remaining methods in the GameHub class to complete the game coding. The MakeAMove method is called every time a player makes a move. We have also overridden the OnDisconnectedAsync method to inform a player when their opponent disconnects. In this method, we also keep our players and games lists current. The comments in the code explain the workings of the methods:

/// <summary>
/// Invoked by the player to make a move on the board.
/// </summary>
/// <param name="position">The position to place the player.</param>
public void MakeAMove(int position)
{
    //// Let's find a game from our list of games where one of the players has the same connection ID as the current connection.
    var game = games?.FirstOrDefault(x => x.Player1.ConnectionId == Context.ConnectionId || x.Player2.ConnectionId == Context.ConnectionId);

    if (game == null || game.IsOver)
    {
        //// No such game exists!
        return;
    }

    //// Designate 0 for player 1.
    int symbol = 0;
    if (game.Player2.ConnectionId == Context.ConnectionId)
    {
        //// Designate 1 for player 2.
        symbol = 1;
    }

    var player = symbol == 0 ? game.Player1 : game.Player2;
    if (player.WaitingForMove)
    {
        return;
    }

    //// Update both players that the move is made.
    Clients.Client(game.Player1.ConnectionId).InvokeAsync(Constants.MoveMade, new MoveInformation { OpponentName = player.Name, ImagePosition = position, Image = player.Image });
    Clients.Client(game.Player2.ConnectionId).InvokeAsync(Constants.MoveMade, new MoveInformation { OpponentName = player.Name, ImagePosition = position, Image = player.Image });

    //// Place the symbol and look for a winner after every move.
    if (game.Play(symbol, position))
    {
        Remove<Game>(games, game);
        Clients.Client(game.Player1.ConnectionId).InvokeAsync(Constants.GameOver, $"The winner is {player.Name}");
        Clients.Client(game.Player2.ConnectionId).InvokeAsync(Constants.GameOver, $"The winner is {player.Name}");
        player.IsPlaying = false;
        player.Opponent.IsPlaying = false;
        this.Clients.Client(player.ConnectionId).InvokeAsync(Constants.RegistrationComplete);
        this.Clients.Client(player.Opponent.ConnectionId).InvokeAsync(Constants.RegistrationComplete);
    }

    //// If no one won and it's a tame draw, update the players that the game is over and let them look for a new game to play.
    if (game.IsOver && game.IsDraw)
    {
        Remove<Game>(games, game);
        Clients.Client(game.Player1.ConnectionId).InvokeAsync(Constants.GameOver, "Its a tame draw!!!");
        Clients.Client(game.Player2.ConnectionId).InvokeAsync(Constants.GameOver, "Its a tame draw!!!");
        player.IsPlaying = false;
        player.Opponent.IsPlaying = false;
        this.Clients.Client(player.ConnectionId).InvokeAsync(Constants.RegistrationComplete);
        this.Clients.Client(player.Opponent.ConnectionId).InvokeAsync(Constants.RegistrationComplete);
    }

    if (!game.IsOver)
    {
        player.WaitingForMove = !player.WaitingForMove;
        player.Opponent.WaitingForMove = !player.Opponent.WaitingForMove;
        Clients.Client(player.Opponent.ConnectionId).InvokeAsync(Constants.WaitingForOpponent, player.Opponent.Name);
        Clients.Client(player.ConnectionId).InvokeAsync(Constants.WaitingForOpponent, player.Opponent.Name);
    }
}

With this, we are done with the coding of the game and are ready to run the game app. So there you have it! You've just built your first game in .NET Core! The detailed source code can be downloaded from GitHub. If you're interested in learning more, head on over to get the book, .NET Core 2.0 By Example, by Rishabh Verma and Neha Shrivastava.
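The Play method shown earlier delegates to PlacePlayerNumber and CheckWinner, whose bodies are not included in this excerpt. The usual 3×3 check walks the eight possible winning lines: three rows, three columns, and two diagonals. Here is a sketch of that idea in JavaScript with hypothetical names; it is not the book's C# implementation, just the logic such a check relies on:

```javascript
// field holds -1 for an empty cell, or 0/1 for the player who marked it,
// mirroring the int[9] field in the Game class.
const WINNING_LINES = [
  [0, 1, 2], [3, 4, 5], [6, 7, 8], // rows
  [0, 3, 6], [1, 4, 7], [2, 5, 8], // columns
  [0, 4, 8], [2, 4, 6],            // diagonals
];

// Returns 0 or 1 if that player owns a full line, otherwise null.
function checkWinner(field) {
  for (const [a, b, c] of WINNING_LINES) {
    if (field[a] !== -1 && field[a] === field[b] && field[b] === field[c]) {
      return field[a];
    }
  }
  return null;
}

// Player 0 has marked the top row: positions 0, 1, and 2.
const board = [0, 0, 0, 1, 1, -1, -1, -1, -1];
console.log(checkWinner(board)); // 0
```

A draw is then simply "no winner and no moves left", which is what the movesLeft counter in the Game class is there to track.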
Packt
12 Aug 2015
7 min read

Creating test suites, specs and expectations in Jest

In this article by Artemij Fedosejev, the author of React.js Essentials, we will take a look at test suites, specs, and expectations. To write a test for JavaScript functions, you need a testing framework. Fortunately, Facebook built their own unit test framework for JavaScript called Jest. It is built on top of Jasmine, another well-known JavaScript test framework. If you're familiar with Jasmine you'll find Jest's approach to testing very similar. However, I'll make no assumptions about your prior experience with testing frameworks and will discuss the basics first. The fundamental idea of unit testing is that you test only one piece of functionality in your application, usually implemented by one function, and you test it in isolation, meaning that all other parts of your application that the function depends on are not used by your tests; instead, they are imitated by your tests. To imitate a JavaScript object is to create a fake one that simulates the behavior of the real object. In unit testing the fake object is called a mock and the process of creating it is called mocking. Jest automatically mocks dependencies when you're running your tests. Better yet, it automatically finds tests to execute in your repository. Let's take a look at an example. Create a directory called ./snapterest/source/js/utils/ and create a new file called TweetUtils.js within it, with the following contents:

function getListOfTweetIds(tweets) {
  return Object.keys(tweets);
}

module.exports.getListOfTweetIds = getListOfTweetIds;

The TweetUtils.js file is a module with the getListOfTweetIds() utility function for our application to use. Given an object with tweets, getListOfTweetIds() returns an array of tweet IDs. Using the CommonJS module pattern we export this function:

module.exports.getListOfTweetIds = getListOfTweetIds;

Jest Unit Testing

Now let's write our first unit test with Jest. We'll be testing our getListOfTweetIds() function.
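Before wiring up Jest, you can sanity-check what the utility returns with plain Node, no test framework involved (the function is re-declared inline here so the snippet is self-contained):

```javascript
// Same body as in TweetUtils.js: the keys of the tweets object are the IDs.
function getListOfTweetIds(tweets) {
  return Object.keys(tweets);
}

// Keys are tweet IDs; the values don't matter to this function.
const tweets = {
  tweet1: { text: 'hello' },
  tweet2: { text: 'world' },
};

console.log(getListOfTweetIds(tweets)); // [ 'tweet1', 'tweet2' ]
```

This is exactly the behavior the test we are about to write will assert.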
Create a new directory: ./snapterest/source/js/utils/__tests__/. Jest will run any tests in any __tests__ directories that it finds within your project structure, so it's important to name your test directories __tests__. Create a TweetUtils-test.js file inside of __tests__:

jest.dontMock('../TweetUtils');

describe('Tweet utilities module', function () {
  it('returns an array of tweet ids', function () {
    var TweetUtils = require('../TweetUtils');
    var tweetsMock = {
      tweet1: {},
      tweet2: {},
      tweet3: {}
    };
    var expectedListOfTweetIds = ['tweet1', 'tweet2', 'tweet3'];
    var actualListOfTweetIds = TweetUtils.getListOfTweetIds(tweetsMock);
    expect(actualListOfTweetIds).toEqual(expectedListOfTweetIds);
  });
});

First we tell Jest not to mock our TweetUtils module:

jest.dontMock('../TweetUtils');

We do this because Jest will automatically mock modules returned by the require() function. In our test we're requiring the TweetUtils module:

var TweetUtils = require('../TweetUtils');

Without the jest.dontMock('../TweetUtils') call, Jest would return an imitation of our TweetUtils module instead of the real one. But in this case we actually need the real TweetUtils module, because that's what we're testing.

Creating test suites

Next we call a global Jest function, describe(). In our TweetUtils-test.js file we're not just creating a single test; instead, we're creating a suite of tests. A suite is a collection of tests that collectively test a bigger unit of functionality. For example, a suite can have multiple tests that test all the individual parts of a larger module. In our example, we have a TweetUtils module with a number of utility functions. In that situation we would create a suite for the TweetUtils module and then create tests for each individual utility function, like getListOfTweetIds(). describe defines a suite and takes two parameters:

Suite name: the description of what is being tested: 'Tweet utilities module'.
Suite implementation: the function that implements this suite.

In our example, the suite is:

describe('Tweet utilities module', function () {
  // Suite implementation goes here...
});

Defining specs

How do you create an individual test? In Jest, individual tests are called specs. They are defined by calling another global Jest function, it(). Just like describe(), it() takes two parameters:

Spec name: the title that describes what is being tested by this spec: 'returns an array of tweet ids'.
Spec implementation: the function that implements this spec.

In our example, the spec is:

it('returns an array of tweet ids', function () {
  // Spec implementation goes here...
});

Let's take a closer look at the implementation of our spec:

var TweetUtils = require('../TweetUtils');
var tweetsMock = {
  tweet1: {},
  tweet2: {},
  tweet3: {}
};
var expectedListOfTweetIds = ['tweet1', 'tweet2', 'tweet3'];
var actualListOfTweetIds = TweetUtils.getListOfTweetIds(tweetsMock);
expect(actualListOfTweetIds).toEqual(expectedListOfTweetIds);

This spec tests whether the getListOfTweetIds() method of our TweetUtils module returns an array of tweet IDs when given an object with tweets. First we import the TweetUtils module:

var TweetUtils = require('../TweetUtils');

Then we create a mock object that simulates the real tweets object:

var tweetsMock = {
  tweet1: {},
  tweet2: {},
  tweet3: {}
};

The only requirement for this mock object is to have tweet IDs as object keys. The values are not important, hence we choose empty objects. Key names are not important either, so we can name them tweet1, tweet2, and tweet3. This mock object doesn't fully simulate the real tweet object; its sole purpose is to simulate the fact that its keys are tweet IDs. The next step is to create an expected list of tweet IDs:

var expectedListOfTweetIds = ['tweet1', 'tweet2', 'tweet3'];

We know what tweet IDs to expect because we've mocked a tweets object with the same IDs.
The next step is to extract the actual tweet IDs from our mocked tweets object. For that we use getListOfTweetIds(), which takes the tweets object and returns an array of tweet IDs:

var actualListOfTweetIds = TweetUtils.getListOfTweetIds(tweetsMock);

We pass tweetsMock to that method and store the result in actualListOfTweetIds. The variable is named actualListOfTweetIds because this list of tweet IDs is produced by the actual getListOfTweetIds() function that we're testing.

Setting Expectations

The final step will introduce us to an important new concept:

expect(actualListOfTweetIds).toEqual(expectedListOfTweetIds);

Let's think about the process of testing. We need to take an actual value produced by the method that we're testing, getListOfTweetIds(), and match it against the expected value that we know in advance. The result of that match determines whether our test passes or fails. The reason we can guess what getListOfTweetIds() will return in advance is that we've prepared its input; that's our mock object:

var tweetsMock = {
  tweet1: {},
  tweet2: {},
  tweet3: {}
};

So we can expect the following output from calling TweetUtils.getListOfTweetIds(tweetsMock):

['tweet1', 'tweet2', 'tweet3']

But because something can go wrong inside of getListOfTweetIds(), we cannot guarantee this result; we can only expect it. That's why we need to create an expectation. In Jest, an expectation is built using expect(), which takes an actual value, for example actualListOfTweetIds:

expect(actualListOfTweetIds)

Then we chain it with a matcher function that compares the actual value with the expected value and tells Jest whether the expectation was met:

expect(actualListOfTweetIds).toEqual(expectedListOfTweetIds);

In our example we use the toEqual() matcher function to compare two arrays. See the Jest documentation for a list of all built-in matcher functions. And that's how you create a spec. A spec contains one or more expectations.
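The choice of toEqual() matters here: Jest's toBe() matcher checks reference identity, while toEqual() compares structure. Two separately built arrays with the same elements are never the same reference, which plain JavaScript can demonstrate without Jest (the JSON.stringify comparison below is just a crude stand-in for the deep comparison toEqual() performs):

```javascript
const expected = ['tweet1', 'tweet2', 'tweet3'];
const actual = ['tweet1', 'tweet2', 'tweet3'];

// Reference identity (what toBe() relies on): two distinct array objects.
const sameReference = actual === expected;
console.log(sameReference); // false

// Structural equality (what toEqual() checks): same elements, same order.
const structurallyEqual = JSON.stringify(actual) === JSON.stringify(expected);
console.log(structurallyEqual); // true
```

This is why a spec comparing arrays or plain objects should reach for toEqual() rather than toBe().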
Each expectation tests the state of your code. A spec can be either a passing spec or a failing spec: a spec passes only when all of its expectations are met; otherwise it fails. Well done, you've written your first test suite with a single spec that has one expectation. Continue reading React.js Essentials to continue your journey into testing.

Packt
22 Oct 2009
16 min read

Configuring and Securing Python LDAP Applications Part 1

This article mini-series by Matt Butcher will look at the Python application programming interface (API) for the LDAP libraries and, using this API, we will connect to our OpenLDAP server and manipulate the directory information tree. More specifically, we will cover the following in this article series:

Installing and configuring the Python-LDAP library.
Binding to an LDAP directory.
Comparing attributes between the client and server.
Performing searches on the directory.
Modifying the directory information tree with add, delete, and modify operations.
Modifying directory passwords.
Working with LDAP schemas.

This first part deals with the installation and configuration of the Python-LDAP library. We will then see how the binding operation is performed.

Installing Python-LDAP

There are a couple of LDAP libraries available for Python, but the most popular is the Python-LDAP module, which (as with the PHP API) uses the OpenLDAP C library as a base for providing network access to an LDAP server. Like OpenLDAP, the Python-LDAP API is Open Source. It works on Linux, Windows, Mac OS X, BSD, and probably other UNIX operating systems as well (platforms that have both Python and OpenLDAP available). The source code is available at the official Python-LDAP website: http://python-ldap.sourceforge.net. Pre-compiled binaries are available there for many platforms, but we will install the version in the Ubuntu repository. Before installing Python-LDAP, you will need to have the Python scripting language installed. Typically, this is installed by default on Ubuntu (and on most flavors of Linux). Installing Python-LDAP requires only one command:

$ sudo apt-get install python-ldap

This will choose the module or modules that match the installed Python version. That is, if you are running Python 2.4 (the stable version at the time of writing), this will install the python2.4-ldap package.
The library, which consists of several Python packages, will be installed into /usr/lib/python2.4/site-packages/. In Ubuntu, there is no need to run further configuration in order to make use of the Python-LDAP library. We are ready to dive into the API. If you install by hand, either from source or from the binary packages, you may need to add the Python-LDAP library to your Python path. See the Python documentation for details. The Python-LDAP API is well documented. The documentation is available online at the official Python-LDAP website: http://python-ldap.sourceforge.net/docs.shtml. You may find it more convenient to download a copy of the documentation and use it locally. In previous versions of Ubuntu, Python-LDAP documentation was available in the package python-ldap-doc, which could be installed with apt-get. Also, many of the Python-LDAP functions and objects have documentation strings that can be accessed from the Python interpreter like this:

>>> print ldap.initialize.__doc__
Return LDAPObject instance by opening LDAP connection to
LDAP host specified by LDAP URL

Parameters:
uri
      LDAP URL containing at least connection scheme and hostport,
      e.g. ldap://localhost:389
trace_level
      If non-zero a trace output of LDAP calls is generated.
trace_file
      File object where to write the trace output to.
      Default is to use stdout.

The documentation string usually contains a brief description of the function or object, and is a useful quick reference.

A Quick Overview of the Python LDAP API

Now that the package is installed, let's take a quick look at what was installed. The Python-LDAP package comes with nine different modules:

ldap: This is the main LDAP module. It contains the functions necessary for performing LDAP operations, such as binding, searching, adding, and modifying.
ldap.async: Python can do synchronous and asynchronous transactions. This module provides utilities that are useful when performing asynchronous operations.
- ldap.cidict: This contains the cidict class, which is a case-insensitive dictionary. Although LDAP is case-insensitive when it comes to attribute names, it is often necessary to perform case-insensitive operations on dictionary keys.
- ldap.modlist: Utility functions for creating modification records (for performing the LDAP modify operation) are in this package.
- ldap.filter: This module provides a couple of utility functions for creating LDAP search filters.
- ldap.sasl: Python-LDAP's SASL support is partially contained in this package. It is not documented in the online documentation, but there are plenty of notes in the doc strings in this module.
- ldap.schema: This module contains classes that describe the subschema subentry records. It can be used to access schema information.
- ldapurl: This module provides a class for generating and parsing LDAP URLs.
- ldif: This module is used to parse or write LDIF-formatted LDAP records.

Most of the commonly used LDAP features are in the ldap module, and we will focus mainly on using that. Since many of the submodules have only a couple of functions, we will use them in passing but treat them as separate objects of discussion.

A Note on the Python Examples

The Python interpreter (python) can be run interactively. Running Python in an interactive mode can be very useful for discovery and debugging. Further, since it prints useful information directly to the console, it can be useful for demonstration purposes. In many of the examples below, the code is shown as it would be entered in the interactive shell. Here is an example:

>>> h = "Hello World"
>>> h
'Hello World'
>>> print h
Hello World

Lines that begin with >>> and ... are interpreter prompts (similar to $ in shell). Examples with the >>> are run in the interpreter interactively. Some code, however, will be typed into a file (as usual). This code will not have lines beginning with the interpreter prompt.
They tend to look more like this:

h = "Hello World"
print h

Most of the time, features are introduced using the interpreter, but lengthier examples are done in the form of a Python script. Where it might be confusing, I will explicitly say in the text which of the two methods I am using.

Connecting and Binding to the Directory

Now that we have the library installed, we are ready to use the API. The Python-LDAP API connects and binds in two stages. Initializing the LDAP system is done with the ldap.initialize() function. The initialize() function returns an LDAPObject object, which contains methods for performing LDAP operations and retrieving information about the LDAP connection and transactions. A basic initialization is done like this:

>>> import ldap
>>> con = ldap.initialize('ldap://localhost')

The first line of this example imports the ldap module, which contains the initialize() function as well as the LDAPObject that we will make frequent use of. The second line initializes the LDAP code and returns an LDAPObject that we will use to connect to the server. The initialize() function takes a simple LDAP URL (protocol://host:port) as a parameter. Sometimes, you may prefer to pass in host and port information directly. This can be done with the connect(host, port) function, which also returns an LDAPObject object. In addition, if you need to check or set any LDAP options, you should use the get_option() and set_option() functions before binding. For instance, we can set the connection to require a TLS certificate by setting the OPT_X_TLS_DEMAND option:

>>> con.get_option(ldap.OPT_X_TLS_DEMAND)
0
>>> con.set_option(ldap.OPT_X_TLS_DEMAND, True)
>>> con.get_option(ldap.OPT_X_TLS_DEMAND)
1

A Safe Connection

In most production environments, security is a major concern. As we have seen in previous chapters, one major component of security in network-based LDAP services is the use of SSL/TLS-based connections.
There are two ways to get transport-layer security with the Python-LDAP module. The first is to connect to the LDAPS (LDAP over SSL) port. This is done by passing the correct parameter to the initialize() function. Instead of using the ldap:// protocol, which will make an unverified, unencrypted connection to port 389, use an ldaps:// protocol, which will make an SSL connection to port 636 (you can specify an alternate port by appending a colon (:) and then the port number to the end of the URL). Or, instead of using LDAPS, you can perform a Start TLS operation before binding to the server:

>>> import ldap
>>> con = ldap.initialize('ldap://localhost')
>>> con.start_tls_s()

Note that while the call to ldap.initialize() does not actually open a connection, the call to start_tls_s() does create a connection.

Exceptions

Connecting to an LDAP server may result in the raising of an exception, so in production code, it is best to wrap the connection attempt inside of a try/except block. Here is a fragment of a script:

#!/usr/bin/env python
import ldap, sys

server = 'ldap://localhost'
l = ldap.initialize(server)
try:
    l.start_tls_s()
except ldap.LDAPError, e:
    print e.message['info']
    if type(e.message) == dict and e.message.has_key('desc'):
        print e.message['desc']
    else:
        print e
    sys.exit()

In the case above, if the start_tls_s() method results in an error, it will be caught. The except clause checks whether the returned message is a dict (which it should always be), and also checks whether it has the description ('desc') field. If so, it prints the description. Otherwise, it prints the entire message. There are a few dozen exceptions that the Python-LDAP library might raise, but all of them are subclasses of the LDAPError class, and can be caught by the line:

except ldap.LDAPError, e:

Within an LDAPError object, there is a dictionary, called message, which contains the 'info' and 'desc' fields.
The 'info' field contains the information returned from the server, and the 'desc' field contains a description of the error. In general, it is best to use try/except blocks around LDAP operations in order to catch any errors that might occur during processing.

Binding

Once we have an LDAPObject instance, we can bind to the LDAP directory. The Python-LDAP API supports both simple and SASL binding methods, and there are five different bind methods:

- bind(): Takes three required parameters: a DN, a password (or credential, for SASL), and a string indicating what type of bind method to use. Currently, only ldap.AUTH_SIMPLE is supported. This is asynchronous. Example: con.bind(dn, pw, ldap.AUTH_SIMPLE)
- bind_s(): This is the same as above, but it is synchronous, and returns information about the status of the bind.
- simple_bind(): This performs a simple bind. It has two optional parameters: DN and password. If no parameter is specified, this will bind as anonymous. This is asynchronous.
- simple_bind_s(): This is the synchronous version of the above.
- sasl_interactive_bind_s(): This performs a SASL bind, and it takes two parameters: a SASL identifier and a SASL authentication string.

First, for many Python-LDAP functions, including almost all of the LDAP operations, there are both synchronous and asynchronous versions. Synchronous versions, which will block until the server returns a result, have method names that end with _s. The other operations (those that do not end with _s) are asynchronous. An asynchronous call will begin an operation and then return control to the program. The operation will continue in the background. It is the responsibility of the program to periodically check on the operation to see if it has been completed. Since they wait to return any results until the operation has been completed, synchronous methods will often have different return values than their asynchronous counterparts.
Synchronous methods may return the results obtained from the server, or they may have void returns. Asynchronous methods, on the other hand, will always return a message identifier. This identifier can be used to access the results of the operation. Here's an example of the different results for the two different forms of simple bind. First, the synchronous bind:

>>> dn = "uid=matt,ou=users,dc=example,dc=com"
>>> pw = "secret"
>>> con.simple_bind_s( dn, pw )
(97, [])

Notice that this method returns a tuple. Now, look at the asynchronous version:

>>> con.simple_bind( dn, pw )
8
>>> con.result(8)
(97, [])

In this case, the simple_bind() method returned 8, the message identification number for the result. We can use the result() method to fetch the resulting information. The result() method returns a two-item tuple, where the first item is the status code (97 means success), and the second is a list of messages from the server. In this case, the list is empty.

Notes on Getting Results

There are two noteworthy caveats about fetching results. First, a particular result can only be fetched once. You cannot call result() with the same message ID multiple times. Second, you can execute multiple asynchronous operations without checking the results. The consequence of doing this is that all of the results will be stored until they are fetched. This consumes memory, and can lead to confusing results if result() or result( ldap.RES_ANY ) is called. Later in this chapter, we will see more sophisticated uses of synchronous and asynchronous methods, but for now we will continue looking at methods of binding. The bind() and bind_s() methods work the same way, but they require a third parameter specifying which sort of authentication mechanism to use. Unfortunately, at the time of this writing, only the AUTH_SIMPLE form of binding (plain old simple bind) is supported by this mechanism:

>>> con.bind_s( dn, pw, ldap.AUTH_SIMPLE )
(97, [])

This performs a simple bind to the server.
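The message-ID bookkeeping that asynchronous calls impose can be seen without a live directory server. The toy class below is purely illustrative (ToyConnection is not part of the Python-LDAP API); it mimics the pattern described above: an asynchronous call hands back an ID, and each result can be fetched exactly once:

```python
# Toy illustration of the async pattern: operations return a message ID,
# and result() may be called only once per ID (as with Python-LDAP).
class ToyConnection:
    def __init__(self):
        self._next_id = 0
        self._pending = {}

    def simple_bind(self, dn, pw):
        # Asynchronous: record a pending result and return its ID.
        self._next_id += 1
        self._pending[self._next_id] = (97, [])  # 97 = success
        return self._next_id

    def result(self, msgid):
        # pop() means a second fetch of the same ID fails, as in the real API.
        return self._pending.pop(msgid)

con = ToyConnection()
msgid = con.simple_bind("uid=matt,ou=users,dc=example,dc=com", "secret")
print(con.result(msgid))  # (97, [])
```

Trying to fetch the same message ID a second time raises an error, which is the toy analogue of the "a result can only be fetched once" caveat.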
Exceptions

A bind can fail for a number of reasons, the most common being that the connection failed (the CONNECT_ERROR exception) or authentication failed (INVALID_CREDENTIALS). In production code, it is a good idea to check for these exceptions using try/except blocks. By checking for them separately, you can distinguish between, say, authentication failures and other, more serious failures:

l = ldap.initialize(server)
try:
    #l.start_tls_s()
    l.bind_s(user_dn, user_pw)
except ldap.INVALID_CREDENTIALS:
    print "Your username or password is incorrect."
    sys.exit()
except ldap.LDAPError, e:
    if type(e.message) == dict and e.message.has_key('desc'):
        print e.message['desc']
    else:
        print e
    sys.exit()

In this case, if the failure is due to the user entering the wrong DN or password, a message to that effect is printed. Otherwise, the error description provided by the LDAP library is printed.

SASL Interactive Binds

SASL is a robust authentication mechanism, but the flexibility and adaptability of SASL comes at the cost of additional complexity. This additional complexity is evident in the Python-LDAP module. SASL binding is implemented differently than the other bind methods. First, there is no asynchronous version of the SASL bind method (not all thread-safety issues have been worked out in this module yet). Since the SASL code is not as stable as the rest of the API, you may want to stick to simple binding (with SSL/TLS protection) rather than rely upon SASL support. There is only one SASL binding method, sasl_interactive_bind_s(). This method takes two arguments. The first is a DN string. It is almost always left blank, since with SASL we usually authenticate with some other identifier. The second argument is a sasl object (or a subclass of a sasl object). The sasl object contains a dictionary of information that the SASL subsystem uses to perform authentication. Each different SASL mechanism is implemented as a class that is a subclass of the sasl object.
There are a handful of different subclasses that come with the Python-LDAP module, though you can create your own if you need support for a different mechanism.

- cram_md5: This class implements the CRAM-MD5 SASL mechanism. A new cram_md5 object can be created with a constructor that passes in the authentication ID, a password, and an optional authorization ID.
- digest_md5: This implements the DIGEST-MD5 SASL mechanism. Like cram_md5, this object can be constructed with an authentication ID, a password, and an optional authorization ID.
- gssapi: This implements the GSSAPI mechanism, and its constructor has only the optional authorization ID. It is used to perform Kerberos V authentication.
- external: This implements the EXTERNAL SASL mechanism, which uses an underlying transport security mechanism (like SSL/TLS). Its constructor takes only the optional authorization ID.

Our LDAP server is configured to allow DIGEST-MD5 SASL connections, so we will walk through an example of performing this sort of SASL authentication.

>>> import ldap
>>> import ldap.sasl
>>> user_name = "matt"
>>> pw = "secret"
>>>
>>> con = ldap.initialize("ldap://localhost")
>>> auth_tokens = ldap.sasl.digest_md5( user_name, pw )
>>>
>>> con.sasl_interactive_bind_s( "", auth_tokens )
0

To begin with, we import the ldap and ldap.sasl packages, and we store the user name and password information in a couple of variables. After initializing a connection, we need to create a new sasl object, one that will contain the information necessary to perform DIGEST-MD5 authentication. We do this by constructing a new digest_md5 object:

>>> auth_tokens = ldap.sasl.digest_md5( user_name, pw )

Now, auth_tokens points to our new SASL object. Next, we need to bind. This is done with the sasl_interactive_bind_s() method of the LDAPObject:

>>> con.sasl_interactive_bind_s( "", auth_tokens )

If a SASL interactive bind is successful, then this method will return an integer.
Otherwise, an INVALID_CREDENTIALS exception will be raised:

>>> auth_tokens = ldap.sasl.digest_md5( "foo", pw )
>>> try:
...     con.sasl_interactive_bind_s( "", auth_tokens )
... except ldap.INVALID_CREDENTIALS, e:
...     print e
...
{'info': 'SASL(-13): user not found: no secret in database', 'desc': 'Invalid credentials'}

In this case, the user foo was not found in the SASL DB, and the SASL subsystem returned an error.
Packt
30 Apr 2015
10 min read

Extracting data physically with dd

In this article by Rohit Tamma and Donnie Tindall, authors of the book Learning Android Forensics, we will cover physical data extraction using free and open source tools wherever possible. The majority of the material covered in this article will use the ADB methods. The dd command should be familiar to any examiner who has done traditional hard drive forensics. The dd command is a Linux command-line utility used, by definition, to convert and copy files, but it is frequently used in forensics to create bit-by-bit images of entire drives. Many variations of dd also exist and are commonly used, such as dcfldd, dc3dd, ddrescue, and dd_rescue. As the dd command is built for Linux-based systems, it is frequently included on Android platforms. This means that a method for creating an image of the device often already exists on the device! The dd command has many options that can be set, of which only the forensically important ones are listed here. The format of the dd command is as follows:

dd if=/dev/block/mmcblk0 of=/sdcard/blk0.img bs=4096 conv=notrunc,noerror,sync

- if: This option specifies the path of the input file to read from.
- of: This option specifies the path of the output file to write to.
- bs: This option specifies the block size. Data is read and written in the size of the block specified; this defaults to 512 bytes if not specified.
- conv: This option specifies the conversion options as its attributes:
  - notrunc: This option does not truncate the output file.
  - noerror: This option continues imaging if an error is encountered.
  - sync: In conjunction with the noerror option, this option writes x00 for blocks with an error. This is important for maintaining file offsets within the image.

Do not mix up the if and of flags; this could result in overwriting the target device! A full list of command options can be found at http://man7.org/linux/man-pages/man1/dd.1.html.
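The flags above can be rehearsed safely on a throwaway file before ever touching a device. In this sketch, /dev/zero stands in for the real block device (such as /dev/block/mmcblk0) and the output goes to an ordinary file:

```shell
# Rehearse the forensic dd flags on harmless input: 4 blocks of 512 bytes.
dd if=/dev/zero of=blk_test.img bs=512 count=4 conv=notrunc,noerror,sync 2>/dev/null
wc -c < blk_test.img   # 2048 = 4 blocks x 512 bytes
```

Because conv=sync pads any short or failed read out to the full block size, the output size is always a whole multiple of bs, which is what preserves file offsets within the image.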
Note that there is an important correlation between the block size and the noerror and sync flags: if an error is encountered, x00 will be written for the entire block that was read (as determined by the block size). Thus, smaller block sizes result in less data being missed in the event of an error. The downside is that, typically, smaller block sizes result in a slower transfer rate. An examiner will have to decide whether a timely or a more accurate acquisition is preferred. Booting into recovery mode for the imaging process is the most forensically sound method.

Determining what to image

When imaging a computer, an examiner must first find what the drive is mounted as; /dev/sda, for example. The same is true when imaging an Android device. The first step is to launch the ADB shell and view the /proc/partitions file using the following command:

cat /proc/partitions

The output will show all partitions on the device. In the output shown in the preceding screenshot, mmcblk0 is the entirety of the flash memory on the device. To image the entire flash memory, we could use /dev/block/mmcblk0 as the input file flag (if) for the dd command. Everything following it, indicated by p1-p29, is a partition of the flash memory. The size is shown in blocks; in this case, the block size is 1024 bytes, for a total internal storage size of approximately 32 GB. To obtain a full image of the device's internal memory, we would run the dd command with mmcblk0 as the input file. However, we know that most of these partitions are unlikely to be forensically interesting; we're most likely only interested in a few of them. To view the corresponding names for each partition, we can look in the device's by-name directory. This does not exist on every device, and is sometimes in a different path, but for this device it is found at /dev/block/msm_sdcc.1/by-name.
By navigating to that directory and running the ls -al command, we can see where each block is symbolically linked, as shown in the following screenshot: If our investigation were only interested in the userdata partition, we now know that it is mmcblk0p28, and could use that as the input file to the dd command. If the by-name directory does not exist on the device, it may not be possible to identify every partition on the device. However, many of them can still be found by using the mount command within the ADB shell. Note that the following screenshot is from a different device that does not contain a by-name directory, so the data partition is not mmcblk0p28: On this device, the data partition is mmcblk0p34. If the mount command does not work, the same information can be found using the cat /proc/mounts command. Other options to identify partitions, depending on the device, are the cat /proc/mtd or cat /proc/yaffs commands; these may work on older devices. Newer devices may include an fstab file in the root directory (typically called fstab.<device>) that will list mountable partitions.

Writing to an SD card

The output file of the dd command can be written to the device's SD card. This should only be done if the suspect SD card can be removed and replaced with a forensically sterile SD card, to ensure that the dd command's output does not overwrite evidence. Obviously, if writing to an SD card, ensure that the SD card is larger than the partition being imaged. On newer devices, the /sdcard partition is actually a symbolic link to /data/media. In this case, using the dd command to copy the /data partition to the SD card won't work, and could corrupt the device, because the input file is essentially being written to itself. To determine where the SD card is symbolically linked to, simply open the ADB shell and run the ls -al command.
If the SD card partition is not shown, the SD card likely needs to be mounted in recovery mode using the steps shown in the article Extracting Data Logically from Android Devices. In the following example, /sdcard is symbolically linked to /data/media. This indicates that the dd command's output should not be written to the SD card. In the example that follows, /sdcard is not a symbolic link to /data, so the dd command's output can be used to write the /data partition image to the SD card: On older devices, the SD card may not even be symbolically linked. After determining which block to read and where the SD card is symbolically linked, image the /data partition to the /sdcard using the following command:

dd if=/dev/block/mmcblk0p28 of=/sdcard/data.img bs=512 conv=notrunc,noerror,sync

Now, an image of the /data partition exists on the SD card. It can be pulled to the examiner's machine with the ADB pull command, or simply read from the SD card.

Writing directly to an examiner's computer with netcat

If the image cannot be written to the SD card, an examiner can use netcat to write the image directly to their machine. The netcat tool is a Linux-based tool used for transferring data over a network connection. We recommend using a Linux or a Mac computer for using netcat, as it is built in, though Windows versions do exist. The examples below were done on a Mac.

Installing netcat on the device

Very few Android devices, if any, come with netcat installed. To check, simply open the ADB shell and type nc. If it returns saying nc is not found, netcat will have to be installed manually on the device. Netcat compiled for Android can be found in many places online. We have shared the version we used at http://sourceforge.net/projects/androidforensics-netcat/files/. If we look back at the results from our mount command in the previous section, we can see that the /dev partition is mounted as tmpfs.
The Linux term tmpfs means that the partition is meant to appear as an actual filesystem on the device, but is truly only stored in RAM. This means we can push netcat here without making any permanent changes to the device, using the following command on the examiner's computer:

adb push nc /dev/Examiner_Folder/nc

The command should have created the Examiner_Folder in /dev, and nc should be in it. This can be verified by running the following command in the ADB shell:

ls /dev/Examiner_Folder

Using netcat

Now that the netcat binary is on the device, we need to give it permission to execute from the ADB shell. This can be done as follows:

chmod +x /dev/Examiner_Folder/nc

We will need two terminal windows open, with the ADB shell open in one of them. The other will be used to listen to the data being sent from the device. Now we need to enable port forwarding over ADB from the examiner's computer:

adb forward tcp:9999 tcp:9999

9999 is the port we chose to use for netcat; it can be any arbitrary port number between 1023 and 65535 on a Linux or Mac system (1023 and below are reserved for system processes, and require root permission to use). Windows will allow any port to be assigned. In the terminal window with the ADB shell, run the following command:

dd if=/dev/block/mmcblk0p34 bs=512 conv=notrunc,noerror,sync | /dev/Examiner_Folder/nc -l -p 9999

mmcblk0p34 is the user data partition on this device; however, the entire flash memory or any other partition could also be imaged with this method. In most cases, it is best practice to image the entirety of the flash memory in order to acquire all possible data from the device. Some commercial forensic tools may also require the entire memory image, and may not properly handle an image of a single partition. In the other terminal window, run:

nc 127.0.0.1 9999 > data_partition.img

The data_partition.img file should now be created in the current directory of the examiner's computer.
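The pipeline can be rehearsed locally before working on a live device. In this sketch, /dev/zero stands in for the real partition and a plain pipe stands in for the forwarded TCP connection (cat plays the role of the nc listener), so no device or network setup is needed:

```shell
# Local rehearsal of the dd-to-netcat transfer: 8 blocks of 512 bytes
# flow through a pipe into the output image, just as they would over nc.
dd if=/dev/zero bs=512 count=8 conv=notrunc,noerror,sync 2>/dev/null \
  | cat > data_partition.img
wc -c < data_partition.img   # 4096 = 8 blocks x 512 bytes
```

Verifying the byte count of the received image against the expected partition size is a quick sanity check that the transfer completed without truncation.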
When the data is finished transferring, netcat in both terminals will terminate and return to the command prompt. The process can take a significant amount of time, depending on the size of the image.

Summary

This article discussed techniques used for physically imaging internal memory or SD cards. Some of the common properties of and problems with dd are as follows:

- Usually pre-installed on the device
- May not work on MTD blocks
- Does not obtain the out-of-band area

Additionally, each imaging technique can be used either to save the image on the device (typically on the SD card), or with netcat to write the file to the examiner's computer.

Writing to SD card:
- Easy; doesn't require additional binaries to be pushed to the device
- Familiar to most examiners
- Cannot be used if the SD card is symbolically linked to the partition being imaged
- Cannot be used if the entire memory is being imaged

Using netcat:
- Usually requires yet another binary to be pushed to the device
- Somewhat complicated; the steps must be followed exactly
- Works no matter what is being imaged
- May be more time consuming than writing to the SD card

Further resources on this subject: Reversing Android Applications, Introduction to Mobile Forensics, Processing the Case
Amey Varangaonkar
24 Jul 2018
9 min read

Top 5 Deep Learning Architectures

If you are a deep learning practitioner or someone who wants to get into the world of deep learning, you might be well acquainted with neural networks already. Neural networks, inspired by biological neural networks, are pretty useful when it comes to solving complex, multi-layered computational problems. Deep learning has stood out pretty well in several high-profile research fields, including facial and speech recognition, natural language processing, machine translation, and more. In this article, we look at the top 5 popular and widely-used deep learning architectures you should know in order to advance your deep learning knowledge and research.

Convolutional Neural Networks

Convolutional Neural Networks, or CNNs in short, are the popular choice of neural networks for different Computer Vision tasks such as image recognition. The name 'convolution' is derived from a mathematical operation involving the convolution of different functions. There are 4 primary steps or stages in designing a CNN:

- Convolution: The input signal is received at this stage
- Subsampling: Inputs received from the convolution layer are smoothened to reduce the sensitivity of the filters to noise or any other variation
- Activation: This layer controls how the signal flows from one layer to the other, similar to the neurons in our brain
- Fully connected: In this stage, all the layers of the network are connected, with every neuron from a preceding layer connected to the neurons of the subsequent layer

Here is an in-depth look at the CNN architecture and its working, as explained by the popular AI researcher Giancarlo Zaccone.
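The convolution stage described above can be illustrated in a few lines of plain Python. This is a simplified sketch (a single channel, no padding or stride, and the cross-correlation form that most CNN libraries actually compute), not production CNN code:

```python
# Minimal 2D convolution: slide a kernel over an image and sum
# the element-wise products at each position.
def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            s = sum(image[i + a][j + b] * kernel[a][b]
                    for a in range(kh) for b in range(kw))
            row.append(s)
        out.append(row)
    return out

image = [[1, 2, 3],
         [4, 5, 6],
         [7, 8, 9]]
edge = [[1, -1]]            # a 1x2 horizontal edge-detect filter
print(conv2d(image, edge))  # [[-1, -1], [-1, -1], [-1, -1]]
```

The constant output here reflects the constant horizontal gradient of the toy image; in a real CNN, the kernel weights are learned rather than hand-picked.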
A sample CNN in action

Advantages of CNN:
- Very good for visual recognition
- Once a segment within a particular sector of an image is learned, the CNN can recognize that segment anywhere else in the image

Disadvantages of CNN:
- A CNN is highly dependent on the size and quality of the training data
- Highly susceptible to noise

Recurrent Neural Networks

Recurrent Neural Networks (RNNs) have been very popular in areas where the sequence in which the information is presented is crucial. As a result, they find a lot of applications in real-world domains such as natural language processing, speech synthesis, and machine translation. RNNs are called 'recurrent' mainly because a uniform task is performed for every single element of a sequence, with the output dependent on the previous computations as well. Think of these networks as having a memory, where every piece of calculated information is captured, stored, and utilized to calculate the final outcome. Over the years, quite a few varieties of RNNs have been researched and developed:

- Bidirectional RNN: The output in this type of RNN depends not only on the past but also on future outcomes
- Deep RNN: In this type of RNN, there are multiple layers present per step, allowing for a greater rate of learning and more accuracy

RNNs can be used to build industry-standard chatbots that interact with customers on websites. Given a sequence of signals from an audio wave, RNNs can also be used to predict a correct sequence of phonetic segments with a given probability.

Advantages of RNN:
- Unlike a traditional neural network, an RNN shares the same parameters across all steps. This greatly reduces the number of parameters that we need to learn
- RNNs can be used along with CNNs to generate accurate descriptions for unlabeled images

Disadvantages of RNN:
- RNNs find it difficult to track long-term dependencies. This is especially true in the case of long sentences and paragraphs having too many words in between the noun and the verb.
- RNNs cannot be stacked into very deep models. This is due to the activation function used in RNN models, which makes the gradient decay over multiple layers.

Autoencoders

Autoencoders apply the principle of backpropagation in an unsupervised environment. Autoencoders, interestingly, have a close resemblance to PCA (Principal Component Analysis), except that they are more flexible. One of the popular applications of autoencoders is anomaly detection, for example, detecting fraud in financial transactions in banks. Basically, the core task of autoencoders is to identify and determine what constitutes regular, normal data and then identify the outliers or anomalies. Autoencoders usually represent data through multiple hidden layers such that the output signal is as close to the input signal as possible. There are 4 major types of autoencoders being used today:

- Vanilla autoencoder: The simplest form of autoencoder there is, i.e. a neural net with one hidden layer
- Multilayer autoencoder: When one hidden layer is not enough, an autoencoder can be extended to include more hidden layers
- Convolutional autoencoder: In this type, convolutions are used in the autoencoder instead of fully-connected layers
- Regularized autoencoder: This type of autoencoder uses a special loss function that enables the model to have properties beyond the basic ability to copy a given input to the output

This article demonstrates training an autoencoder using H2O, a popular machine learning and AI platform.
A basic representation of an Autoencoder

Advantages of Autoencoders:
- Autoencoders give a resultant model that is primarily based on the data rather than predefined filters
- Very low complexity means they are easier to train

Disadvantages of Autoencoders:
- Training time can sometimes be very high
- If the training data is not representative of the testing data, then the information that comes out of the model can be obscured and unclear
- Some autoencoders, especially of the variational type, introduce a deterministic bias into the model

Generative Adversarial Networks

The basic premise of Generative Adversarial Networks (GANs) is the training of two deep learning models simultaneously. These deep learning networks basically compete with each other: the model that tries to generate new instances or examples is called the generator, and the model that tries to classify whether a particular instance originates from the training data or from the generator is called the discriminator. GANs, a recent breakthrough in the field of deep learning, were a concept put forth by the popular deep learning expert Ian Goodfellow in 2014. They find large and important applications in Computer Vision, especially image generation. Read more about the structure and the functionality of GANs in the official paper submitted by Ian Goodfellow.

General architecture of a GAN (Source: deeplearning4j)

Advantages of GAN:
- Per Goodfellow, GANs allow for efficient training of classifiers in a semi-supervised manner
- Because of the improved accuracy of the model, the generated data is almost indistinguishable from the original data
- GANs do not introduce any deterministic bias, unlike variational autoencoders

Disadvantages of GAN:
- The generator and discriminator working efficiently is crucial to the success of a GAN; the whole system fails even if one of them fails
- Both the generator and discriminator are separate systems and trained with different loss functions.
Hence, the time required to train the entire system can get quite high. Interested to know more about GANs? Here's what you need to know about them.

ResNets

Ever since they gained popularity in 2015, ResNets, or Deep Residual Networks, have been widely adopted by data scientists and AI researchers. As you already know, CNNs are highly useful for solving image classification and visual recognition problems. As these tasks become more complex, training the neural network gets a lot more difficult, since additional deep layers are required to compute and enhance the accuracy of the model. Residual learning is a concept designed to tackle this very problem, and the resulting architecture is popularly known as a ResNet. A ResNet consists of a number of residual modules, where each module represents a layer. Each layer consists of a set of functions to be performed on the input. The depth of a ResNet can vary greatly - the one developed by Microsoft researchers for an image classification problem had 152 layers!

A basic building block of a ResNet (Source: Quora)

Advantages of ResNets
ResNets are more accurate and, in some cases, require fewer weights than LSTMs and RNNs
They are highly modular: hundreds or thousands of residual layers can be added to create a network, which can then be trained
ResNets can be designed to determine how deep a particular network needs to be

Disadvantages of ResNets
If the layers in a ResNet are too deep, errors can be hard to detect and cannot be propagated back quickly and correctly; at the same time, if the layers are too narrow, the learning might not be very efficient

Apart from the ones above, a few more deep learning models are being increasingly adopted and preferred by data scientists. These definitely deserve an honorable mention:

LSTM: LSTMs are a special kind of Recurrent Neural Network that include a memory cell able to hold information for long periods of time.
A set of gates is used to determine when particular information enters the memory and when it is forgotten.
SqueezeNet: One of the newer but very powerful deep learning architectures, extremely efficient on low-bandwidth platforms such as mobile.
CapsNet: CapsNet, or Capsule Networks, is a recent breakthrough in the field of deep learning and neural network modeling. Mainly used for accurate image recognition tasks, it is an advanced variation of CNNs.
SegNet: A popular deep learning architecture used especially to solve the image segmentation problem.
Seq2Seq: An upcoming deep learning architecture increasingly used for machine translation and for building efficient chatbots.

So there you have it! Thanks to the intense research efforts in deep learning and AI, we now have a variety of deep learning models at our disposal to solve a variety of problems - both functional and computational. What's even better is that we have the liberty to choose the most appropriate deep learning architecture based on the problem at hand.

[box type="shadow" align="" class="" width=""]Editor's Tip: It is very important to know the best deep learning frameworks you can use to train your models. Here are the top 10 deep learning frameworks for you.[/box]

In contrast to the traditional programming approach, where we tell the computer what to do, deep learning models figure out the problem and devise the most appropriate solution on their own - however complex the problem may be. No wonder these deep learning architectures are being researched and deployed on a large scale by major market players such as Google, Facebook, Microsoft and many others.

Packt Explains… Deep Learning in 90 seconds
Behind the scenes: Deep learning evolution and core concepts
Facelifting NLP with Deep Learning
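The gating mechanism in the LSTM description above can be sketched in a few lines of NumPy. The dimensions, weight layout (all four gates packed into one matrix), and initialization below are illustrative assumptions, not a reference implementation:

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def lstm_step(x, h, c, W, b):
    """One LSTM time step: gates decide what the memory cell keeps or forgets."""
    d = h.size
    z = np.concatenate([x, h]) @ W + b   # all four gate pre-activations at once
    f = sigmoid(z[0*d:1*d])              # forget gate: what to erase from the cell
    i = sigmoid(z[1*d:2*d])              # input gate: what new information to write
    o = sigmoid(z[2*d:3*d])              # output gate: what to expose as hidden state
    g = np.tanh(z[3*d:4*d])              # candidate cell content
    c_new = f * c + i * g                # the cell can hold information over long spans
    h_new = o * np.tanh(c_new)
    return h_new, c_new

# Illustrative usage: input of size 3, hidden/cell state of size 5.
rng = np.random.default_rng(0)
x = rng.normal(size=3)
h = np.zeros(5)
c = np.zeros(5)
W = rng.normal(scale=0.1, size=(3 + 5, 4 * 5))
b = np.zeros(4 * 5)
h1, c1 = lstm_step(x, h, c, W, b)
```

Because the forget gate multiplies the previous cell state rather than passing it through a squashing activation, gradients along the cell can survive many time steps - the very decay problem noted for plain RNNs at the top of this article.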
Eliot Lash
14 Dec 2015
6 min read

Making an In-Game Console in Unity Part 1

This article is intended for intermediate-level Unity developers or above.

One of my favorite tools that I learned about while working in the game industry is the in-game console. It's essentially a bare-bones command-line environment where you can issue text commands to your game and get text output. Unlike Unity's built-in console (which is really just a log), it can take input and display output on whatever device the game is running on.

I'm going to walk you through making a console using uGUI, Unity's built-in GUI framework available in Unity 4.6 and later. I'm assuming you have some familiarity with it already, so if not I'd recommend reading the UI Overview before continuing. I'll also show how to implement a simple input parser. I've included a unitypackage with an example scene showing the console all set up, as well as a prefab console that you can drop into an existing scene if you'd like. However, I will walk you through setting everything up from scratch below.

If you don't have a UI Canvas in your scene yet, make one by selecting GameObject > UI > Canvas from the menu bar.

Now, let's get started by making a parent object for our console view. Right click on the Canvas in the hierarchy and select "Create Empty". There will now be an empty GameObject in the Canvas hierarchy that we can use as the parent for our console. Rename this object to "ConsoleView".

I personally like to organize my GUI hierarchy a bit to make it easier to do flexible layouts and turn elements of the GUI off and on, so I also made some additional parent objects: "HUD" (GUI elements that are part of a layer that draws over the game, usually while the game is running) and a child of that called "DevHUD", for those HUD elements that are only needed during development. This makes it easier to disable or delete the DevHUD when making a production build of my game. However, this is optional.
Enter 2D selection mode and scale the ConsoleView so it fills the width of the Canvas and most of its height. Then set its anchor mode to "stretch top".

Now right click on ConsoleView in the hierarchy and select "Create Empty". Rename this new child "ConsoleViewContainer". Drag it to the same size as its parent, and set its anchor mode to "stretch stretch". We need this additional container because the console needs the ability to show and hide during gameplay, so we will be enabling and disabling ConsoleViewContainer. But we still need the ConsoleView object to stay enabled so that it can listen for the special gesture/keypress which the user will input to bring up the console.

Next, we'll create our text input field. Right click on ConsoleViewContainer in the hierarchy and select UI > Input Field. Align the InputField with the upper left corner of ConsoleViewContainer and drag it out to about 80% of the screen width. Then set the anchor mode to "stretch top". I prefer a dark console, so I changed the Image Color to dark grey. Open up the children of the InputField and you can edit the placeholder text; I set mine to "Console input". You may also change the Placeholder and Text color to white if you want to use a dark background.

On some platforms at the time of writing, Unity won't handle the native enter/submit button correctly, so we're going to add a fallback enter button next. (If you're sure this won't be an issue on your platforms, you can skip this paragraph and resize the console input to fill the width of the container.) Right click on ConsoleViewContainer in the hierarchy and select UI > Button. Align the button to the right of the InputField and set the anchor to "stretch top". Rename the Button to EnterBtn. Select its text child in the hierarchy and edit the text to say "Ent".

Next, we're going to make the view for the console log output. Right click on ConsoleViewContainer in the hierarchy and select UI > Image.
Drag the image to fill the area below the InputField and set the anchor to "stretch stretch". Rename the Image to LogView. If you want a dark console (you know you do!) change the image color to black. Now at the bottom of the inspector, click "Add Component" and select UI > Mask. Again, click "Add Component" and select UI > Scroll Rect.

Right click on LogView in the hierarchy and select UI > Text. Scale it to fill the LogView and set the anchor to "stretch bottom". Rename it to LogText. Set the text to bottom align. If you're doing a dark console, set the text color to white. To make sure we've got everything set up properly, add a few paragraphs of placeholder text (my favorite source for this is the hipster ipsum generator). Now drag the top way up past the top of the canvas to give room for the log scrollback. If it's too short, the log rotation code we'll write later might not work properly.

Now, we'll make the scrollbar. Right click on ConsoleViewContainer in the hierarchy and select UI > Scrollbar. In the Scrollbar component, set the Direction to "Bottom To Top", and set the Value to 0. Size it to fit between the LogView and the edge of the container and set the anchor to "stretch stretch".

Finally, we'll hook up our complete scroll view. Select LogView and, in the Scroll Rect component, drag LogText into the "Content" property and the Scrollbar into the "Vertical Scrollbar" property. Then, uncheck the "Horizontal" box.

Go ahead and run the game to make sure we've set everything up correctly. You should be able to drag the scroll bar and watch the text scroll down. If not, go back through the previous steps and try to figure out if you missed anything.

This concludes part one. Stay tuned for part two of this series, where you will learn how to program the behavior of the console. Find more Unity game development tutorials and content on our Unity page.
About the Author Eliot Lash is an independent game developer and consultant with several mobile titles in the works. In the past, he has worked on Tiny Death Star and the Tap Tap Revenge series. You can find him at eliotlash.com.