How-To Tutorials - Application Development

Designing a User Interface

Packt
23 Nov 2016
7 min read
In this article by Marcin Jamro, the author of the book Windows Application Development Cookbook, we will see how to add a button to your application.

Introduction

You know how to start your adventure by developing universal applications for smartphones, tablets, and desktops running on the Windows 10 operating system. The next step is to learn how to design particular pages within the application to provide the user with a convenient user interface that works smoothly on screens with various resolutions. Fortunately, designing the user interface is really simple using the XAML language, as well as Microsoft Visual Studio Community 2015.

A designer can use a set of predefined controls, such as textboxes, checkboxes, images, or buttons. What's more, controls can easily be arranged vertically, horizontally, or in a grid. This is not all; developers can prepare their own controls as well. Such controls can be configured and placed on many pages within the application. It is also possible to prepare dedicated versions of particular pages for various types of devices, such as smartphones and desktops.

You have already learned how to place a new control on a page by dragging it from the Toolbox window. In this article, you will see how to add a control as well as how to handle controls programmatically. Thus, controls can change their appearance, or new controls can be added to the page, when specific conditions are met.

Another important question is how to provide the user with a consistent user interface within the whole application. While developing solutions for the Windows 10 operating system, such a task can easily be accomplished by applying styles. In this article, you will learn how to specify both page-limited and application-limited styles that can be applied to either particular controls or to all the controls of a given type.

At the end, you could ask yourself a simple question, "Why should I restrict access to my new awesome application only to people who know a particular language in which the user interface is prepared?" You should not! And in this article, you will also learn how to localize content and present it in various languages. Of course, the localization will use additional resource files, so translations can be prepared not by a developer, but by a specialist who knows the given language well.

Adding a button

When developing applications, you can use a set of predefined controls, among which a button exists. It allows you to handle the event of the user pressing the button. Of course, the appearance of the button can easily be adjusted, for instance, by choosing a proper background or border, as you will see in this recipe.

The button can present textual content; however, it can also be adjusted to the user's needs, for instance, by choosing a proper color or font size. That is not all, because the content shown on the button need not be only textual. For instance, you can prepare a button that presents an image instead of a text, a text over an image, or a text located next to a small icon that visually informs about the operation. Such modifications are presented in the following part of this recipe as well.

Getting ready

To step through this recipe, you only need the automatically generated project.
How to do it…

Add a button to the page by modifying the content of the MainPage.xaml file, as follows:

<Page (...)>
    <Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
        <Button Content="Click me!"
                Foreground="#0a0a0a"
                FontWeight="SemiBold"
                FontSize="20"
                FontStyle="Italic"
                Background="LightBlue"
                BorderBrush="RoyalBlue"
                BorderThickness="5"
                Padding="20 10"
                VerticalAlignment="Center"
                HorizontalAlignment="Center" />
    </Grid>
</Page>

Generate a method for handling the event of clicking the button by selecting the button (either in the graphical designer or in the XAML code) and double-clicking on the Click field in the Properties window with the Event handlers for the selected element option (the lightning icon) selected. The automatically generated method is as follows:

private void Button_Click(object sender, RoutedEventArgs e)
{
}

How it works…

In the preceding example, the Button control is placed within a grid. It is centered both vertically and horizontally, as specified by the VerticalAlignment and HorizontalAlignment properties that are set to Center. The background color (Background) is set to LightBlue. The border is specified by two properties, namely BorderBrush and BorderThickness. The first property chooses its color (RoyalBlue), while the other represents its thickness (5 pixels). What's more, the padding (Padding) is set to 20 pixels on the left- and right-hand side and 10 pixels at the top and bottom.

The button presents the Click me! text defined as a value of the Content property. The text is shown in the color #0a0a0a with a semi-bold italic font of size 20, as specified by the Foreground, FontWeight, FontStyle, and FontSize properties, respectively. If you run the application on a local machine, you should see the styled button centered on the page.

It is worth mentioning that the IDE supports a live preview of the designed page. So, you can modify the values of particular properties and get real-time feedback regarding the target appearance directly in the graphical designer. It is a really great feature that does not require you to run the application to see the impact of each introduced change.

There's more…

As already mentioned, even the Button control has many advanced features. For example, you could place an image instead of a text, present a text over an image, or show an icon next to the text. Such scenarios are presented and explained now.

First, let's focus on replacing the textual content with an image by modifying the XAML code that represents the Button control, as follows:

<Button MaxWidth="300" VerticalAlignment="Center" HorizontalAlignment="Center">
    <Image Source="/Assets/Image.jpg" />
</Button>

Of course, you should also add the Image.jpg file to the Assets directory. To do so, navigate to Add | Existing Item… from the context menu of the Assets node in the Solution Explorer window. In the Add Existing Item window, choose the Image.jpg file and click on the Add button.

As you can see, the previous example uses the Image control. No more information about this control is presented in this recipe, because it is the topic of one of the next recipes, namely Adding an image.

The second additional example presents a button with a text over an image. To do so, let's modify the XAML code, as follows:

<Button MaxWidth="300" VerticalAlignment="Center" HorizontalAlignment="Center">
    <Grid>
        <Image Source="/Assets/Image.jpg" />
        <TextBlock Text="Click me!"
                   Foreground="White"
                   FontWeight="Bold"
                   FontSize="28"
                   VerticalAlignment="Bottom"
                   HorizontalAlignment="Center"
                   Margin="10" />
    </Grid>
</Button>

You'll find more information about the Grid, Image, and TextBlock controls in the next recipes, namely Arranging controls in a grid, Adding an image, and Adding a label. For this reason, the usage of such controls is not explained in the current recipe.

As the last example, you will see a button that contains both a textual label and an icon. Such a solution can be accomplished using the StackPanel, TextBlock, and Image controls, as you can see in the following code snippet:

<Button Background="#353535" VerticalAlignment="Center" HorizontalAlignment="Center" Padding="20">
    <StackPanel Orientation="Horizontal">
        <Image Source="/Assets/Icon.png" MaxHeight="32" />
        <TextBlock Text="Accept"
                   Foreground="White"
                   FontSize="28"
                   Margin="20 0 0 0" />
    </StackPanel>
</Button>

Of course, you should not forget to add the Icon.png file to the Assets directory, as already explained in this recipe.
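As a quick illustration of handling controls programmatically, the Click handler generated earlier can modify the button that raised the event. This is a minimal sketch, not from the book; the content text and brush color are hypothetical, and SolidColorBrush and Colors come from the Windows.UI.Xaml.Media and Windows.UI namespaces:

private void Button_Click(object sender, RoutedEventArgs e)
{
    // The sender is the Button instance that was clicked.
    var button = (Button)sender;

    // Change the control's appearance at runtime.
    button.Content = "Clicked!";
    button.Background = new SolidColorBrush(Colors.LightGreen);
}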

Data Access Layer

Packt
09 Nov 2016
13 min read
In this article by Alexander Zaytsev, author of NHibernate 4.0 Cookbook, we will cover the following topics:

- Transaction Auto-wrapping for the data access layer
- Setting up an NHibernate repository
- Using Named Queries in the data access layer

Introduction

There are two styles of data access layer common in today's applications: repositories and Data Access Objects. In reality, the distinction between these two has become quite blurred, but in theory, it's something like this:

- A repository should act like an in-memory collection. Entities are added to and removed from the collection, and its contents can be enumerated. Queries are typically handled by sending query specifications to the repository.
- A DAO (Data Access Object) is simply an abstraction of an application's data access. Its purpose is to hide the implementation details of the database access from the consuming code.

The first recipe shows the beginnings of a typical data access object. The remaining recipes show how to set up a repository-based data access layer with NHibernate's various APIs.

Transaction Auto-wrapping for the data access layer

In this recipe, we'll show you how we can set up the data access layer to wrap all data access in NHibernate transactions automatically.

How to do it...

Create a new class library named Eg.Core.Data. Install NHibernate to Eg.Core.Data using the NuGet Package Manager Console. Add the following two DAO classes:

public class DataAccessObject<T, TId>
    where T : Entity<TId>
{
    private readonly ISessionFactory _sessionFactory;

    private ISession session
    {
        get { return _sessionFactory.GetCurrentSession(); }
    }

    public DataAccessObject(ISessionFactory sessionFactory)
    {
        _sessionFactory = sessionFactory;
    }

    public T Get(TId id)
    {
        return WithinTransaction(() => session.Get<T>(id));
    }

    public T Load(TId id)
    {
        return WithinTransaction(() => session.Load<T>(id));
    }

    public void Save(T entity)
    {
        WithinTransaction(() => session.SaveOrUpdate(entity));
    }

    public void Delete(T entity)
    {
        WithinTransaction(() => session.Delete(entity));
    }

    private TResult WithinTransaction<TResult>(Func<TResult> func)
    {
        if (!session.Transaction.IsActive)
        {
            // Wrap in transaction
            TResult result;
            using (var tx = session.BeginTransaction())
            {
                result = func.Invoke();
                tx.Commit();
            }
            return result;
        }
        // Don't wrap;
        return func.Invoke();
    }

    private void WithinTransaction(Action action)
    {
        WithinTransaction<bool>(() =>
        {
            action.Invoke();
            return false;
        });
    }
}

public class DataAccessObject<T> : DataAccessObject<T, Guid>
    where T : Entity
{
}

How it works...

NHibernate requires that all data access occurs inside an NHibernate transaction. Remember, the ambient transaction created by TransactionScope is not a substitute for an NHibernate transaction. This recipe shows an explicit approach to meeting that requirement.

To ensure that at least all our data access layer calls are wrapped in transactions, we create a private WithinTransaction method that accepts a delegate consisting of some data access methods, such as session.Save or session.Get. This WithinTransaction method first checks if the session has an active transaction. If it does, the delegate is invoked immediately. If it doesn't, a new NHibernate transaction is created, the delegate is invoked, and finally the transaction is committed. If the data access method throws an exception, the transaction will be rolled back automatically as the exception bubbles up through the using block.
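Consuming the DAO then looks like ordinary object access. Here is a minimal usage sketch, assuming a hypothetical Product entity (with a Guid id and a Name property, neither of which is part of the recipe) and an already configured ISessionFactory:

// Hypothetical entity and wiring, for illustration only.
var products = new DataAccessObject<Product>(sessionFactory);

var product = products.Get(productId); // wrapped in a transaction automatically
product.Name = "NHibernate 4.0 Cookbook";
products.Save(product);                // also wrapped, unless a transaction is already active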
There's more...

This transactional auto-wrapping can also be set up using SessionWrapper from the unofficial NHibernate AddIns project at https://bitbucket.org/fabiomaulo/unhaddins. This class wraps a standard NHibernate session. By default, it will throw an exception when the session is used without an NHibernate transaction. However, it can be configured to check for and create a transaction automatically, much in the same way I've shown you here.

See also

- Setting up an NHibernate repository

Setting up an NHibernate repository

Many developers prefer the repository pattern over data access objects. In this recipe, we'll show you how to set up the repository pattern with NHibernate.

How to do it...

Create a new, empty class library project named Eg.Core.Data. Add a reference to the Eg.Core project. Add the following IRepository interface:

public interface IRepository<T> : IEnumerable<T>
    where T : Entity
{
    void Add(T item);
    bool Contains(T item);
    int Count { get; }
    bool Remove(T item);
}

Create a new, empty class library project named Eg.Core.Data.Impl. Add references to the Eg.Core and Eg.Core.Data projects. Add a new abstract class named NHibernateBase using the following code:

public abstract class NHibernateBase
{
    protected readonly ISessionFactory _sessionFactory;

    protected virtual ISession session
    {
        get { return _sessionFactory.GetCurrentSession(); }
    }

    public NHibernateBase(ISessionFactory sessionFactory)
    {
        _sessionFactory = sessionFactory;
    }

    protected virtual TResult WithinTransaction<TResult>(Func<TResult> func)
    {
        if (!session.Transaction.IsActive)
        {
            // Wrap in transaction
            TResult result;
            using (var tx = session.BeginTransaction())
            {
                result = func.Invoke();
                tx.Commit();
            }
            return result;
        }
        // Don't wrap;
        return func.Invoke();
    }

    protected virtual void WithinTransaction(Action action)
    {
        WithinTransaction<bool>(() =>
        {
            action.Invoke();
            return false;
        });
    }
}

Add a new class named NHibernateRepository using the following code:

public class NHibernateRepository<T> : NHibernateBase, IRepository<T>
    where T : Entity
{
    public NHibernateRepository(ISessionFactory sessionFactory)
        : base(sessionFactory)
    {
    }

    public void Add(T item)
    {
        WithinTransaction(() => session.Save(item));
    }

    public bool Contains(T item)
    {
        if (item.Id == default(Guid))
            return false;
        return WithinTransaction(() => session.Get<T>(item.Id)) != null;
    }

    public int Count
    {
        get { return WithinTransaction(() => session.Query<T>().Count()); }
    }

    public bool Remove(T item)
    {
        WithinTransaction(() => session.Delete(item));
        return true;
    }

    public IEnumerator<T> GetEnumerator()
    {
        return WithinTransaction(() => session.Query<T>()
            .Take(1000).GetEnumerator());
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        return WithinTransaction(() => GetEnumerator());
    }
}

How it works...

The repository pattern, as explained at http://martinfowler.com/eaaCatalog/repository.html, has two key features:

- It behaves as an in-memory collection.
- Query specifications are submitted to the repository for satisfaction.

In this recipe, we are concerned only with the first feature, behaving as an in-memory collection. The remaining recipes in this article will build on this base and show various methods for satisfying the second point.

Because our repository should act like an in-memory collection, it makes sense that our IRepository<T> interface should resemble ICollection<T>. Our NHibernateBase class provides both contextual session management and the automatic transaction wrapping explained in the previous recipe. NHibernateRepository simply implements the members of IRepository<T>.
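In use, the repository reads like collection code. A short sketch, assuming the Eg.Core Book entity (its ISBN property appears in the Named Queries recipe below) and a configured ISessionFactory:

IRepository<Book> books = new NHibernateRepository<Book>(sessionFactory);

var book = new Book { ISBN = "12345" };
books.Add(book);                         // persisted inside an automatic transaction

Console.WriteLine(books.Count);          // runs session.Query<T>().Count() for us
Console.WriteLine(books.Contains(book)); // true once the entity has an id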
There's more...

The repository pattern reduces data access to its absolute simplest form, but this simplification comes with a price. We lose much of the power of NHibernate behind an abstraction layer. Our application must either do without even basic session methods like Merge, Refresh, and Load, or allow them to leak through the abstraction.

See also

- Transaction Auto-wrapping for the data access layer
- Using Named Queries in the data access layer

Using Named Queries in the data access layer

Named Queries encapsulated in query objects is a powerful combination. In this recipe, we'll show you how to use Named Queries with your data access layer.

Getting ready

To complete this recipe you will need Common Service Locator from Microsoft Patterns & Practices. The documentation and source code can be found at http://commonservicelocator.codeplex.com. Complete the previous recipe, Setting up an NHibernate repository. Include the Eg.Core.Data.Impl assembly as an additional mapping assembly in your test project's App.config with the following XML:

<mapping assembly="Eg.Core.Data.Impl"/>

How to do it...

In the Eg.Core.Data project, add a folder for the Queries namespace. Add the following IQuery interfaces:

public interface IQuery
{
}

public interface IQuery<TResult> : IQuery
{
    TResult Execute();
}

Add the following IQueryFactory interface:

public interface IQueryFactory
{
    TQuery CreateQuery<TQuery>() where TQuery : IQuery;
}

Change the IRepository interface to implement the IQueryFactory interface, as shown in the following code:

public interface IRepository<T> : IEnumerable<T>, IQueryFactory
    where T : Entity
{
    void Add(T item);
    bool Contains(T item);
    int Count { get; }
    bool Remove(T item);
}

In the Eg.Core.Data.Impl project, change the NHibernateRepository constructor and add the _queryFactory field, as shown in the following code:

private readonly IQueryFactory _queryFactory;

public NHibernateRepository(
    ISessionFactory sessionFactory,
    IQueryFactory queryFactory)
    : base(sessionFactory)
{
    _queryFactory = queryFactory;
}

Add the following method to NHibernateRepository:

public TQuery CreateQuery<TQuery>() where TQuery : IQuery
{
    return _queryFactory.CreateQuery<TQuery>();
}

In the Eg.Core.Data.Impl project, add a folder for the Queries namespace. Install Common Service Locator using the NuGet Package Manager Console, using the following command:
Install-Package CommonServiceLocator

To the Queries namespace, add this QueryFactory class:

public class QueryFactory : IQueryFactory
{
    private readonly IServiceLocator _serviceLocator;

    public QueryFactory(IServiceLocator serviceLocator)
    {
        _serviceLocator = serviceLocator;
    }

    public TQuery CreateQuery<TQuery>() where TQuery : IQuery
    {
        return _serviceLocator.GetInstance<TQuery>();
    }
}

Add the following NHibernateQueryBase class:

public abstract class NHibernateQueryBase<TResult>
    : NHibernateBase, IQuery<TResult>
{
    protected NHibernateQueryBase(ISessionFactory sessionFactory)
        : base(sessionFactory)
    {
    }

    public abstract TResult Execute();
}

Add an empty INamedQuery interface, as shown in the following code:

public interface INamedQuery
{
    string QueryName { get; }
}

Add a NamedQueryBase class, as shown in the following code:

public abstract class NamedQueryBase<TResult>
    : NHibernateQueryBase<TResult>, INamedQuery
{
    protected NamedQueryBase(ISessionFactory sessionFactory)
        : base(sessionFactory)
    {
    }

    public override TResult Execute()
    {
        var nhQuery = GetNamedQuery();
        return WithinTransaction(() => Execute(nhQuery));
    }

    protected abstract TResult Execute(IQuery query);

    protected virtual IQuery GetNamedQuery()
    {
        var nhQuery = session.GetNamedQuery(QueryName);
        SetParameters(nhQuery);
        return nhQuery;
    }

    protected abstract void SetParameters(IQuery nhQuery);

    public virtual string QueryName
    {
        get { return GetType().Name; }
    }
}

In Eg.Core.Data.Impl.Test, add a test fixture named QueryTests inherited from NHibernateFixture. Add the following test and three helper methods:

[Test]
public void NamedQueryCheck()
{
    var errors = new StringBuilder();

    var queryObjectTypes = GetNamedQueryObjectTypes();
    var mappedQueries = GetNamedQueryNames();

    foreach (var queryType in queryObjectTypes)
    {
        var query = GetQuery(queryType);
        if (!mappedQueries.Contains(query.QueryName))
        {
            errors.AppendFormat(
                "Query object {0} references non-existent " +
                "named query {1}.",
                queryType, query.QueryName);
            errors.AppendLine();
        }
    }

    if (errors.Length != 0)
        Assert.Fail(errors.ToString());
}

private IEnumerable<Type> GetNamedQueryObjectTypes()
{
    var namedQueryType = typeof(INamedQuery);
    var queryImplAssembly = typeof(BookWithISBN).Assembly;

    var types = from t in queryImplAssembly.GetTypes()
                where namedQueryType.IsAssignableFrom(t)
                      && t.IsClass
                      && !t.IsAbstract
                select t;
    return types;
}

private IEnumerable<string> GetNamedQueryNames()
{
    var nhCfg = NHConfigurator.Configuration;
    var mappedQueries = nhCfg.NamedQueries.Keys
        .Union(nhCfg.NamedSQLQueries.Keys);
    return mappedQueries;
}

private INamedQuery GetQuery(Type queryType)
{
    return (INamedQuery) Activator.CreateInstance(
        queryType,
        new object[] { SessionFactory });
}

For our example query, in the Queries namespace of Eg.Core.Data, add the following interface:

public interface IBookWithISBN : IQuery<Book>
{
    string ISBN { get; set; }
}

Add the implementation to the Queries namespace of Eg.Core.Data.Impl using the following code:

public class BookWithISBN : NamedQueryBase<Book>, IBookWithISBN
{
    public BookWithISBN(ISessionFactory sessionFactory)
        : base(sessionFactory)
    {
    }

    public string ISBN { get; set; }

    protected override void SetParameters(NHibernate.IQuery nhQuery)
    {
        nhQuery.SetParameter("isbn", ISBN);
    }

    protected override Book Execute(NHibernate.IQuery query)
    {
        return query.UniqueResult<Book>();
    }
}

Finally, add the embedded resource mapping, BookWithISBN.hbm.xml, to Eg.Core.Data.Impl with the following XML code:

<?xml version="1.0" encoding="utf-8" ?>
<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2">
  <query name="BookWithISBN">
    <![CDATA[ from Book b where b.ISBN = :isbn ]]>
  </query>
</hibernate-mapping>

How it works...

As we learned in the previous recipe, according to the repository pattern, the repository is responsible for fulfilling queries based on the specifications submitted to it. These specifications are limiting. They only concern themselves with whether a particular item matches the given criteria. They don't care for other necessary technical details, such as eager loading of children, batching, query caching, and so on. We need something more powerful than simple where clauses. We lose too much to the abstraction.

The query object pattern defines a query object as a group of criteria that can self-organize into a SQL query. The query object is not responsible for the execution of this SQL. This is handled elsewhere, by some generic query runner, perhaps inside the repository. While a query object can better express the different technical requirements, such as eager loading, batching, and query caching, a generic query runner can't easily implement those concerns for every possible query, especially across the half-dozen query APIs provided by NHibernate. These details about the execution are specific to each query, and should be handled by the query object. This enhanced query object pattern, as Fabio Maulo has named it, not only self-organizes into SQL but also executes the query, returning the results. In this way, the technical concerns of a query's execution are defined and cared for with the query itself, rather than spreading into some highly complex, generic query runner.

According to the abstraction we've built, the repository represents the collection of entities that we are querying. Since the two are already logically linked, if we allow the repository to build the query objects, we can add some context to our code. For example, suppose we have an application service that runs product queries. When we inject dependencies, we could specify IQueryFactory directly. This doesn't give us much information beyond "This service runs queries." If, however, we inject IRepository<Product>, we have a much better idea about what data the service is using.

The IQuery interface is simply a marker interface for our query objects. Besides advertising the purpose of our query objects, it allows us to easily identify them with reflection. The IQuery<TResult> interface is implemented by each query object. It specifies only the return type and a single method to execute the query. The IQueryFactory interface defines a service to create query objects. For the purpose of explanation, the implementation of this service, QueryFactory, is a simple service locator. IQueryFactory is used internally by the repository to instantiate query objects.

The NamedQueryBase class handles most of the plumbing for query objects based on named HQL and SQL queries. As a convention, the name of the query is the name of the query object type. That is, the underlying named query for BookWithISBN is also named BookWithISBN. Each individual query object must simply implement SetParameters and Execute(NHibernate.IQuery query), which usually consists of a simple call to query.List<SomeEntity>() or query.UniqueResult<SomeEntity>(). The INamedQuery interface both identifies the query objects based on Named Queries and provides access to the query name. The NamedQueryCheck test uses this to verify that each INamedQuery query object has a matching named query.
Each query has an interface. This interface is used to request the query object from the repository. It also defines any parameters used in the query. In this example, IBookWithISBN has a single string parameter, ISBN. The implementation of this query object sets the :isbn parameter on the internal NHibernate query, executes it, and returns the matching Book object. Finally, we also create a mapping containing the named query BookWithISBN, which is loaded into the configuration with the rest of our mappings.

The code used in the query object setup would look like the following code:

var query = bookRepository.CreateQuery<IBookWithISBN>();
query.ISBN = "12345";
var book = query.Execute();

See also

- Transaction Auto-wrapping for the data access layer
- Setting up an NHibernate repository

Summary

In this article, we learned how to set up transaction auto-wrapping for the data access layer, how to set up an NHibernate repository, and how to use Named Queries in the data access layer.

Introduction to Scala

Packt
01 Nov 2016
8 min read
In this article by Diego Pacheco, the author of the book Building Applications with Scala, we will cover the following topics:

- Writing a Scala Hello World program using the REPL
- Scala language – the basics
- Scala variables – var and val
- Creating immutable variables

Scala Hello World using the REPL

Let's get started. Go ahead, open your terminal, and type $ scala in order to open the Scala REPL. Once the REPL is open, you can just type "Hello World". By doing this, you are performing two operations – eval and print. The Scala REPL will create a variable called res0, store your string there, and then print the content of the res0 variable.

Scala REPL Hello World program:

$ scala
Welcome to Scala 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_77).
Type in expressions for evaluation. Or try :help.

scala> "Hello World"
res0: String = Hello World

scala>

Scala is a hybrid language, which means it is both object-oriented (OO) and functional. You can create classes and objects in Scala. Next, we will create a complete Hello World application using classes.

Scala OO Hello World program:

$ scala
Welcome to Scala 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_77).
Type in expressions for evaluation. Or try :help.

scala> object HelloWorld {
     |   def main(args: Array[String]) = println("Hello World")
     | }
defined object HelloWorld

scala> HelloWorld.main(null)
Hello World

scala>

First things first, you need to realize that we use the word object instead of class. The Scala language has different constructs compared with Java. An object is a singleton in Scala; it's the same as coding the Singleton pattern in Java. Next, we see the word def, which is used in Scala to create functions. In this program, we create the main function just as we do in Java, and we call the built-in function println in order to print the String Hello World.

Scala imports some Java objects and packages by default. Coding in Scala does not require you to type, for instance, System.out.println("Hello World"), but you can if you want to, as shown in the following:

$ scala
Welcome to Scala 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_77).
Type in expressions for evaluation. Or try :help.

scala> System.out.println("Hello World")
Hello World

scala>

We can and we will do better. Scala has some abstractions for a console application. We can write this code with fewer lines of code. To accomplish this goal, we need to extend the Scala class App. When we extend from App, we are performing inheritance, and we don't need to define the main function. We can just put all the code in the body of the class, which is very convenient, and which makes the code clean and simple to read.

Scala HelloWorld App in the Scala REPL:

$ scala
Welcome to Scala 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_77).
Type in expressions for evaluation. Or try :help.

scala> object HelloWorld extends App {
     |   println("Hello World")
     | }
defined object HelloWorld

scala> HelloWorld
object HelloWorld

scala> HelloWorld.main(null)
Hello World

scala>

After coding the HelloWorld object in the Scala REPL, we can ask the REPL what HelloWorld is, and, as you might realize, the REPL answers that HelloWorld is an object. This is a very convenient Scala way to code console applications, because we can have a Hello World application with just three lines of code. Sadly, the same program in Java requires way more code, as you will see in the next section.
Java is a great language for performance, but it is a verbose language compared with Scala.

Java Hello World application:

package scalabook.javacode.chap1;

public class HelloWorld {
    public static void main(String args[]) {
        System.out.println("Hello World");
    }
}

The Java application required six lines of code, while in Scala, we were able to do the same with 50% less code (three lines of code). This is a very simple application; when we are coding complex applications, the difference gets bigger, as a Scala application ends up with far less code than that of Java.

Remember that we use an object in Scala in order to have a singleton (a design pattern that makes sure you have just one instance of a class), and if we want to do the same in Java, the code would be something like this:

package scalabook.javacode.chap1;

public class HelloWorldSingleton {

    private HelloWorldSingleton() {}

    private static class SingletonHelper {
        private static final HelloWorldSingleton INSTANCE =
            new HelloWorldSingleton();
    }

    public static HelloWorldSingleton getInstance() {
        return SingletonHelper.INSTANCE;
    }

    public void sayHello() {
        System.out.println("Hello World");
    }

    public static void main(String[] args) {
        getInstance().sayHello();
    }
}

It's not just about the size of the code; it is all about consistency and the language providing more abstractions for you. If you write less code, you will have fewer bugs in your software at the end of the day.

Scala language – the basics

Scala is a statically typed language with a very expressive type system, which enforces abstractions in a safe yet coherent manner. All values in Scala are Java objects (except primitives, which are unboxed at runtime) because, at the end of the day, Scala runs on the Java JVM.

Scala enforces immutability as a core functional programming principle. This enforcement happens in multiple aspects of the Scala language, for instance, when you create a variable, you do it in an immutable way, and when you use a collection, you use an immutable collection. Scala also lets you use mutable variables and mutable structures, but it favors immutable ones by design.

Scala variables – var and val

When you are coding in Scala, you create variables using either the var operator or the val operator. The var operator allows you to create mutable states, which is fine as long as you make it local, stick to the core functional programming principles, and avoid mutable shared state.

Using var in the Scala REPL:

$ scala
Welcome to Scala 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_77).
Type in expressions for evaluation. Or try :help.

scala> var x = 10
x: Int = 10

scala> x
res0: Int = 10

scala> x = 11
x: Int = 11

scala> x
res1: Int = 11

scala>

However, Scala has a more interesting construct called val. Using the val operator makes your variables immutable, which means that you can't change their values after you set them. If you try to change the value of a val variable in Scala, the compiler will give you an error. As a Scala developer, you should use val as much as possible, because that's a good functional programming mindset, and it will make your programs better and more correct. In Scala, everything is an object; there are no primitives – the var and val rules apply for everything, be it Int, String, or even a class.
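The same immutability mindset extends to collections: a val reference cannot be reassigned, and an immutable List is never modified in place; operations such as map return new collections instead. A short REPL-style sketch (not from the book):

scala> val numbers = List(1, 2, 3)
numbers: List[Int] = List(1, 2, 3)

scala> val doubled = numbers.map(_ * 2)   // returns a new List; numbers is untouched
doubled: List[Int] = List(2, 4, 6)

scala> numbers
res0: List[Int] = List(1, 2, 3)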
Using val in the Scala REPL:

$ scala
Welcome to Scala 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_77).
Type in expressions for evaluation. Or try :help.

scala> val x = 10
x: Int = 10

scala> x
res0: Int = 10

scala> x = 11
<console>:12: error: reassignment to val
       x = 11
         ^

scala> x
res1: Int = 10

scala>

Creating immutable variables

Right. Now let's see how we can define the most common types in Scala, such as Int, Double, Boolean, and String. Remember that you can create these variables using val or var, depending on your requirement.

Scala variable types at the Scala REPL:

$ scala
Welcome to Scala 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_77).
Type in expressions for evaluation. Or try :help.

scala> val x = 10
x: Int = 10

scala> val y = 11.1
y: Double = 11.1

scala> val b = true
b: Boolean = true

scala> val f = false
f: Boolean = false

scala> val s = "A Simple String"
s: String = A Simple String

scala>

For these variables, we did not define the type. The Scala language figures it out for us. However, it is possible to specify the type if you want. In Scala, the type comes after the name of the variable, as shown in the following section.

Scala variables with explicit typing at the Scala REPL:

$ scala
Welcome to Scala 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_77).
Type in expressions for evaluation. Or try :help.

scala> val x: Int = 10
x: Int = 10

scala> val y: Double = 11.1
y: Double = 11.1

scala> val s: String = "My String "
s: String = "My String "

scala> val b: Boolean = true
b: Boolean = true

scala>

Summary

In this article, we learned some basic constructs and concepts of the Scala language, with functions, collections, and OO in Scala.

Applying Themes to Sails Applications, Part 2

Luis Lobo
14 Oct 2016
4 min read
In Part 1 of this series covering themes in the Sails Framework, we bootstrapped our sample Sails app (step 1). Here in Part 2, we will complete steps 2 and 3: compiling our theme's CSS and the necessary Less files, and setting up the theme Sails hook to complete our application.

Step 2 – Adding a task for compiling our theme's CSS and the necessary Less files

Let's pick things back up where we left off in Part 1. We now want to customize our page to have our burrito style, so we need to add a task that compiles our themes. Edit your /tasks/config/less.js so that it looks like this one:

module.exports = function (grunt) {
  grunt.config.set('less', {
    dev: {
      files: [{
        expand: true,
        cwd: 'assets/styles/',
        src: ['importer.less'],
        dest: '.tmp/public/styles/',
        ext: '.css'
      }, {
        expand: true,
        cwd: 'assets/themes/export',
        src: ['*.less'],
        dest: '.tmp/public/themes/',
        ext: '.css'
      }]
    }
  });

  grunt.loadNpmTasks('grunt-contrib-less');
};

Basically, we added a second object to the files section, which tells the Less compiler task to look for any Less file in assets/themes/export, compile it, and put the resulting CSS in the .tmp/public/themes folder. In case you were not aware of it, the .tmp/public folder is the one Sails uses to publish its assets.

We now create two themes: one is default.less and the other is burrito.less, which is based on default.less. We also have two other Less files, each one holding the variables for each theme. This technique allows you to have one base theme and many other themes based on the default.

/assets/themes/variables.less:

@app-navbar-background-color: red;
@app-navbar-brand-color: white;

/assets/themes/variablesBurrito.less:

@app-navbar-background-color: green;
@app-navbar-brand-color: yellow;

/assets/themes/export/default.less:

@import "../variables.less";

.navbar-inverse {
  background-color: @app-navbar-background-color;
  .navbar-brand {
    color: @app-navbar-brand-color;
  }
}

/assets/themes/export/burrito.less:

@import "default.less";
@import "../variablesBurrito.less";

So, burrito.less just inherits from default.less but overrides the variables with its own, creating a new theme based on the default. If you lift Sails now, you will notice that the navigation bar has a red background with white branding.
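Adding further themes is just a matter of repeating this pattern. For example, a hypothetical taco theme (the name and colors here are assumed, not from the original) only needs its own variables file plus an export file that overrides the defaults; the getTheme hook in the next step would then need a matching hostname check:

/assets/themes/variablesTaco.less:

@app-navbar-background-color: orange;
@app-navbar-brand-color: black;

/assets/themes/export/taco.less:

@import "default.less";
@import "../variablesTaco.less";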
Step 3 – Setting up the theme Sails hook

The last step involves creating a hook (a Node module that adds functionality to the Sails core) that catches the hostname and, if it has burrito in it, sets the new theme. First, let's create the folder for the hook:

mkdir -p ./api/hooks/theme

Now create a file named index.js in that folder with this content:

/**
 * theme hook - Sets the correct CSS to be displayed
 */
module.exports = function (sails) {
  return {
    routes: {
      before: {
        'all /*': function (req, res, next) {
          if (!req.isSocket) {
            // makes theme variable available in views
            res.locals.theme = sails.hooks.theme.getTheme(req);
          }
          return next();
        }
      }
    },
    /**
     * getTheme defines which css needs to be used for this request
     * In this case, we select the theme by pattern matching certain
     * words from the hostname
     */
    getTheme: function (req) {
      var hostname = 'default';
      var theme = 'default';
      try {
        hostname = req.get('host').toLowerCase();
      } catch (e) {
        // host may not be available always (ie, socket calls.
        // If you need that, add a Host header in your
        // sails socket configuration)
      }
      // if burrito is found on the hostname, change the theme
      if (hostname.indexOf('burrito') > -1) {
        theme = 'burrito';
      }
      return theme;
    }
  };
};

Finally, to test our configuration, we need to add host entries in our OS hosts file. In Linux/Unix-based operating systems, you have to edit /etc/hosts (with sudo or root). Add the following line:

127.0.0.1 burrito.smartdelivery.local www.smartdelivery.local

Now navigate using those host names, first to www.smartdelivery.local, and lastly to burrito.smartdelivery.local. You now have your Burrito Smart Delivery, and you have a themed Sails application!

I hope you have enjoyed this series. You can get the source code from here. Enjoy!

About the author

Luis Lobo Borobia is the CTO at FictionCity.NET, is a mentor and advisor, independent software engineer consultant, and conference speaker. He has a background as a software analyst and designer, creating, designing, and implementing software products, solutions, frameworks, and platforms for several kinds of industries. In the last few years, he has focused on research and development for the Internet of Things, using the latest bleeding-edge software and hardware technologies available.

How to Apply Themes to Sails Applications, Part 1

Luis Lobo
29 Sep 2016
8 min read
The Sails Framework is a popular MVC framework that is designed for building practical, production-ready Node.js apps. Themes customize the look and feel of your app, but Sails does not come with a configuration or setting for handling themes by itself. This two-part post shows one of the ways you can set up theming for your Sails application, making use of some of Sails' capabilities.

You may have an application that needs to handle theming for different reasons, like custom branding, licensing, dynamic theme configuration, and so on. You can adjust the theming of your application based on external factors, like patterns in the domain of the site you are browsing. Imagine you have an application that handles deliveries that you customize per client. Your app renders the default theme when browsed as http://www.smartdelivery.com, but when a customer, let's say "Burrito", visits it as http://burrito.smartdelivery.com, it renders that customer's theme.

In this series we make use of Less as our language to define our CSS. Sails already handles Less right out of the box. The default Less file is located in /assets/styles/importer.less. We will also use Bootstrap as our base CSS framework, importing its Less file into our importer.less file. The technique shown here consists of having a base CSS and a theme CSS that varies according to the host name.

Step 1 - Adding Bootstrap to Sails

We use Bower to add Bootstrap to our project. First, install it by issuing the following command:

npm install bower --save-dev

Then, initialize the Bower configuration file:

node_modules/bower/bin/bower init

This command allows us to configure our bower.json file. Answer the questions asked by bower:

? name sails-themed-application
? description Sails Themed Application
? main file app.js
? keywords
? authors lobo
? license MIT
? homepage
? set currently installed components as dependencies? Yes
? add commonly ignored files to ignore list? Yes
? would you like to mark this package as private which prevents it from being accidentally published to the registry? No

{
  name: 'sails-themed-application',
  description: 'Sails Themed Application',
  main: 'app.js',
  authors: [
    'lobo'
  ],
  license: 'MIT',
  homepage: '',
  ignore: [
    '**/.*',
    'node_modules',
    'bower_components',
    'assets/vendor',
    'test',
    'tests'
  ]
}

This generates a bower.json file in the root of your project. Now we need to tell bower to install everything in a specific directory. Create a file named .bowerrc and put this configuration into it:

{"directory" : "assets/vendor"}

Finally, install Bootstrap:

node_modules/bower/bin/bower install bootstrap --save --production

This action creates a folder in assets named vendor, with bootstrap inside of it.
Since Bootstrap uses JQuery, you also have a jquery folder:

├── api
│   ├── controllers
│   ├── models
│   ├── policies
│   ├── responses
│   └── services
├── assets
│   ├── images
│   ├── js
│   │   └── dependencies
│   ├── styles
│   ├── templates
│   ├── themes
│   └── vendor
│       ├── bootstrap
│       │   ├── dist
│       │   │   ├── css
│       │   │   ├── fonts
│       │   │   └── js
│       │   ├── fonts
│       │   ├── grunt
│       │   ├── js
│       │   ├── less
│       │   │   └── mixins
│       │   └── nuget
│       └── jquery
│           ├── dist
│           ├── external
│           │   └── sizzle
│           │       └── dist
│           └── src
│               ├── ajax
│               │   └── var
│               ├── attributes
│               ├── core
│               │   └── var
│               ├── css
│               │   └── var
│               ├── data
│               │   └── var
│               ├── effects
│               ├── event
│               ├── exports
│               ├── manipulation
│               │   └── var
│               ├── queue
│               ├── traversing
│               │   └── var
│               └── var
├── config
│   ├── env
│   └── locales
├── tasks
│   ├── config
│   └── register
└── views

We now need to add Bootstrap into our importer. Edit /assets/styles/importer.less and add this instruction at the end of it:

@import "../vendor/bootstrap/less/bootstrap.less";

Now you need to tell Sails where to import Bootstrap and JQuery JavaScript files from. Edit /tasks/pipeline.js and add the following code after it loads the sails.io.js file:

// Load sails.io before everything else
'js/dependencies/sails.io.js',

// <ADD THESE LINES>
// JQuery JS
'vendor/jquery/dist/jquery.min.js',
// Bootstrap JS
'vendor/bootstrap/dist/js/bootstrap.min.js',
// </ADD THESE LINES>

Now you have to edit your views layout and pages to use the Bootstrap style. In this series I created an application from scratch, so I have the default views and layouts. In your layout, insert the following line in the head, after the styles section:

<link rel="stylesheet" href="/themes/<%= typeof theme == 'undefined' ? 'default' : theme %>.css">

This loads a second CSS file, which defaults to /themes/default.css, into your views. As a sample, here are the /views/layout.ejs and /views/homepage.ejs I changed (the text under the headings is random text):

/views/layout.ejs:

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <meta http-equiv="X-UA-Compatible" content="IE=edge">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <!-- The above 3 meta tags *must* come first in the head; any other head content must come *after* these tags -->
  <title><%= typeof title == 'undefined' ? 'Sails Themed Application' : title %></title>

  <!--STYLES-->
  <link rel="stylesheet" href="/styles/importer.css">
  <!--STYLES END-->

  <!-- THIS IS WHERE THE THEME CSS IS LOADED -->
  <link rel="stylesheet" href="/themes/<%= typeof theme == 'undefined' ? 'default' : theme %>.css">
</head>
<body>
  <%- body %>

  <!--TEMPLATES-->
  <!--TEMPLATES END-->

  <!--SCRIPTS-->
  <script src="/js/dependencies/sails.io.js"></script>
  <script src="/vendor/jquery/dist/jquery.min.js"></script>
  <script src="/vendor/bootstrap/dist/js/bootstrap.min.js"></script>
  <!--SCRIPTS END-->
</body>
</html>

Notice the lines after the <!--STYLES END--> tag.
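The theme variable in that link is an ordinary view local, with 'default' as the fallback when it is undefined. The hook built in Part 2 populates it for every request; until then, a view rendered from a controller could supply it explicitly. A hypothetical sketch (controller name and action assumed, not part of this recipe):

// api/controllers/HomeController.js (hypothetical)
module.exports = {
  index: function (req, res) {
    // 'theme' becomes available to layout.ejs as a view local
    return res.view('homepage', { theme: 'default' });
  }
};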
/views/homepage.ejs:

<nav class="navbar navbar-inverse navbar-fixed-top">
  <div class="container">
    <div class="navbar-header">
      <button type="button" class="navbar-toggle collapsed" data-toggle="collapse"
              data-target="#navbar" aria-expanded="false" aria-controls="navbar">
        <span class="sr-only">Toggle navigation</span>
        <span class="icon-bar"></span>
        <span class="icon-bar"></span>
        <span class="icon-bar"></span>
      </button>
      <a class="navbar-brand" href="#">Project name</a>
    </div>
    <div id="navbar" class="navbar-collapse collapse">
      <form class="navbar-form navbar-right">
        <div class="form-group">
          <input type="text" placeholder="Email" class="form-control">
        </div>
        <div class="form-group">
          <input type="password" placeholder="Password" class="form-control">
        </div>
        <button type="submit" class="btn btn-success">Sign in</button>
      </form>
    </div><!--/.navbar-collapse -->
  </div>
</nav>

<!-- Main jumbotron for a primary marketing message or call to action -->
<div class="jumbotron">
  <div class="container">
    <h1>Hello, world!</h1>
    <p>This is a template for a simple marketing or informational website. It includes a large callout called a jumbotron and three supporting pieces of content. Use it as a starting point to create something more unique.</p>
    <p><a class="btn btn-primary btn-lg" href="#" role="button">Learn more &raquo;</a></p>
  </div>
</div>

<div class="container">
  <!-- Example row of columns -->
  <div class="row">
    <div class="col-md-4">
      <h2>Heading</h2>
      <p>Donec id elit non mi porta gravida at eget metus. Fusce dapibus, tellus ac cursus commodo, tortor mauris condimentum nibh, ut fermentum massa justo sit amet risus. Etiam porta sem malesuada magna mollis euismod. Donec sed odio dui. </p>
      <p><a class="btn btn-default" href="#" role="button">View details &raquo;</a></p>
    </div>
    <div class="col-md-4">
      <h2>Heading</h2>
      <p>Donec id elit non mi porta gravida at eget metus. Fusce dapibus, tellus ac cursus commodo, tortor mauris condimentum nibh, ut fermentum massa justo sit amet risus. Etiam porta sem malesuada magna mollis euismod. Donec sed odio dui. </p>
      <p><a class="btn btn-default" href="#" role="button">View details &raquo;</a></p>
    </div>
    <div class="col-md-4">
      <h2>Heading</h2>
      <p>Donec sed odio dui. Cras justo odio, dapibus ac facilisis in, egestas eget quam. Vestibulum id ligula porta felis euismod semper. Fusce dapibus, tellus ac cursus commodo, tortor mauris condimentum nibh, ut fermentum massa justo sit amet risus.</p>
      <p><a class="btn btn-default" href="#" role="button">View details &raquo;</a></p>
    </div>
  </div>

  <hr>

  <footer>
    <p>&copy; 2015 Company, Inc.</p>
  </footer>
</div> <!-- /container -->

You can now lift Sails and see your Bootstrapped Sails application.

Now that we have our Bootstrapped Sails app set up, in Part 2 we will compile our theme's CSS and the necessary Less files, and we will set up the theme Sails hook to complete our application.

About the author

Luis Lobo Borobia is the CTO at FictionCity.NET, is a mentor and advisor, independent software engineer consultant, and conference speaker. He has a background as a software analyst and designer, creating, designing, and implementing software products and solutions, frameworks, and platforms for several kinds of industries. In the last few years, he has focused on research and development for the Internet of Things, using the latest bleeding-edge software and hardware technologies available.

How to Add Frameworks to iOS Applications with Carthage

Fabrizio Brancati
27 Sep 2016
5 min read
With the advent of iOS 8, Apple allowed the option of creating dynamic frameworks. In this post, you will learn how to create a dynamic framework from the ground up, and you will use Carthage to add frameworks to your apps. Let's get started!

Creating the Xcode project

Open Xcode and create a new project. Select Framework & Library under the iOS menu from the templates, and then Cocoa Touch Framework. Type a name for your framework and select Swift for the language.

Now we will create a framework that helps to store data using NSUserDefaults. We can name it DataStore, which is a generic name, in case we want to expand it in the future to allow for the use of other data stores such as CoreData. The project is now empty, and you have to add your first class, so add a new Swift file and name it DataStore, like the framework name. You need to create the class:

public enum DataStoreType {
    case UserDefaults
}

public class DataStore {
    private init() {}

    public static func save(data: AnyObject, forKey key: String, in store: DataStoreType) {
        switch store {
        case .UserDefaults:
            NSUserDefaults.standardUserDefaults().setObject(data, forKey: key)
        }
    }

    public static func read(forKey key: String, in store: DataStoreType) -> AnyObject? {
        switch store {
        case .UserDefaults:
            return NSUserDefaults.standardUserDefaults().objectForKey(key)
        }
    }

    public static func delete(forKey key: String, in store: DataStoreType) {
        switch store {
        case .UserDefaults:
            NSUserDefaults.standardUserDefaults().removeObjectForKey(key)
        }
    }
}

Here we have created a DataStoreType enum to allow the expand feature in the future, and the DataStore class with the functions to save, read, and delete. That's it! You have just created the framework!

How to use the framework

To use the created framework, build it with CMD + B, right-click on the framework in the Products folder in the Xcode project, and click on Show in Finder. To use it, you must drag and drop this file into your project. In this case, we will create an example project to show you how to do it. Add the framework to your App project by adding it in the Embedded Binaries section in the General page of the Xcode project. Note that if you see it duplicated in the Linked Frameworks and Libraries section, you can remove the first one.

You have just included your framework in the App. Now we have to use it, so import it (I will import it in the ViewController class for test purposes, but you can include it wherever you want). Let's use the DataStore framework by saving and reading a String from the NSUserDefaults. This is the code:

import UIKit
import DataStore

class ViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view, typically from a nib.

        DataStore.save("Test", forKey: "Test", in: .UserDefaults)
        print(DataStore.read(forKey: "Test", in: .UserDefaults)!)
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }
}

Build the App and see the framework do its work! You should see this in the Xcode console:

Test

Now you have created a framework in Swift and you have used it with an App!
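The example exercises save and read; delete follows the same pattern. A quick sketch using the same DataStore API as above:

DataStore.save("Test", forKey: "Test", in: .UserDefaults)
DataStore.delete(forKey: "Test", in: .UserDefaults)

// read now returns nil, since the key was removed
print(DataStore.read(forKey: "Test", in: .UserDefaults))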
Note that the framework created for the iOS Simulator is different from the one created for a device, because it is built for a different architecture. To build a universal framework, you can use Carthage, which is shown in the next section.

Using Carthage

Carthage is a decentralized dependency manager that builds your dependencies and provides you with binary frameworks. To install it, you can download the Carthage.pkg file from GitHub or use Homebrew:

brew update
brew install carthage

Because Carthage is only able to build a framework from Git, we will use Alamofire, a popular HTTP networking library available on GitHub. Open the project folder and create a file named Cartfile. Here is where we tell Carthage what it has to build and in what version:

github "Alamofire/Alamofire"

We don't specify a version because this is only a test, but pinning one is good practice. Now open the Terminal app, go into the project folder, and type:

carthage update

You should see Carthage do some things, and when it has finished, go to the project folder with Finder, then Carthage, Build, iOS; that is where the framework is. To add it to the App, we have to do more work than what we have done before. Drag and drop the framework from the Carthage/Build/iOS folder into the Linked Frameworks and Libraries section on the General settings tab of the Xcode project. On the Build Phases tab, click on the + icon and choose New Run Script Phase with the following script:

/usr/local/bin/carthage copy-frameworks

Now you can add the paths of the frameworks under Input Files, which in this case is:

$(SRCROOT)/FrameworkTest/Carthage/Build/iOS/Alamofire.framework

This script works around an App Store submission bug triggered by universal binaries and ensures that the necessary bitcode-related files and dSYMs are copied when archiving. Now you only have to import the frameworks in your Swift file and use them like we did earlier in this post!

Summary

In this post, you learned how to create a custom framework for creating shared code between your apps, along with the creation of a GitHub repository to share your open source framework with the community of developers. You also learned how to use Carthage for your GitHub repository, or with a popular framework like Alamofire, and how to import it in your apps.

About the author

Fabrizio Brancati is a mobile app developer and web developer currently working and living in Milan, Italy, with a passion for innovation and discovering new things. He develops with Objective-C for iOS 3 and iPod touch. When Swift came out, he learned it and was so excited that he remade an Objective-C framework available on GitHub in Swift (BFKit / BFKit-Swift). Software development is his driving passion, and he loves when others make use of his software.

How to add Unit Tests to a Sails Framework Application

Luis Lobo
26 Sep 2016
8 min read
There are different ways to implement unit tests for a Node.js application. Most of them use Mocha for their test framework, Chai as the assertion library, and some of them include Istanbul for code coverage. We will be using those tools, not entering into deep detail on how to use them, but rather on how to successfully configure and implement them for a Sails project.

1) Creating a new application from scratch (if you don't have one already)

First of all, let's create a Sails application from scratch. The Sails version in use for this article is 0.12.3. If you already have a Sails application, then you can continue to step 2. Issuing the following command creates the new application:

$ sails new sails-test-article

Once we create it, we will have the following file structure:

./sails-test-article
├── api
│   ├── controllers
│   ├── models
│   ├── policies
│   ├── responses
│   └── services
├── assets
│   ├── images
│   ├── js
│   │   └── dependencies
│   ├── styles
│   └── templates
├── config
│   ├── env
│   └── locales
├── tasks
│   ├── config
│   └── register
└── views

2) Create a basic test structure

We want a folder structure that contains all our tests. For now we will only add unit tests. In this project we want to test only services and controllers.

Add the necessary modules:

npm install --save-dev mocha chai istanbul supertest

Folder structure

Let's create the test folder structure that supports our tests:

mkdir -p test/fixtures test/helpers test/unit/controllers test/unit/services

After the creation of the folders, we will have this structure:

./sails-test-article
├── api
[...]
├── test
│   ├── fixtures
│   ├── helpers
│   └── unit
│       ├── controllers
│       └── services
└── views

We now create a mocha.opts file inside the test folder. It contains mocha options, such as a timeout per test run, that will be passed by default to mocha every time it runs. One option per line, as described in mocha opts:

--require chai
--reporter spec
--recursive
--ui bdd
--globals sails
--timeout 5s
--slow 2000

Up to this point, we have all our tools set up. We can do a very basic test run:

mocha test

It prints out this:

0 passing (2ms)

Normally, Node.js applications define a test script in the package.json file. Edit it so that it now looks like this:

"scripts": {
  "debug": "node debug app.js",
  "start": "node app.js",
  "test": "mocha test"
}

We are ready for the next step.

3) Bootstrap file

The bootstrap.js file is the one that defines the environment that all tests use. Inside it, we define before and after events. In them, we are starting and stopping (or 'lifting' and 'lowering' in Sails language) our Sails application. Since Sails makes models, controllers, and services globally available at runtime, we need to start them here.

var sails = require('sails');
var _ = require('lodash');

global.chai = require('chai');
global.should = chai.should();

before(function (done) {
  // Increase the Mocha timeout so that Sails has enough time to lift.
  this.timeout(5000);

  sails.lift({
    log: {
      level: 'silent'
    },
    hooks: {
      grunt: false
    },
    models: {
      connection: 'unitTestConnection',
      migrate: 'drop'
    },
    connections: {
      unitTestConnection: {
        adapter: 'sails-disk'
      }
    }
  }, function (err, server) {
    if (err) return done(err);
    // here you can load fixtures, etc.
    done(err, sails);
  });
});

after(function (done) {
  // here you can clear fixtures, etc.
  if (sails && _.isFunction(sails.lower)) {
    sails.lower(done);
  }
});

This file will be required in each of our tests. That way, each test can individually be run if needed, or run as a whole.
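The test/helpers folder created above is a natural place for fixture utilities that individual specs can call explicitly. A minimal sketch (a hypothetical helper, assuming the Post model defined in the next section):

// test/helpers/fixtures.js (hypothetical helper)
module.exports = {
  createPosts: function (count) {
    var posts = [];
    for (var i = 0; i < count; i++) {
      posts.push({ title: 'Post ' + i, body: 'Body ' + i });
    }
    // Post is globally available once Sails has lifted;
    // create() with an array returns a Waterline promise
    return Post.create(posts);
  }
};

A spec could then seed data in its before hook with require('../../helpers/fixtures').createPosts(3).then(function () { done(); }).catch(done);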
4) Services tests

We now add two models and one service to show how to test services.

Create a Comment model in /api/models/Comment.js:

/**
 * Comment.js
 */
module.exports = {
  attributes: {
    comment: {type: 'string'},
    timestamp: {type: 'datetime'}
  }
};

Create a Post model in /api/models/Post.js:

/**
 * Post.js
 */
module.exports = {
  attributes: {
    title: {type: 'string'},
    body: {type: 'string'},
    timestamp: {type: 'datetime'},
    comments: {model: 'Comment'}
  }
};

Create a Post service in /api/services/PostService.js:

/**
 * PostService
 *
 * @description :: Service that handles posts
 */
module.exports = {
  getPostsWithComments: function () {
    return Post
      .find()
      .populate('comments');
  }
};

To test the Post service, we need to create a test for it in /test/unit/services/PostService.spec.js. In the case of services, we want to test business logic. So basically, you call your service methods and evaluate the results using an assertion library. In this case, we are using Chai's should. Note that the before hook chains the three Post.create calls sequentially, so that all three records exist before any test runs:

/* global PostService */

// Here is where we init our 'sails' environment and application
require('../../bootstrap');

// Here we have our tests
describe('The PostService', function () {

  before(function (done) {
    Post.create({})
      .then(function () { return Post.create({}); })
      .then(function () { return Post.create({}); })
      .then(function () { done(); })
      .catch(done);
  });

  it('should return all posts with their comments', function (done) {
    PostService
      .getPostsWithComments()
      .then(function (posts) {
        posts.should.be.an('array');
        posts.should.have.length(3);
        done();
      })
      .catch(done);
  });
});

We can now test our service by running:

npm test

The result should be similar to this one:

> sails-test-article@0.0.0 test /home/lobo/dev/luislobo/sails-test-article
> mocha test

  The PostService
    ✓ should return all posts with their comments

  1 passing (979ms)

5) Controllers tests

In the case of controllers, we want to validate that our requests are working, and that they return the correct error codes and the correct data. For this, we make use of the SuperTest module, which provides HTTP assertions.
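As a quick standalone illustration of SuperTest (a hedged sketch of our own, separate from the specs that follow), an HTTP assertion against the lifted Sails server looks like this:

var supertest = require('supertest');

// Issue a GET against the running Sails HTTP server (available once
// the bootstrap has lifted the app) and assert on the response status.
supertest(sails.hooks.http.app)
  .get('/post')
  .expect(200)
  .end(function (err, res) {
    if (err) throw err;
    console.log(res.body); // the JSON payload returned by the blueprint route
  });

The same pattern (agent, request, expectations, end callback) is what we use in the controller specs below.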
We now add a Post controller with this content in /api/controllers/PostController.js:

/**
 * PostController
 */
module.exports = {
  getPostsWithComments: function (req, res) {
    PostService.getPostsWithComments()
      .then(function (posts) {
        res.ok(posts);
      })
      .catch(res.negotiate);
  }
};

And now we create a Post controller test in /test/unit/controllers/PostController.spec.js:

// Here is where we init our 'sails' environment and application
var supertest = require('supertest');
require('../../bootstrap');

describe('The PostController', function () {

  var createdPostId = 0;

  it('should create a post', function (done) {
    var agent = supertest.agent(sails.hooks.http.app);
    agent
      .post('/post')
      .set('Accept', 'application/json')
      .send({"title": "a post", "body": "some body"})
      .expect('Content-Type', /json/)
      .expect(201)
      .end(function (err, result) {
        if (err) {
          done(err);
        } else {
          result.body.should.be.an('object');
          result.body.should.have.property('id');
          result.body.should.have.property('title', 'a post');
          result.body.should.have.property('body', 'some body');
          createdPostId = result.body.id;
          done();
        }
      });
  });

  it('should get posts with comments', function (done) {
    var agent = supertest.agent(sails.hooks.http.app);
    agent
      .get('/post/getPostsWithComments')
      .set('Accept', 'application/json')
      .expect('Content-Type', /json/)
      .expect(200)
      .end(function (err, result) {
        if (err) {
          done(err);
        } else {
          result.body.should.be.an('array');
          result.body.should.have.length(1);
          done();
        }
      });
  });

  it('should delete post created', function (done) {
    var agent = supertest.agent(sails.hooks.http.app);
    agent
      .delete('/post/' + createdPostId)
      .set('Accept', 'application/json')
      .expect('Content-Type', /json/)
      .expect(200)
      .end(function (err, result) {
        if (err) {
          return done(err);
        } else {
          return done(null, result.text);
        }
      });
  });
});

After running the tests again:

npm test

We can see that now we have 4 tests:

> sails-test-article@0.0.0 test /home/lobo/dev/luislobo/sails-test-article
> mocha test

  The PostController
    ✓ should create a post
    ✓ should get posts with comments
    ✓ should delete post created
  The PostService
    ✓ should return all posts with their comments

  4 passing (1s)

6) Code Coverage

Finally, we want to know whether our code is covered by our unit tests, with the help of Istanbul. To generate a report, we just need to run:

istanbul cover _mocha test

Once we run it, we will have a result similar to this one:

  The PostController
    ✓ should create a post
    ✓ should get posts with comments
    ✓ should delete post created
  The PostService
    ✓ should return all posts with their comments

  4 passing (1s)

=============================================================================
Writing coverage object [/home/lobo/dev/luislobo/sails-test-article/coverage/coverage.json]
Writing coverage reports at [/home/lobo/dev/luislobo/sails-test-article/coverage]
=============================================================================

=============================== Coverage summary ===============================
Statements   : 26.95% ( 45/167 )
Branches     : 3.28% ( 4/122 )
Functions    : 35.29% ( 6/17 )
Lines        : 26.95% ( 45/167 )
================================================================================

In this case, we can see that the percentages are not very nice. We don't have to worry much about these, since most of the "not covered" code is in /api/policies and /api/responses. You can check that result in a file that was created after Istanbul ran, in ./coverage/lcov-report/index.html.
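As the suite grows, a couple of extra npm script shortcuts can save typing. The entries below are a suggestion of our own, not something Sails requires; the coverage script simply wraps the Istanbul command used above, and classic Mocha still picks up test/mocha.opts when pointed at a subfolder:

"scripts": {
  "debug": "node debug app.js",
  "start": "node app.js",
  "test": "mocha test",
  "test:services": "mocha test/unit/services",
  "test:controllers": "mocha test/unit/controllers",
  "coverage": "istanbul cover _mocha test"
}

With these in place, npm run test:controllers runs only the controller specs, and npm run coverage regenerates the coverage report.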
If you remove the /api/policies and /api/responses folders and run the coverage again, you will see the difference:

rm -rf api/policies api/responses
istanbul cover _mocha test

Now the result is much better: 100% coverage!

  The PostController
    ✓ should create a post
    ✓ should get posts with comments
    ✓ should delete post created
  The PostService
    ✓ should return all posts with their comments

  4 passing (1s)

=============================================================================
Writing coverage object [/home/lobo/dev/luislobo/sails-test-article/coverage/coverage.json]
Writing coverage reports at [/home/lobo/dev/luislobo/sails-test-article/coverage]
=============================================================================

=============================== Coverage summary ===============================
Statements   : 100% ( 24/24 )
Branches     : 100% ( 0/0 )
Functions    : 100% ( 4/4 )
Lines        : 100% ( 24/24 )
================================================================================

Now if you check the report again, you will see a different picture: the coverage report shows full coverage across the board.

You can get the source code for each of the steps here. I hope you enjoyed the post!

Reference

Sails documentation on Testing your code. This article follows recommendations from the Sails author, Mike McNeil, and adds some extra material based on my own experience developing applications using the Sails framework.

About the author

Luis Lobo Borobia is the CTO at FictionCity.NET, mentor and advisor, independent software engineer, consultant, and conference speaker. He has a background as a software analyst and designer, creating, designing, and implementing software products and solutions, frameworks, and platforms for several kinds of industries. In the last few years, he has focused on research and development for the Internet of Things, using the latest bleeding-edge software and hardware technologies available.
Using Web API to Extend Your Application

Packt
08 Sep 2016
14 min read
In this article by Shahed Chowdhuri, author of the book ASP.NET Core Essentials, we will work through a working sample of a web API project. During this lesson, we will cover the following:

Web API
Web API configuration
Web API routes
Consuming Web API applications

(For more resources related to this topic, see here.)

Understanding a web API

Building web applications can be a rewarding experience. The satisfaction of reaching a broad set of potential users can trump the frustrating nights spent fine-tuning an application and fixing bugs. But some mobile users demand a more streamlined experience that only a native mobile app can provide. Mobile browsers may experience performance issues in low-bandwidth situations, where HTML5 applications can only go so far with a heavy server-side back-end. Enter web API, with its RESTful endpoints, built with mobile-friendly server-side code.

The case for web APIs

In order to create a piece of software, years of wisdom tell us that we should build software with users in mind. Without use cases, its features are literally useless. By designing features around user stories, it makes sense to reveal public endpoints that relate directly to user actions. As a result, you will end up with a leaner web application that works for more users. If you need more convincing, here's a recap of features and benefits:

It lets you build modern lightweight web services, which are a great choice for your application, as long as you don't need SOAP
It's easier to work with than any past work you may have done with ASP.NET Windows Communication Foundation (WCF) services
It supports RESTful endpoints
It's great for a variety of clients, both mobile and web
It's unified with ASP.NET MVC and can be included with/without your web application

Creating a new web API project from scratch

Let's build a sample web application named Patient Records. In this application, we will create a web API from scratch to allow the following tasks:

Add a new patient
Edit an existing patient
Delete an existing patient
View a specific patient or a list of patients

These four actions make up the so-called CRUD operations of our system: to Create, Read, Update, or Delete patient records. Following the steps below, we will create a new project in Visual Studio 2015:

1. Create a new web API project.
2. Add an API controller.
3. Add methods for CRUD operations.

The preceding steps have been expanded into detailed instructions with the following screenshots:

1. In Visual Studio 2015, click File | New | Project. You can also press Ctrl+Shift+N on your keyboard.
2. On the left panel, locate the Web node below Visual C#, then select ASP.NET Core Web Application (.NET Core), as shown in the following screenshot:
3. With this project template selected, type in a name for your project, for example PatientRecordsApi, and choose a location on your computer, as shown in the following screenshot:
4. Optionally, you may select the checkboxes on the lower right to create a directory for your solution file and/or add your new project to source control. Click OK to proceed.
5. In the dialog that follows, select Empty from the list of the ASP.NET Core Templates, then click OK, as shown in the following screenshot:
6. Optionally, you can check the checkbox for Microsoft Azure to host your project in the cloud. Click OK to proceed.

Building your web API project

In the Solution Explorer, you may observe that your References are being restored.
This occurs every time you create a new project or add new references to your project that have to be restored through NuGet, as shown in the following screenshot:

Follow these steps to fix your references and build your web API project:

1. Right-click on your project, and click Add | New Folder to add a new folder, as shown in the following screenshot:
2. Perform the preceding step three times to create new folders for your Controllers, Models, and Views, as shown in the following screenshot:
3. Right-click on your Controllers folder, then click Add | New Item to create a new API controller for patient records on your system, as shown in the following screenshot:
4. In the dialog box that appears, choose Web API Controller Class from the list of options under .NET Core, as shown in the following screenshot:
5. Name your new API controller, for example PatientController.cs, then click Add to proceed.
6. In your new PatientController, you will most likely have several areas highlighted with red squiggly lines due to a lack of necessary dependencies, as shown in the following screenshot. As a result, you won't be able to build your project/solution at this time.

In the next section, we will learn how to configure your web API so that it has the proper references and dependencies in its configuration files.

Configuring the web API in your web application

How does the web server know what to send to the browser when a specific URL is requested? The answer lies in the configuration of your web API project.

Setting up dependencies

In this section, we will learn how to set up your dependencies automatically using the IDE, or manually by editing your project's configuration file. To pull in the necessary dependencies, you may right-click on the using statement for Microsoft.AspNet.Mvc and select Quick Actions and Refactorings…. This can also be triggered by pressing Ctrl+. (period) on your keyboard, or simply by hovering over the underlined term, as shown in the following screenshot:

Visual Studio should offer you several possible options, from which you can select the one that adds the package Microsoft.AspNetCore.Mvc.Core for the namespace Microsoft.AspNetCore.Mvc. For the Controller class, add a reference for the Microsoft.AspNetCore.Mvc.ViewFeatures package, as shown in the following screenshot:

Fig 12: Adding the Microsoft.AspNetCore.Mvc.Core 1.0.0 package

If you select the latest version that's available, this should update your references and remove the red squiggly lines, as shown in the following screenshot:

Fig 13: Updating your references and removing the red squiggly lines

The preceding step should automatically update your project.json file with the correct dependencies for Microsoft.AspNetCore.Mvc.Core and Microsoft.AspNetCore.Mvc.ViewFeatures, as shown in the following screenshot:

The "frameworks" section of the project.json file identifies the type and version of the .NET Framework that your web app is using, for example netcoreapp1.0 for the 1.0 version of .NET Core. You will see something similar in your project, as shown in the following screenshot:

Click the Build Solution button from the top menu/toolbar. Depending on how you have your shortcuts set up, you may press Ctrl+Shift+B or press F6 on your keyboard to build the solution.
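For reference, the relevant portion of the updated project.json might look like the following trimmed sketch. It is based only on the packages and framework named above; the exact version numbers and the other entries in your file may differ:

{
  "dependencies": {
    "Microsoft.AspNetCore.Mvc.Core": "1.0.0",
    "Microsoft.AspNetCore.Mvc.ViewFeatures": "1.0.0"
  },
  "frameworks": {
    "netcoreapp1.0": { }
  }
}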
You should now be able to build your project/solution without errors, as shown in the following screenshot:

Before running the web API project, open the Startup.cs class file, and replace the app.Run() statement/block (along with its contents) with a call to app.UseMvc() in the Configure() method. To add MVC to the project, add a call to services.AddMvcCore() in the ConfigureServices() method. To allow this code to compile, add a reference to Microsoft.AspNetCore.Mvc.

Parts of a web API project

Let's take a closer look at the PatientController class. The auto-generated class has the following methods:

public IEnumerable<string> Get()
public string Get(int id)
public void Post([FromBody]string value)
public void Put(int id, [FromBody]string value)
public void Delete(int id)

The Get() method simply returns a JSON object as an enumerable string of values, while the Get(int id) method is an overloaded variant that gets a particular value for a specified ID. The Post() and Put() methods can be used for creating and updating entities. Note that the Put() method takes in an ID value as the first parameter so that it knows which entity to update. Finally, we have the Delete() method, which can be used to delete an entity using the specified ID.

Running the web API project

You may run the web API project in a web browser that can display JSON data. If you use Google Chrome, I would suggest using the JSONView extension (or another similar extension) to properly display JSON data. The aforementioned extension is also available on GitHub at the following URL: https://github.com/gildas-lormeau/JSONView-for-Chrome

If you use Microsoft Edge, you can view the raw JSON data directly in the browser. Once your browser is ready, you can select your browser of choice from the top toolbar of Visual Studio. Click on the tiny triangle icon next to the Debug button, then select a browser, as shown in the following screenshot:

In the preceding screenshot, you can see that multiple installed browsers are available, including Firefox, Google Chrome, Internet Explorer, and Edge. To choose a different browser, simply click on Browse With… in the menu to select a different one.

Now, click the Debug button (that is, the green play button) to see the web API project in action in your web browser, as shown in the following screenshot. If you don't have a web application set up, you won't be able to browse the site from the root URL:

Don't worry if you see this error; you can update the URL to include a path to your API controller, for example http://localhost:12345/api/Patient. Note that your port number may vary.

Now, you should be able to see the list of values returned by your API controller, as shown in the following screenshot:

Adding routes to handle anticipated URL paths

Back in the days of classic ASP, application URL paths typically reflected physical file paths. This continued with ASP.NET web forms, even though the concept of custom URL routing was introduced. With ASP.NET MVC, routes were designed to cater to functionality rather than physical paths. ASP.NET web API continues this newer tradition, with the ability to set up custom routes from within your code. You can create routes for your application using fluent configuration in your startup code, or with declarative attributes surrounded by square brackets.

Understanding routes

To understand the purpose of having routes, let's focus on the features and benefits of routes in your application.
This applies to both ASP.NET MVC and ASP.NET web API:

By defining routes, you can introduce predictable patterns for URL access
This gives you more control over how URLs are mapped to your controllers
Human-readable route paths are also SEO-friendly, which is great for Search Engine Optimization
It provides some level of obscurity when it comes to revealing the underlying web technology and physical file names in your system

Setting up routes

Let's start with this simple class-level attribute that specifies a route for your API controller, as follows:

[Route("api/[controller]")]
public class PatientController : Controller
{
    // ...
}

Here, we can dissect the attribute (seen in square brackets, used to affect the class below it) and its parameter to understand what's going on. The Route attribute indicates that we are going to define a route for this controller. Within the parentheses that follow, the route path is defined in double quotes. The first part of this path is the string literal api/, which declares that the path to an API method call will begin with the term api followed by a forward slash. The rest of the path is the word controller in square brackets, which refers to the controller name. By convention, the controller's name is the part of the controller's class name that precedes the term Controller. For a class PatientController, the controller name is just the word Patient. This means that all API methods for this controller can be accessed using the following syntax, where MyApplicationServer should be replaced with your own server or domain name:

http://MyApplicationServer/api/Patient

For method calls, you can define a route with or without parameters. The following two examples illustrate both types of route definitions:

[HttpGet]
public IEnumerable<string> Get()
{
    return new string[] { "value1", "value2" };
}

In this example, the Get() method performs an action related to the HTTP verb HttpGet, which is declared in the attribute directly above the method. This identifies the default method for accessing the controller through a browser without any parameters, which means that this API method can be accessed using the following syntax:

http://MyApplicationServer/api/Patient

To include parameters, we can use the following syntax:

[HttpGet("{id}")]
public string Get(int id)
{
    return "value";
}

Here, the HttpGet attribute is coupled with an "{id}" parameter, enclosed in curly braces within double quotes. The overloaded version of the Get() method also includes an integer value named id to correspond with the expected parameter. If no parameter is specified, the value of id is equal to default(int), which is zero. This can be called without any parameters with the following syntax:

http://MyApplicationServer/api/Patient/Get

In order to pass parameters, you can add any integer value right after the controller name, with the following syntax:

http://MyApplicationServer/api/Patient/1

This will assign the number 1 to the integer variable id.

Testing routes

To test the aforementioned routes, simply run the application from Visual Studio and access the specified URLs without parameters. The preceding screenshot shows the results of accessing the following path:

http://MyApplicationServer/api/Patient/1

Consuming a web API from a client application

If a web API exposes public endpoints, but there is no client application there to consume it, does it really exist? Without getting too philosophical, let's go over the possible ways you can consume a client application.
You can do any of the following:

Consume the web API using external tools
Consume the web API with a mobile app
Consume the web API with a web client

Testing with external tools

If you don't have a client application set up, you can use an external tool such as Fiddler. Fiddler is a free tool that is now available from Telerik at http://www.telerik.com/download/fiddler, as shown in the following screenshot:

You can use Fiddler to inspect URLs that are being retrieved and submitted on your machine. You can also use it to trigger any URL and change the request type (Get, Post, and others).

Consuming a web API from a mobile app

Since this article is primarily about the ASP.NET Core web API, we won't go into detail about mobile application development. However, it's important to note that a web API can provide a backend for your mobile app projects. Mobile apps may include Windows Mobile apps, iOS apps, Android apps, and any modern app that you can build for today's smartphones and tablets. You may consult the documentation for your particular platform of choice to determine what is needed to call a RESTful API.

Consuming a web API from a web client

A web client, in this case, refers to any HTML/JavaScript application that has the ability to call a RESTful API. At the least, you can build a complete client-side solution with straight JavaScript to perform the necessary actions. For a better experience, you may use jQuery and also one of many popular JavaScript frameworks. A web client can also be a part of a larger ASP.NET MVC application or a Single-Page Application (SPA). As long as your application is emitting JavaScript that is contained in HTML pages, you can build a frontend that works with your backend web API.

Summary

In this article, we took a look at the basic structure of an ASP.NET web API project, and observed the unification of web API with MVC in ASP.NET Core. We also learned how to use a web API as our backend to provide support for various frontend applications.

Resources for Article:

Further resources on this subject:
Introducing IoT with Particle's Photon and Electron [article]
Schema Validation with Oracle JDeveloper - XDK 11g [article]
Getting Started with Spring Security [article]
Hello, Small World!

Packt
07 Sep 2016
20 min read
In this article by Stefan Björnander, the author of the book C++ Windows Programming, we will see how to create Windows applications using C++. This article introduces Small Windows by presenting two small applications:

The first application writes "Hello, Small Windows!" in a window
The second application handles circles of different colors in a document window

(For more resources related to this topic, see here.)

Hello, Small Windows!

In The C Programming Language by Brian Kernighan and Dennis Ritchie, the hello-world example was introduced. It was a small program that wrote hello, world on the screen. In this section, we shall write a similar program for Small Windows.

In regular C++, the execution of the application starts with the main function. In Small Windows, however, main is hidden in the framework and has been replaced by MainWindow, whose task is to define the application name and create the main window object. The argumentList parameter corresponds to argc and argv in main. The commandShow parameter forwards the system's request regarding the window's appearance.

MainWindow.cpp

#include "..\SmallWindows\SmallWindows.h"
#include "HelloWindow.h"

void MainWindow(vector<String> /* argumentList */, WindowShow windowShow) {
  Application::ApplicationName() = TEXT("Hello");
  Application::MainWindowPtr() = new HelloWindow(windowShow);
}

In C++, there are two character types: char and wchar_t, where char holds a regular character of one byte and wchar_t holds a wide character of larger size, usually two bytes. There is also the string class that holds a string of char values and the wstring class that holds a string of wchar_t values. However, in Windows there is also the generic character type TCHAR, which is char or wchar_t, depending on system settings. There is also the String class that holds a string of TCHAR values. Moreover, TEXT is a macro that translates a character value to TCHAR and a text value to an array of TCHAR values. To sum it up, the following table lists the character types and string classes:

Regular character    Wide character    Generic character
char                 wchar_t           TCHAR
string               wstring           String

In the applications of this book, we always use the TCHAR type, the String class, and the TEXT macro. The only exception to that rule is the clipboard handling.

Our version of the hello-world program writes Hello, Small Windows! in the center of the client area. The client area of the window is the part of the window where it is possible to draw graphical objects. In the following window, the client area is the white area.

The HelloWindow class extends the Small Windows Window class. It holds a constructor and the OnDraw method. The constructor calls the Window constructor with suitable information regarding the appearance of the window. OnDraw is called every time the client area of the window needs to be redrawn.

HelloWindow.h

class HelloWindow : public Window {
  public:
    HelloWindow(WindowShow windowShow);
    void OnDraw(Graphics& graphics, DrawMode drawMode);
};

The constructor of HelloWindow calls the constructor of Window with the following parameters: the first parameter is the coordinate system. LogicalWithScroll indicates that each logical unit is one hundredth of a millimeter, regardless of the physical resolution of the screen. The current scroll bar settings are taken into consideration. The second parameter of the window constructor is the preferred size of the window. ZeroSize indicates that a default size shall be used.
The third parameter is a pointer to the parent window. It is null since the window has no parent window. The fourth and fifth parameters set the window's style, in this case overlapped windows. The last parameter is windowShow, given by the surrounding system to MainWindow, which decides the window's initial appearance (minimized, normal, or maximized). Finally, the constructor sets the header of the window by calling the Window method SetHeader.

HelloWindow.cpp

#include "..\SmallWindows\SmallWindows.h"
#include "HelloWindow.h"

HelloWindow::HelloWindow(WindowShow windowShow)
 :Window(LogicalWithScroll, ZeroSize, nullptr,
         OverlappedWindow, NoStyle, windowShow) {
  SetHeader(TEXT("Hello Window"));
}

The OnDraw method is called every time the client area of the window needs to be redrawn. It obtains the size of the client area and draws the text in its center, with black text on a white background. The textFont object makes the text appear in 12-point Times New Roman. The Small Windows Color class holds the constants Black and White. Point holds a 2-dimensional point. Size holds a width and a height. The Rect class holds a rectangle; more specifically, it holds the four corners of a rectangle.

void HelloWindow::OnDraw(Graphics& graphics, DrawMode /* drawMode */) {
  Size clientSize = GetClientSize();
  Rect clientRect(Point(0, 0), clientSize);
  Font textFont("Times New Roman", 12, true);
  graphics.DrawText(clientRect, TEXT("Hello, Small Windows!"),
                    textFont, Black, White);
}

The Circle application

In this section, we look into a simple circle application. As the name implies, it provides the user with the possibility to handle circles in a graphical application. The user can add a new circle by clicking the left mouse button. They can also move an existing circle by dragging it. Moreover, the user can change the color of a circle, as well as save and open the document.

The main window

As we will see throughout this book, MainWindow always does the same thing: it sets the application name and creates the main window of the application. The name is used by the Save and Open standard dialogs, the About menu item, and the registry. The difference between the main window and the other windows of the application is that when the user closes the main window, the application exits. Moreover, when the user selects the Exit menu item, the main window is closed and its destructor is called.

MainWindow.cpp

#include "..\SmallWindows\SmallWindows.h"
#include "Circle.h"
#include "CircleDocument.h"

void MainWindow(vector<String> /* argumentList */, WindowShow windowShow) {
  Application::ApplicationName() = TEXT("Circle");
  Application::MainWindowPtr() = new CircleDocument(windowShow);
}

The CircleDocument class

The CircleDocument class extends the Small Windows class StandardDocument, which in turn extends Document and Window. In fact, StandardDocument constitutes a framework; that is, a base class with a set of virtual methods with functionality we can override and further specify. The OnMouseDown and OnMouseUp methods are overridden from Window and are called when the user presses or releases one of the mouse buttons. OnMouseMove is called when the user moves the mouse. The OnDraw method is also overridden from Window and is called every time the window needs to be redrawn. The ClearDocument, ReadDocumentFromStream, and WriteDocumentToStream methods are overridden from StandardDocument and are called when the user creates a new file, opens a file, or saves a file.
CircleDocument.h

class CircleDocument : public StandardDocument {
  public:
    CircleDocument(WindowShow windowShow);
    ~CircleDocument();

    void OnMouseDown(MouseButton mouseButtons, Point mousePoint,
                     bool shiftPressed, bool controlPressed);
    void OnMouseUp(MouseButton mouseButtons, Point mousePoint,
                   bool shiftPressed, bool controlPressed);
    void OnMouseMove(MouseButton mouseButtons, Point mousePoint,
                     bool shiftPressed, bool controlPressed);

    void OnDraw(Graphics& graphics, DrawMode drawMode);

    bool ReadDocumentFromStream(String name, istream& inStream);
    bool WriteDocumentToStream(String name, ostream& outStream) const;

    void ClearDocument();

The DEFINE_BOOL_LISTENER and DEFINE_VOID_LISTENER macros define listeners: methods without parameters that are called when the user selects a menu item. The only difference between the macros is the return type of the defined methods: bool or void. In the applications of this book, we use the common standard that the listeners called in response to user actions are prefixed with On, for instance OnRed. The methods that decide whether the menu item shall be enabled are suffixed with Enable, and the methods that decide whether the menu item shall be marked with a check mark or a radio button are suffixed with Check or Radio. In this application, we define menu items for the red, green, and blue colors. We also define a menu item for the Color standard dialog.

    DEFINE_VOID_LISTENER(CircleDocument, OnRed);
    DEFINE_VOID_LISTENER(CircleDocument, OnGreen);
    DEFINE_VOID_LISTENER(CircleDocument, OnBlue);
    DEFINE_VOID_LISTENER(CircleDocument, OnColorDialog);

When the user has chosen one of the colors red, green, or blue, its corresponding menu item shall be checked with a radio button. RedRadio, GreenRadio, and BlueRadio are called before the menu items become visible and return a Boolean value indicating whether the menu item shall be marked with a radio button.

    DEFINE_BOOL_LISTENER(CircleDocument, RedRadio);
    DEFINE_BOOL_LISTENER(CircleDocument, GreenRadio);
    DEFINE_BOOL_LISTENER(CircleDocument, BlueRadio);

The circle radius is always 500 units, which corresponds to 5 millimeters.

    static const int CircleRadius = 500;

The circleList field holds the circles, where the topmost circle is located at the beginning of the list. The nextColor field holds the color of the next circle to be added by the user. The moveIndex field is initialized to minus one to indicate that no circle is being moved at the beginning. The moveIndex and movePoint fields are used by OnMouseDown and OnMouseMove to keep track of the circle being moved by the user.

  private:
    vector<Circle> circleList;
    Color nextColor;
    int moveIndex = -1;
    Point movePoint;
};

In the StandardDocument constructor call, the first two parameters are LogicalWithScroll and USLetterPortrait. They indicate that the logical size is hundredths of millimeters and that the client area holds the logical size of a US letter: 215.9 * 279.4 millimeters (8.5 * 11 inches). If the window is resized so that the client area becomes smaller than a US letter, scroll bars are added to the window. The third parameter sets the file information used by the standard Save and Open dialogs; the text description is set to Circle Files and the file suffix is set to cle. The null pointer parameter indicates that the window does not have a parent window.
The OverlappedWindow constant parameter indicates that the window shall overlap other windows, and the windowShow parameter is the window's initial appearance, passed on from the surrounding system by MainWindow.

CircleDocument.cpp

#include "..\SmallWindows\SmallWindows.h"
#include "Circle.h"
#include "CircleDocument.h"

CircleDocument::CircleDocument(WindowShow windowShow)
 :StandardDocument(LogicalWithScroll, USLetterPortrait,
                   TEXT("Circle Files, cle"), nullptr,
                   OverlappedWindow, windowShow) {

The StandardDocument framework adds the standard File, Edit, and Help menus to the window menu bar. The File menu holds the New, Open, Save, Save As, Page Setup, Print Preview, and Exit items. The Page Setup and Print Preview items are optional; the seventh parameter of the StandardDocument constructor (default false) indicates their presence. The Edit menu holds the Cut, Copy, Paste, and Delete items. They are disabled by default; we will not use them in this application. The Help menu holds the About item; the application name set in MainWindow is used to display a message box with a standard message: Circle, version 1.0.

We add the standard File and Edit menus to the menu bar. Then we add the Color menu, which is the application-specific menu of this application. Finally, we add the standard Help menu and set the menu bar of the document. The Color menu holds the menu items used to set the circle colors. The OnRed, OnGreen, and OnBlue methods are called when the user selects the corresponding menu item, and RedRadio, GreenRadio, and BlueRadio are called before the user opens the Color menu, in order to decide whether the items shall be marked with a radio button. OnColorDialog opens a standard color dialog.

In the text &Red\tCtrl+R, the ampersand (&) indicates that the menu item has a mnemonic; that is, the letter R will be underlined and it is possible to select the menu item by pressing R after the menu has been opened. The tab character (\t) indicates that the second part of the text defines an accelerator; that is, the text Ctrl+R will occur right-justified in the menu item and the item can be selected by pressing Ctrl+R.

  Menu menuBar(this);
  menuBar.AddMenu(StandardFileMenu(false));

The AddItem method in the Menu class also takes two more parameters, for enabling the menu item and setting a check box. However, we do not use them in this application. Therefore, we send null pointers.

  Menu colorMenu(this, TEXT("&Color"));
  colorMenu.AddItem(TEXT("&Red\tCtrl+R"), OnRed,
                    nullptr, nullptr, RedRadio);
  colorMenu.AddItem(TEXT("&Green\tCtrl+G"), OnGreen,
                    nullptr, nullptr, GreenRadio);
  colorMenu.AddItem(TEXT("&Blue\tCtrl+B"), OnBlue,
                    nullptr, nullptr, BlueRadio);
  colorMenu.AddSeparator();
  colorMenu.AddItem(TEXT("&Dialog ..."), OnColorDialog);
  menuBar.AddMenu(colorMenu);

  menuBar.AddMenu(StandardHelpMenu());
  SetMenuBar(menuBar);

Finally, we read the current color (the color of the next circle to be added) from the registry; red is the default color in case there is no color stored in the registry.

  nextColor.ReadColorFromRegistry(TEXT("NextColor"), Red);
}

The destructor saves the current color in the registry. In this application, we do not need to perform the destructor's normal tasks, such as deallocating memory or closing files.

CircleDocument::~CircleDocument() {
  nextColor.WriteColorToRegistry(TEXT("NextColor"));
}

The ClearDocument method is called when the user selects the New menu item. In this case, we just clear the circle list.
Every other action, such as redrawing the window or changing its title, is taken care of by StandardDocument.

void CircleDocument::ClearDocument() {
  circleList.clear();
}

The WriteDocumentToStream method is called by StandardDocument when the user saves a file (by selecting Save or Save As). It writes the number of circles (the size of the circle list) to the output stream and calls WriteCircle for each circle in order to write their states to the stream.

bool CircleDocument::WriteDocumentToStream(String name, ostream& outStream) const {
  int size = circleList.size();
  outStream.write((char*) &size, sizeof size);

  for (Circle circle : circleList) {
    circle.WriteCircle(outStream);
  }

  return ((bool) outStream);
}

The ReadDocumentFromStream method is called by StandardDocument when the user opens a file by selecting the Open menu item. It reads the number of circles (the size of the circle list) and, for each circle, it creates a new object of the Circle class, calls ReadCircle in order to read the state of the circle, and adds the circle object to circleList.

bool CircleDocument::ReadDocumentFromStream(String name, istream& inStream) {
  int size;
  inStream.read((char*) &size, sizeof size);

  for (int count = 0; count < size; ++count) {
    Circle circle;
    circle.ReadCircle(inStream);
    circleList.push_back(circle);
  }

  return ((bool) inStream);
}

The OnMouseDown method is called when the user presses one of the mouse buttons. First we need to check that they have pressed the left mouse button. If they have, we loop through the circle list and call IsClick for each circle in order to decide whether they have clicked at a circle. Note that the topmost circle is located at the beginning of the list; therefore, we loop from the beginning of the list. If we find a clicked circle, we break the loop. If the user has clicked at a circle, we store its index in moveIndex and the current mouse position in movePoint. Both values are needed by the OnMouseMove method that will be called when the user moves the mouse.

void CircleDocument::OnMouseDown(MouseButton mouseButtons, Point mousePoint,
                                 bool shiftPressed /* = false */,
                                 bool controlPressed /* = false */) {
  if (mouseButtons == LeftButton) {
    moveIndex = -1;
    int size = circleList.size();

    for (int index = 0; index < size; ++index) {
      if (circleList[index].IsClick(mousePoint)) {
        moveIndex = index;
        movePoint = mousePoint;
        break;
      }
    }

However, if the user has not clicked at a circle, we add a new circle. A circle is defined by its center position (mousePoint), radius (CircleRadius), and color (nextColor). An invalidated area is a part of the client area that needs to be redrawn. Remember that in Windows we normally do not draw figures directly. Instead, we call Invalidate to tell the system that an area needs to be redrawn and force the actual redrawing by calling UpdateWindow, which eventually results in a call to OnDraw. The invalidated area is always a rectangle. Invalidate has a second parameter (default true) indicating that the invalidated area shall be cleared. Technically, it is painted in the window's client color, which in this case is white. In this way, the previous location of the circle becomes cleared and the circle is drawn at its new location. The SetDirty method tells the framework that the document has been altered (the document has become dirty), which causes the Save menu item to be enabled and the user to be warned if they try to close the window without saving it.
    if (moveIndex == -1) {
      Circle newCircle(mousePoint, CircleRadius, nextColor);
      circleList.push_back(newCircle);
      Invalidate(newCircle.Area());
      UpdateWindow();
      SetDirty(true);
    }
  }
}

The OnMouseMove method is called every time the user moves the mouse with at least one mouse button pressed. We first need to check whether the user is pressing the left mouse button and is moving a circle (whether moveIndex does not equal minus one). If they are, we calculate the distance from the previous mouse event (OnMouseDown or OnMouseMove) by comparing the previous mouse position movePoint with the current mouse position mousePoint. We update the circle position, invalidate both the old and new areas, force a redrawing of the invalidated areas with UpdateWindow, and set the dirty flag.

void CircleDocument::OnMouseMove(MouseButton mouseButtons, Point mousePoint,
                                 bool shiftPressed /* = false */,
                                 bool controlPressed /* = false */) {
  if ((mouseButtons == LeftButton) && (moveIndex != -1)) {
    Size distanceSize = mousePoint - movePoint;
    movePoint = mousePoint;

    Circle& movedCircle = circleList[moveIndex];
    Invalidate(movedCircle.Area());

    movedCircle.Center() += distanceSize;
    Invalidate(movedCircle.Area());

    UpdateWindow();
    SetDirty(true);
  }
}

Strictly speaking, OnMouseUp could be excluded, since moveIndex is set to minus one in OnMouseDown, which is always called before OnMouseMove. However, it has been included for the sake of completeness.

void CircleDocument::OnMouseUp(MouseButton mouseButtons, Point mousePoint,
                               bool shiftPressed /* = false */,
                               bool controlPressed /* = false */) {
  moveIndex = -1;
}

The OnDraw method is called every time the window needs to be (partly or completely) redrawn. The call can have been initiated by the system as a response to an event (for instance, the window has been resized) or by an earlier call to UpdateWindow. The Graphics reference parameter has been created by the framework and can be considered a toolbox for drawing lines, painting areas, and writing text. However, in this application we do not write text. We iterate through the circle list and, for each circle, call the Draw method. Note that we do not care about which circles are to be physically redrawn. We simply redraw all circles. However, only the circles located in an area that has been invalidated by a previous call to Invalidate will be physically redrawn. The OnDraw method has a second parameter indicating the draw mode, which can be Paint or Print. Paint indicates that OnDraw is called by OnPaint in Window and that the painting is performed in the window's client area. Print indicates that OnDraw is called by OnPrint and that the painting is sent to a printer. However, in this application we do not use that parameter.

void CircleDocument::OnDraw(Graphics& graphics, DrawMode /* drawMode */) {
  for (Circle circle : circleList) {
    circle.Draw(graphics);
  }
}

The RedRadio, GreenRadio, and BlueRadio methods are called before the menu items are shown, and the items will be marked with a radio button in case they return true. The Red, Green, and Blue constants are defined in the Color class.

bool CircleDocument::RedRadio() const {
  return (nextColor == Red);
}

bool CircleDocument::GreenRadio() const {
  return (nextColor == Green);
}

bool CircleDocument::BlueRadio() const {
  return (nextColor == Blue);
}

The OnRed, OnGreen, and OnBlue methods are called when the user selects the corresponding menu item. They all set the nextColor field to an appropriate value.
void CircleDocument::OnRed() {
  nextColor = Red;
}

void CircleDocument::OnGreen() {
  nextColor = Green;
}

void CircleDocument::OnBlue() {
  nextColor = Blue;
}

The OnColorDialog method is called when the user selects the Color dialog menu item and displays the standard Color dialog. If the user chooses a new color, nextColor will be given the chosen color value.

void CircleDocument::OnColorDialog() {
  ColorDialog(this, nextColor);
}

The Circle class

The Circle class is a class holding the information about a single circle. The default constructor is used when reading a circle from a file. The second constructor is used when creating a new circle. The IsClick method returns true if the given point is located inside the circle (to check whether the user has clicked in the circle), Area returns the circle's surrounding rectangle (for invalidating), and Draw is called to redraw the circle.

Circle.h

class Circle {
  public:
    Circle();
    Circle(Point center, int radius, Color color);

    bool WriteCircle(ostream& outStream) const;
    bool ReadCircle(istream& inStream);

    bool IsClick(Point point) const;
    Rect Area() const;
    void Draw(Graphics& graphics) const;

    Point Center() const {return center;}
    Point& Center() {return center;}
    Color GetColor() {return color;}

As mentioned in the previous section, a circle is defined by its center position (center), radius (radius), and color (color).

  private:
    Point center;
    int radius;
    Color color;
};

The default constructor does not need to initialize the fields, since it is called when the user opens a file and the values are read from the file. The second constructor, however, initializes the center point, radius, and color of the circle.

Circle.cpp

#include "..\SmallWindows\SmallWindows.h"
#include "Circle.h"

Circle::Circle() {
  // Empty.
}

Circle::Circle(Point center, int radius, Color color)
 :color(color), center(center), radius(radius) {
  // Empty.
}

The WriteCircle method writes the color, center point, and radius to the stream. Since the radius is a regular integer, we simply use the C standard function write, while Color and Point have their own methods to write their values to a stream. In ReadCircle, we read the color, center point, and radius from the stream in a similar manner.

bool Circle::WriteCircle(ostream& outStream) const {
  color.WriteColorToStream(outStream);
  center.WritePointToStream(outStream);
  outStream.write((char*) &radius, sizeof radius);
  return ((bool) outStream);
}

bool Circle::ReadCircle(istream& inStream) {
  color.ReadColorFromStream(inStream);
  center.ReadPointFromStream(inStream);
  inStream.read((char*) &radius, sizeof radius);
  return ((bool) inStream);
}

The IsClick method uses the Pythagorean theorem to calculate the distance between the given point and the circle's center point, and returns true if the point is located inside the circle (if the distance is less than or equal to the circle radius).

bool Circle::IsClick(Point point) const {
  int width = point.X() - center.X(),
      height = point.Y() - center.Y();
  int distance = (int) sqrt((width * width) + (height * height));
  return (distance <= radius);
}

The top-left corner of the resulting rectangle is the center point minus the radius, and the bottom-right corner is the center point plus the radius.

Rect Circle::Area() const {
  Point topLeft = center - radius,
        bottomRight = center + radius;
  return Rect(topLeft, bottomRight);
}

We use the FillEllipse method (there is no FillCircle method) of the Small Windows Graphics class to draw the circle.
The circle's border is always black, while its interior color is given by the color field.

void Circle::Draw(Graphics& graphics) const {
  Point topLeft = center - radius,
        bottomRight = center + radius;
  Rect circleRect(topLeft, bottomRight);
  graphics.FillEllipse(circleRect, Black, color);
}

Summary

In this article, we have looked into two applications in Small Windows: a simple hello-world application and a slightly more advanced circle application, which has introduced the framework. We have looked into menus, circle drawing, and mouse handling.

Resources for Article:

Further resources on this subject:
C++, SFML, Visual Studio, and Starting the first game [article]
Game Development Using C++ [article]
Boost.Asio C++ Network Programming [article]
Running Your Applications with AWS - Part 2

Cheryl Adams
19 Aug 2016
6 min read
An active account with AWS means you are on your way to building in the cloud. Before you start building, you need to tackle Billing and Cost Management, under Account. It is likely that you are starting with the Free Tier, so it is important to know that you still have the option of paying for additional services. Also, if you decide to continue with AWS, you should get familiar with this page. This is not your average bill or invoice page; it is much more than that.

The Billing & Cost Management Dashboard is a bird's-eye view of all of your account activity. Once you start accumulating pay-as-you-go services, this page will give you a quick review of your monthly spending based on services. Part of managing your cloud services includes billing, so it is a good idea to become familiar with this from the start. Amazon also gives you the option of setting up cost-based alerts for your system, which is essential if you want to be alerted to any excessive cost related to your cloud services. Budgets allow you to receive e-mailed notifications or alerts if spending exceeds the budget that you have created.

If you want to dig in even deeper, try turning on the Cost Explorer for an analysis of your spending. The Billing and Cost Management section of your account is much more than just invoices. It is the complete AWS cost management system for your cloud. Being familiar with all aspects of the cost management system will help you to monitor your cloud services, and hopefully avoid any expenses that may exceed your budget.

In our previous discussion, we considered all AWS services. Let's take another look at the details of the services.

Amazon Web Services

Based on this illustration, you can see that the build options are grouped by words such as Compute, Storage & Content Delivery, and Databases. Each of these objects or services lists a step-by-step routine that is easy to follow. Within the AWS site, there are numerous tutorials with detailed build instructions. If you are still exploring in the free tier, AWS also has an active online community of users who try to answer most questions.

Let's look at the build process for Amazon's EC2 virtual server. The first thing that you will notice is that Amazon provides 22 different Amazon Machine Images (AMIs) to choose from (at the time this post was written). At the top of the screen is a Step process that will guide you through the build. It should be noted that some of the images available are not defined as a part of the free-tier plan. The remaining images that do fit into the plan should fit almost any project need. For this walkthrough, let's select SUSE Linux (free-tier eligible). It is important to note that just because the image itself is free, that does not mean all the options available within that image are free. Notice on this screen that Amazon has pre-selected the only free-tier option available for this image.

From this screen you are given two options: Review and Launch, or Next: Configure Instance Details. Let's try Review and Launch to see what occurs. Notice that our Step process advanced to Step 7. Amazon gives you a soft warning regarding the state of the build and potential risk. If you are okay with these risks, you can proceed and launch your server. It is important to note that the Amazon build process is user driven; it will allow you to build a server with these potential risks in your cloud. It is recommended that you carefully consider each screen before proceeding. (If you prefer scripting to clicking, a hedged AWS CLI equivalent of this launch is sketched below, after which the console walkthrough continues.)
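The launch that the console wizard performs can also be scripted with the AWS CLI. The sketch below is illustrative only: the AMI ID, key pair name, and security group ID are placeholders that you would replace with values from your own account, and t2.micro is the instance type commonly covered by the free tier.

# Hedged sketch: launch a single free-tier-sized instance from the CLI.
# ami-12345678, my-key-pair, and sg-12345678 are placeholder values.
aws ec2 run-instances \
    --image-id ami-12345678 \
    --instance-type t2.micro \
    --key-name my-key-pair \
    --security-group-ids sg-12345678

# Check on the instance afterwards:
aws ec2 describe-instances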
Returning to the console walkthrough: in this instance, select Previous, not Cancel, to return to Step 3. Selecting Cancel will stop the build process and return you to the AWS main services page. Until you actually launch your server, nothing is built or saved.

There are information bubbles for each line in Step 3: Configure Instance Details. Review the content of each bubble, make any changes if needed, and then proceed to the next step. Select the storage size, then select Next: Tag Instance. Enter values and continue, or select Learn More for further information. Then select the Next: Configure Security Group button.

Security is an extremely important part of setting up your virtual server. It is recommended that you speak to your security administrator to determine the best option. For the source, it is recommended that you avoid using the Anywhere option; this selection will put your build at risk. Select My IP or Custom IP as shown. If you are involved in a self-study plan, you can select the Learn More link to determine the best option. Then select Next: Review and Launch.

The full details of this screen can be expanded, reviewed, or edited. If everything appears to be okay, proceed to Launch. One additional screen will appear for adding private and/or public keys to access your new server. Make the appropriate selection and proceed to Launch Instances to see the build process. You can access your new server from the EC2 Dashboard.

This example gives you a window into how the AWS build process works. The other objects and services have a similar step-through process. Once you have launched your server, you should be able to access it and proceed with your development. Additional details for development are also available through the site.

Amazon's Web Services platform is an all-in-one solution for your graduation to the cloud. Not only can you manage your technical environment, but it also has features that allow you to manage your budget. By setting up your virtual appliances and servers appropriately, you can maximize the value of the first 12 months of your free tier. Carefully monitoring activities through alerts and notifications will help you to avoid any billing surprises. Going through the tutorials and visiting the online community will only help to increase your knowledge base of AWS. AWS is inviting everyone to test their services on this exciting platform, so I would definitely recommend taking advantage of it. Have fun!

About the author

Cheryl Adams is a senior cloud data and infrastructure architect in the healthcare data realm. She is also the co-author of Professional Hadoop by Wrox.
Running Your Applications with AWS

Cheryl Adams
17 Aug 2016
4 min read
If you've ever been told not to run with scissors, you should not have the same concern when running with AWS. It is neither dangerous nor unsafe when you know what you are doing, and where to look when you don't. Amazon's current service offering, AWS (Amazon Web Services), is a collection of services, applications, and tools that can be used to deploy your infrastructure and application environment to the cloud. Amazon gives you the option to start their service offerings with a 'free tier' and then move toward a pay-as-you-go model. We will highlight a few of the features available when you open your account with AWS.

One of the first things you will notice is that Amazon offers a wealth of information regarding cloud computing right up front. Whether you are a novice, an amateur, or an expert in cloud computing, Amazon offers documented information before you create your account. This type of information is essential if you are exploring this tool for a project or doing some self-study on your own. If you are a pre-existing Amazon customer, you can use your same account to get started with AWS. If you want to keep your personal account separate from your development or business, it would be best to create a separate account.

Amazon Web Services Landing Page

The Free Tier is one of the most attractive features of AWS. As a new account, you are entitled to twelve months within the Free Tier. In addition to this span of time, there are services that can continue after the free tier is over. This gives the user ample time to explore the offerings within this free-tier period. The caution is not to exceed the free service limitations, as that will incur charges. Setting up the free tier still requires a credit card. Fee-based services will be offered throughout the free tier, so it is important not to select a fee-based charge unless you are ready to start paying for it. Actual paid use will vary based on what you have selected.

AWS Service and Offerings (shown on an open account)

AWS overview of services available on the landing page

Amazon's service list is very robust. If you are already considering AWS, hopefully this means you are aware of what you need, or at least what you would like to use. If not, this would be a good time to press pause and look at some resource-based materials. Before the clock starts ticking on your free tier, I would recommend a slow walk through the introductory information on this site to ensure that you are selecting the right mix of services before creating your account.

Amazon's technical resources include a 10-minute tutorial that gives you a complete overview of the services. Topics like 'AWS Training and Introduction' and 'Get Started with AWS' include a list of 10-minute videos as well as a short list of 'how to' instructions for some of the more commonly used features. If you are a techie by trade or hobby, this may be something you want to dive into immediately. In a company, there is generally a predefined need or issue that the organization may feel can be resolved by the cloud. If it is a team initiative, it would be good to review the resources mentioned in this article so that everyone is on the same page as to what this solution can do. It's recommended before you start any trial, subscription, or new service that you have a set goal or expectation of why you are doing it. Simply stated, a cloud solution is not the perfect solution for everyone, and there is a great deal of information on the AWS site to help you decide.
The AWS site's wealth of information is also helpful if you are comparing competing cloud service vendors in the same space. You will be able to do a complete assessment of most services within the free tier, and you can map use case scenarios to determine whether AWS is the right fit for your project. AWS First Project is a great place to get started if you are new to AWS. If you are wondering how to begin, these technical resources will set you in the right direction. By reviewing this information during your setup, or before you start, you will be able to make good use of your first few months and your introduction to AWS.

About the author

Cheryl Adams is a senior cloud data and infrastructure architect in the healthcare data realm. She is also the co-author of Professional Hadoop by Wrox.

Exception Handling with Python

Packt
17 Aug 2016
10 min read
In this article by Ninad Sathaye, author of the book Learning Python Application Development, you will learn techniques to make an application more robust by handling exceptions. Specifically, we will cover the following topics:

What are exceptions in Python?
Controlling the program flow with the try…except clause
Dealing with common problems by handling exceptions
Creating and using custom exception classes

(For more resources related to this topic, see here.)

Exceptions

Before jumping straight into the code and fixing these issues, let's first understand what an exception is and what we mean by handling an exception.

What is an exception?

An exception is an object in Python. It gives us information about an error detected during program execution. The errors noticed while debugging the application were unhandled exceptions, as we didn't see those coming. Later in the article, you will learn the techniques to handle these exceptions. The ValueError and IndexError exceptions seen in the earlier tracebacks are examples of built-in exception types in Python. In the following section, you will learn about some other built-in exceptions supported in Python.

Most common exceptions

Let's quickly review some of the most frequently encountered exceptions. The easiest way is to try running some buggy code and let it report the problem as an error traceback! Start your Python interpreter and try a few obviously broken expressions, for example indexing past the end of a list, adding a string to an integer, or referencing an undefined name. Each line of code throws an error traceback with the corresponding exception type (IndexError, TypeError, and NameError, respectively). These are a few of the built-in exceptions in Python. A comprehensive list of built-in exceptions can be found in the following documentation: https://docs.python.org/3/library/exceptions.html#bltin-exceptions

Python provides BaseException as the base class for all built-in exceptions. However, most of the built-in exceptions do not directly inherit BaseException. Instead, they are derived from a class called Exception that in turn inherits from BaseException. The built-in exceptions that deal with program exit (for example, SystemExit) are derived directly from BaseException. You can also create your own exception class as a subclass of Exception. You will learn about that later in this article.

Exception handling

So far, we saw how exceptions occur. Now, it is time to learn how to use the try…except clause to handle them. The flow of a simple try…except clause works as follows (a sketch of this pattern appears at the end of this section): first, the program tries to execute the code inside the try clause. During this execution, if something goes wrong (if an exception occurs), it jumps out of this try clause; the remaining code in the try block is not executed. It then looks for an appropriate exception handler in the except clause and executes it.

A bare except clause is a universal one. It will catch all types of exceptions occurring within the try clause. Instead of having this "catch-all" handler, a better practice is to catch the errors that you anticipate and write exception handling code specific to those errors. For example, the code in the try clause might throw an AssertionError. Instead of using the universal except clause, you can write a specific exception handler by naming the exception type, such as except AssertionError. What this also means is that any error other than the AssertionError will slip through as an unhandled exception.
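To make the distinction concrete, here is a minimal sketch of a specific handler; the original snippets in this article appeared as screenshots, so the function name and messages below are illustrative assumptions, not code from the book:

def reciprocal(value):
    assert value != 0, "value must be non-zero"
    return 1.0 / value

try:
    print(reciprocal(0))
except AssertionError as error:
    # Handles only AssertionError. Any other exception type raised
    # inside the try clause would still propagate as unhandled.
    print("Assertion failed:", error)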
To handle several anticipated errors, we need to define multiple except clauses with different exception handlers. However, at any point of time, only one exception handler will be called. This can be better explained with an example. Imagine a try block that calls a function, solve_something(). This function accepts a number as user input and makes an assertion that the number is greater than zero. If the assertion fails, the program jumps directly to the handler, except AssertionError. In the other scenario, with a > 0, the rest of the code in solve_something() is executed. Suppose that code references a variable x that is not defined, which results in a NameError. This exception is handled by the other except clause, except NameError. Likewise, you can define specific exception handlers for all anticipated errors. (A combined sketch covering multiple handlers, the else block, and the finally clause appears at the end of this section.)

Raising and re-raising an exception

The raise keyword in Python is used to force an exception to occur. Put another way, it raises an exception. The syntax is simple; just open the Python interpreter and type:

>>> raise AssertionError("some error message")

This produces the following error traceback:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AssertionError: some error message

In some situations, we need to re-raise an exception. To understand this concept better, here is a trivial scenario. Suppose, in the try clause, you have an expression that divides a number by zero. In ordinary arithmetic, this expression has no meaning. It's a bug! This causes the program to raise an exception called ZeroDivisionError. If there is no exception handling code, the program will just print the error message and terminate. What if you wish to write this error to some log file and then terminate the program? Here, you can use an except clause to log the error first. Then, use the raise keyword without any arguments to re-raise the exception. The exception will be propagated upwards in the stack; in this example, it terminates the program. Consider a division by zero exception raised while solving the a/b expression because the value of variable b is set to 0. For illustration purposes, assume that there is no specific exception handler for this error, so we use the general except clause, where the exception is re-raised after logging the error. If you want to try this yourself, just write such code in a new Python file, and run it from a terminal window.

The else block of try…except

There is an optional else block that can be specified in the try…except clause. The else block is executed only if no exception occurs in the try…except clause. The else block is executed before the finally clause, which we will study next.

finally…clean it up!

There is something else to add to the try…except…else story: an optional finally clause. As the name suggests, the code within this clause is executed at the end of the associated try…except block. Whether or not an exception is raised, the finally clause, if specified, will certainly get executed at the end of the try…except clause. Imagine it as an all-weather guarantee given by Python! Running a simple example, where an assertion fails inside the try clause and a cleanup message is printed in the finally clause, produces the following output:

$ python finally_example1.py
Enter a number: -1
Uh oh..Assertion Error.
Do some special cleanup
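Here is a reconstructed sketch of finally_example1.py that combines multiple handlers, the else block, and the finally clause. The original code appeared as screenshots, so the function body and prompt text are assumptions inferred from the description and the output above:

# finally_example1.py - a reconstructed sketch, not the book's exact code
def solve_something():
    a = int(input("Enter a number: "))
    # The assertion message matches the output shown above.
    assert a > 0, "Uh oh..Assertion Error."
    # Deliberately reference an undefined name, x, so that a positive
    # input exercises the second handler instead.
    return x + a

try:
    result = solve_something()
except AssertionError as error:
    # Only one handler ever runs for a given exception.
    print(error)
except NameError:
    print("NameError: the variable x is not defined!")
else:
    # Runs only when the try clause raises no exception at all.
    print("Result:", result)
finally:
    # Runs whether or not an exception occurred.
    print("Do some special cleanup")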
The last line in that output is the print statement from the finally clause. The code in the finally clause is assured to be executed in the end, even when an except clause instructs the code to return from the function. The finally clause is typically used to perform clean-up tasks before leaving the function; an example use case is closing a database connection or a file. However, note that, for this purpose, you can also use the with statement in Python.

Writing a new exception class

It is trivial to create a new exception class derived from Exception. Open your Python interpreter and create the following class:

>>> class GameUnitError(Exception):
...     pass
...
>>>

That's all! We have a new exception class, GameUnitError, ready to be deployed. How do we test this exception? Just raise it. Type the following line of code in your Python interpreter:

>>> raise GameUnitError("ERROR: some problem with game unit")

Raising the newly created exception will print the following traceback:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
__main__.GameUnitError: ERROR: some problem with game unit

Copy the GameUnitError class into its own module, gameuniterror.py, and save it in the same directory as attackoftheorcs_v1_1.py. Next, update the attackoftheorcs_v1_1.py file to include the following changes. First, add the following import statement at the beginning of the file:

from gameuniterror import GameUnitError

The second change is in the AbstractGameUnit.heal method: it should raise the custom exception whenever the value of self.health_meter exceeds that of self.max_hp. With these two changes, run heal_exception_example.py created earlier, and you will see the new exception being raised.

Expanding the exception class

Can we do something more with the GameUnitError class? Certainly! Just like any other class, we can define attributes and use them. Let's expand this class further. In the modified version, it will accept an additional argument and some predefined error codes. First, it calls the __init__ method of the Exception superclass and then defines some additional instance variables. A new dictionary object, self.error_dict, holds error integer codes and the corresponding error information as key-value pairs. The self.error_message attribute stores the information about the current error, depending on the error code provided. A try…except clause ensures that error_dict actually has the key specified by the code argument; if it doesn't, the except clause falls back to the default error code of 000.

So far, we have made changes to the GameUnitError class and the AbstractGameUnit.heal method. We are not done yet. The last piece of the puzzle is to modify the main program in the heal_exception_example.py file. (Reconstructed sketches of the expanded class and the updated program follow.)
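The expanded class was shown as a screenshot in the original article; the following is a reconstructed sketch based on the description above. The specific error codes and message strings are assumptions:

# gameuniterror.py - a reconstructed sketch of the expanded class
class GameUnitError(Exception):
    def __init__(self, message='', code=0):
        # Call the superclass __init__ first, then add our own attributes.
        super().__init__(message)
        # Error integer codes and error information as key-value pairs.
        # The book displays the default code as 000, i.e. the integer 0.
        self.error_dict = {
            0: "ERROR-000: Unspecified error!",
            101: "ERROR-101: health_meter exceeds max_hp!",
        }
        try:
            self.error_message = self.error_dict[code]
        except KeyError:
            # Unknown code: fall back to the default error code of 000.
            self.error_message = self.error_dict[0]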
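And here is a matching sketch of the heal method and the main program. The class names, numbers, and heal_by value are illustrative assumptions rather than the book's exact code:

# In attackoftheorcs_v1_1.py, inside AbstractGameUnit (sketch):
    def heal(self, heal_by=2, full_healing=True):
        if full_healing:
            self.health_meter = self.max_hp
        else:
            self.health_meter += heal_by
        if self.health_meter > self.max_hp:
            # Pass the message and a predefined error code.
            raise GameUnitError("health_meter > max_hp!", 101)

# heal_exception_example.py (sketch):
from attackoftheorcs_v1_1 import Knight
from gameuniterror import GameUnitError

if __name__ == '__main__':
    knight = Knight("Sir Foo")   # a hypothetical game unit
    knight.health_meter = 10
    knight.max_hp = 40
    try:
        # heal_by is deliberately too large, triggering the exception.
        knight.heal(heal_by=100, full_healing=False)
    except GameUnitError as error:
        print(error)
        print(error.error_message)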
Let's review that code. As the heal_by value is too large, the heal method in the try clause raises the GameUnitError exception. The new except clause handles the GameUnitError exception just like any other built-in exception, and within it we have two print statements. The first one prints "health_meter > max_hp!" (recall that when this exception was raised in the heal method, this string was given as the first argument to the GameUnitError instance). The second print statement retrieves and prints the error_message attribute of the GameUnitError instance. With all the changes in place, we can run this example from a terminal window as:

$ python heal_exception_example.py

Both error messages are printed to the console. In this simple example, we have just printed the error information to the console. You can further write verbose error logs to a file and keep track of all the error messages generated while the application is running.

Summary

This article served as an introduction to the basics of exception handling in Python. We saw how exceptions occur, learned about some common built-in exception classes, and wrote simple code to handle these exceptions using the try…except clause. The article also demonstrated techniques such as raising and re-raising exceptions and using the finally clause. The later part of the article focused on implementing custom exception classes. We defined a new exception class and used it for raising custom exceptions for our application. With exception handling, the code is in a better shape.

Resources for Article:

Further resources on this subject:
Mining Twitter with Python – Influence and Engagement [article]
Exception Handling in MySQL for Python [article]
Python LDAP applications - extra LDAP operations and the LDAP URL library [article]

Tiered Application Architecture with Docker Compose, Part 3

Darwin Corn
08 Aug 2016
6 min read
This is the third part in a series that introduces you to basic web application containerization and deployment principles. If you're new to the topic, I suggest reading Part 1 and Part 2. In this post, I attempt to take the training wheels off and focus on using Docker Compose.

Speaking of training wheels, I rode my bike with training wheels until I was six or seven. So in the interest of full disclosure, I have to admit that to a certain degree I'm still riding the containerization wave with my training wheels on. That's not to say I'm not fully using container technology. Before transitioning to the cloud, I had a private registry running on a Git server that my build scripts pushed to and pulled from to automate deployments. Now, we deploy and maintain containers in much the same way as I've detailed in the first two parts in this series, and I take advantage of the built-in registry covered in Part 2. Either way, for our use case, a multi-tiered application architecture was just overkill. Adding to that, when we were still doing contract work, Docker was just getting 1.6 off the ground. Now that I'm working on a couple of projects where this will be a necessity, I'm thankful that Docker has expanded its offerings to include tools like Compose, Machine, and Swarm. This post provides a brief overview of a multi-tiered application setup with Docker Compose, so look for future posts to deal with the latter two. Of course, you can just hold out for a mature Kitematic and do it all in a GUI, but you probably won't be reading this post if that applies to you.

All three of these Docker extensions are relatively new, so this entire post comes with a big disclaimer: even Docker hasn't fully developed these extensions to be production-ready for large or intricate deployments. If you're looking to do that, you're best off holding out for my post on alternative deployment options like CoreOS and Kubernetes. That's beyond the scope of what we're looking at here, so let's get started.

First, you need to install the binary. Since this is Part 3, I'm going to assume that you have the Docker Engine already installed somewhere. If you're on Mac or Windows, the Docker Toolbox you used to install it also contained an option to install Compose. I'm going to assume your daily driver is a Linux box, so these instructions are for Linux. Fortunately, the installation is just a couple of commands: curling the binary from the web and making it executable:

# curl -L https://github.com/docker/compose/releases/download/1.6.2/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
# chmod +x /usr/local/bin/docker-compose
# docker-compose -v

That last command should output version info if you've installed it correctly. For some reason, the linked installation doc thinks you can run that chmod as a regular user. I'm not sure of any distro that lets regular users write to /usr/local/bin, so I ran both commands as root. Docker has its own security issues that are beyond the scope of this series, but I suggest reading about them if you're using this in production. My lazy way around it is to run every Docker-related command elevated, and I'm sure someone will let me have it for that in the comments. It still seems like a better policy than making /usr/local/bin writable by anyone other than root. Now that you have Compose installed, let's look at how to use it to coordinate and deploy a layered application.
I'm abandoning the sample music player of the previous two posts in favor of something that has already separated its functionality, namely the Taiga project. If you're not familiar with it, it's a slick, flat JIRA-killer, and the best part is that it's open source with a thorough installation guide. I've done the heavy lifting, so all you have to do is clone the docker-taiga repo into wherever you keep your source code and get to Composin':

$ git clone https://github.com/ndarwincorn/docker-taiga.git
$ cd docker-taiga

You'll notice a few things. In the root of the app, there's an .env file where you can set all the environment variables in one place. Next, there are two folders with taiga- prefixes. They correspond to the layers of the application, from the Angular frontend to the websocket and backend Django server. Each contains a Dockerfile for building the container, as well as relevant configuration files. There's also a docker-entrypoint-initdb.d folder that contains a shell script that creates the Taiga database when the postgres container is built.

Having covered container creation in Part 1, I'm more concerned with the YAML file in the root of the application, docker-compose.yml. This file coordinates the container/image creation for the application; a full reference can be found on Docker's website. Long story short, the Compose YAML file gives the containers a creation order (databases, backend/websocket, frontend) and links them together, so that ports exposed in each container don't need to be published to the host machine. So, from the root of the application, let's run:

# docker-compose up

Provided there are no errors, you should be able to navigate to localhost:8080 and see your new Taiga deployment! You should be able to log in with the admin user and the password 123123. Of course, there's much more to do: configure automated e-mails, link it to your GitHub organization, configure TLS. I'll leave that as an exercise for you. For now, enjoy your brand-new layered project management application.

Of course, if you're deploying such an application for an organization, you don't want all your eggs in one basket. The next two parts in the series will deal with leveraging Docker tools and alternatives to deploy the application in a clustered, high-availability setup.

About the Author

Darwin Corn is a systems analyst for the Consumer Direct Care Network. He is a mid-level professional with diverse experience in the information technology world.

Rapid Application Development with Django, the Openduty story

Bálint Csergő
01 Aug 2016
5 min read
Openduty is an open source incident escalation tool, something like PagerDuty but free and much simpler. It was born during a hackathon at Ustream back in 2014. The project received a lot of attention in the devops community, and was featured in Devops Weekly and Pycoders Weekly. It is also listed at Full Stack Python as an example Django project. This article covers some design decisions we made during the hackathon, and details some of the main components of the Openduty system.

Design

When we started the project, we already knew what we wanted to end up with:

We had to work quickly (it was a hackathon, after all)
An API similar to PagerDuty
The ability to send notifications asynchronously
A nice calendar to organize on-call schedules (can't hurt anyone, right?)
Tokens for authorizing notifiers

So we chose the corresponding components to reach our goal.

Get the job done quickly

If you have to develop apps rapidly in Python, Django is the framework you choose. It's a bit heavyweight, but hey, it gives you everything you need and sometimes even more. Don't get me wrong; I'm a big fan of Flask also, but it can be a bit fiddly to assemble everything by hand at the start. Flask may pay off later, and you may win on a lower number of dependencies, but we only had 24 hours, so we went with Django.

An API

When it comes to Django and REST APIs, one of the go-to solutions is the Django REST Framework. It has all the nuts and bolts you'll need when you're assembling an API, like serializers, authentication, and permissions. It can even make all your API calls self-describing. Let me show you how serializers work in the REST Framework:

class OnCallSerializer(serializers.Serializer):
    person = serializers.CharField()
    email = serializers.EmailField()
    start = serializers.DateTimeField()
    end = serializers.DateTimeField()

The code above represents a person who is on call on the API. As you can see, it is pretty simple; you just have to define the fields. It even does the validation for you, since you have to give a type to every field. But believe me, it's capable of more good things, like generating a serializer from your Django model:

class SchedulePolicySerializer(serializers.HyperlinkedModelSerializer):
    rules = serializers.RelatedField(many=True, read_only=True)

    class Meta:
        model = SchedulePolicy
        fields = ('name', 'repeat_times', 'rules')

This example shows how you can customize a ModelSerializer, make fields read-only, and accept only the given fields from an API call.

Async task execution

When you have long-running tasks, such as generating huge reports, resizing images, or even transcoding media, it is common practice to move the actual execution out of your webapp into a separate layer. This decreases the load on the web servers, helps avoid long or even timed-out requests, and makes your app more resilient and scalable. In the Python world, the go-to solution for asynchronous task execution is called Celery. In Openduty, we use Celery heavily to send notifications asynchronously, and also to delay the execution of any given notification task by the delay defined in the service settings.
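For context, Celery tasks hang off an application instance. A minimal sketch of creating one follows; the module name and broker URL here are assumptions for illustration, not Openduty's actual configuration:

# celery_app.py - a minimal sketch
from celery import Celery

# Point the app at a message broker; Redis is used here as an example.
app = Celery('openduty', broker='redis://localhost:6379/0')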
Defining a task is this simple:

@app.task(ignore_result=True)
def send_notifications(notification_id):
    try:
        notification = ScheduledNotification.objects.get(id=notification_id)
        if notification.notifier == UserNotificationMethod.METHOD_XMPP:
            notifier = XmppNotifier(settings.XMPP_SETTINGS)
        # choosing other notifiers removed from example code snippet
        notifier.notify(notification)
    except Exception:
        # logging of the task result removed from example snippet
        raise

And calling an already defined task is almost as simple as calling any regular function:

send_notifications.apply_async((notification.id,), eta=notification.send_at)

This means exactly what you think: send the notification with the ID notification.id at notification.send_at. But how do these things get executed? Under the hood, Celery wraps your decorated functions so that when you call them, they get enqueued instead of being executed directly. When the Celery worker detects that there is a task to be executed, it takes it from the queue and executes it asynchronously.

Calendar

We use django-scheduler for the awesome-looking calendar in Openduty. It is a good project in general: it supports recurring events and provides you with a UI for your calendar, so you won't have to fiddle with that.

Tokens and auth

Service token implementation is a simple thing. You want tokens to be unique, and what else would you choose if not a UUID? There is a nice plugin for Django models used to handle UUID fields, called django-uuidfield. It does just what it says: it adds UUIDField support to your models. User authentication is a bit more interesting; we currently support plain Django users, and you can use LDAP as your user provider.

Summary

This was a short summary of the design decisions made when we built Openduty, demonstrating the power of the components through some relevant snippets. If you are on a short deadline, consider using Django and its extensions. There is a good chance that somebody has already done what you need to do, or something similar, which can always be adapted to your needs thanks to the awesome power of the open source community.

About the author

Bálint Csergő is a software engineer from Budapest, currently working as an infrastructure engineer at Hortonworks. He loves Unix systems, PHP, Python, Ruby, the Oracle database, Arduino, Java, C#, music, and beer.

Debugging Your .NET Application

Packt
21 Jul 2016
13 min read
In this article by Jeff Martin, author of the book Visual Studio 2015 Cookbook - Second Edition, we will discuss how modern software development still requires developers to identify and correct bugs in their code. The edit-compile-test cycle is as familiar as a text editor, and now the rise of portable devices has added the need to measure battery consumption and optimize for multiple architectures. Fortunately, our development tools continue to evolve to combat this rise in complexity, and Visual Studio continues to improve its arsenal.

(For more resources related to this topic, see here.)

Multi-threaded code and asynchronous code are probably the two most difficult areas for most developers to work with, and also the hardest to debug when you have a problem like a race condition. A race condition occurs when multiple threads perform an operation at the same time, and the order in which they execute makes a difference to how the software runs or the output that is generated. Race conditions often result in deadlocks, incorrect data being used in other calculations, and random, unrepeatable crashes. The other painful area to debug involves code running on other machines, whether locally or in production. Hooking up a remote debugger in previous versions of Visual Studio has been less than simple, and the experience of debugging code in production was similarly frustrating.

In this article, we will cover the following sections:

Putting Diagnostic Tools to work
Maximizing everyday debugging

Putting Diagnostic Tools to work

In Visual Studio 2013, Microsoft debuted a new set of tools called the Performance and Diagnostics hub. With VS2015, these tools have been revised further and, in the case of Diagnostic Tools, promoted to a central presence on the main IDE window, displayed by default during debugging sessions. This is great for us as developers, because now it is easier than ever to troubleshoot and improve our code. In this section, we will explore how Diagnostic Tools can be used to explore our code, identify bottlenecks, and analyze memory usage.

Getting ready

The changes didn't stop when VS2015 was released; succeeding updates have further refined the capabilities of these tools. So for this section, ensure that Update 2 has been installed on your copy of VS2015. We will be using Visual Studio Community 2015, but of course, you may use one of the premium editions too.

How to do it…

For this section, we will put together a short program that will generate some activity for us to analyze.

Create a new C# Console Application, and give it a name of your choice.

In your project's new Program.cs file, add the following method that will generate a large quantity of strings:

static List<string> makeStrings()
{
    List<string> stringList = new List<string>();
    Random random = new Random();
    for (int i = 0; i < 1000000; i++)
    {
        string x = "String details: " + (random.Next(1000, 100000));
        stringList.Add(x);
    }
    return stringList;
}

Next we will add a second static method that produces an SHA256-computed hash of each string that we generated. This method reads in each string that was previously generated, creates an SHA256 hash for it, and returns the list of computed hashes in hex format:
static List<string> hashStrings(List<string> srcStrings)
{
    List<string> hashedStrings = new List<string>();
    SHA256 mySHA256 = SHA256Managed.Create();
    StringBuilder hash = new StringBuilder();

    foreach (string str in srcStrings)
    {
        byte[] srcBytes = mySHA256.ComputeHash(Encoding.UTF8.GetBytes(str), 0,
            Encoding.UTF8.GetByteCount(str));
        foreach (byte theByte in srcBytes)
        {
            hash.Append(theByte.ToString("x2"));
        }
        hashedStrings.Add(hash.ToString());
        hash.Clear();
    }
    mySHA256.Clear();
    return hashedStrings;
}

After adding these methods, you may be prompted to add using statements for System.Text and System.Security.Cryptography. These are definitely needed, so go ahead and take Visual Studio's recommendation to have them added.

Now we need to update our Main method to bring this all together. Update your Main method to the following:

static void Main(string[] args)
{
    Console.WriteLine("Ready to create strings");
    Console.ReadKey(true);
    List<string> results = makeStrings();
    Console.WriteLine("Ready to Hash " + results.Count() + " strings ");
    //Console.ReadKey(true);
    List<string> strings = hashStrings(results);
    Console.ReadKey(true);
}

Before proceeding, build your solution to ensure everything is in working order. Now run the application in Debug mode (F5), and watch how our program operates. By default, the Diagnostic Tools window will only appear while debugging. Feel free to reposition your IDE windows to make its presence more visible, or use Ctrl + Alt + F2 to recall it as needed.

When you first launch the program, you will see the Diagnostic Tools window appear. Thanks to the first ReadKey method, the program will wait for us to proceed, so we can easily see the initial state: CPU usage is minimal, and memory usage holds constant. Before going any further, click on the Memory Usage tab, and then the Take Snapshot command. This will record the current state of memory usage by our program, and will be a useful comparison point later on.

Having a forced pause through our ReadKey() method is nice, but when working with real-world programs, we will not always have this luxury. Breakpoints are typically used for situations where it is not always possible to wait for user input, so let's take advantage of the program's current state and set two of them: one on the second WriteLine method, and one on the last ReadKey method.

Now return to the open application window, and press a key so that execution continues. The program will stop at the first breakpoint, which is right after it has generated a bunch of strings and added them to our List object. Take another snapshot of the memory usage in the same manner as before. You may notice that the memory usage displayed in the Process Memory gauge has increased significantly.

Now that we have completed our second snapshot, click on Continue in Visual Studio, and proceed to the next breakpoint. The program will then calculate hashes for all of the generated strings, and when this has finished, it will stop at our last breakpoint. Take another snapshot of the memory usage, and take notice of how the CPU usage spiked as the hashes were being calculated. Now that we have these three memory snapshots, we will examine how they can help us.
You may notice how memory usage increases during execution, especially from the initial snapshot to the second. Click on the second snapshot's object delta to open the snapshot details in a new editor window. Click on the Size (Bytes) column to sort by size, and as you may suspect, our List<string> object is indeed the largest object in our program. Of course, given the nature of our sample program, this is fairly obvious, but when dealing with more complex code bases, being able to use this type of investigation is very helpful.

If you would like to know more about the object itself (perhaps there are multiple objects of the same type), you can use the Referenced Types option. If you would like to try this out on the sample program, be sure to set a smaller number in the makeStrings() loop, otherwise you will run the risk of overloading your system.

Returning to the main Diagnostic Tools window, we will now examine CPU utilization. While the program is executing the hashes (feel free to restart the debugging session if necessary), you can observe where the program spends most of its time. Again, it is probably no surprise that most of the hard work was done in the hashStrings() method. But when dealing with real-world code, it will not always be so obvious where the slowdowns are, and having this type of insight into your program's execution will make it easier to find areas requiring further improvement. When using the CPU profiler in our example, you may find it easier to remove the first breakpoint and simply trigger profiling by clicking on Break All.

How it works...

Microsoft wanted more developers to be able to take advantage of their improved technology, so they have increased its availability beyond the Professional and Enterprise editions to also include Community. Running your program within VS2015 with the Diagnostic Tools window open lets you examine your program's performance in great detail. By using memory snapshots and breakpoints, VS2015 provides you with the tools needed to analyze your program's operation and determine where you should spend your time making optimizations.

There's more…

Our sample program does not perform a wide variety of tasks, but real-world programs usually do. To further assist with analyzing those programs, there is a third option available to you beyond CPU Usage and Memory Usage: the Events tab. The Events tab provides the ability to search events for interesting (or long-running) activities. Different event types include file activity, gestures (for touch-based apps), and program modules being loaded or unloaded.

Maximizing everyday debugging

Given the frequency of debugging, any refinement to these tools can pay immediate dividends. VS2015 brings the popular Edit and Continue feature into the 21st century by supporting 64-bit code. Added to that is the new ability to see the return value of functions in your debugger. These features combine to make debugging code easier, allowing you to solve problems faster.

Getting ready

For this section, you can use VS2015 Community or one of the premium editions. Be sure to run your choice on a machine using a 64-bit edition of Windows, as that is what we will be demonstrating in this section.
Don't worry, you can still use Edit and Continue with 32-bit C# and Visual Basic code.

How to do it…

Both features are now supported by C# and VB, but we will be using C# for our examples. The features being demonstrated are compiler features, so feel free to use code from one of your own projects if you prefer. To see how Edit and Continue can benefit 64-bit development, perform the following steps.

Create a new C# Console Application using the default name.

To ensure the demonstration runs with 64-bit code, we need to change the default solution platform. Click on the drop-down arrow next to Any CPU, and select Configuration Manager... When the Configuration Manager dialog opens, we can create a new project platform targeting 64-bit code. To do this, click on the drop-down menu for Platform, and select <New...>. This presents the New Project Platform dialog box; select x64 as the new platform type. Once x64 has been selected, you will return to Configuration Manager. Verify that x64 remains active under Platform, and then click on Close. The main IDE window will now indicate that x64 is active.

With the project settings out of the way, let's add some code to demonstrate the new behavior. Replace the existing code in your blank class file so that it looks like the following listing:

class Program
{
    static void Main(string[] args)
    {
        int w = 16;
        int h = 8;
        int area = calcArea(w, h);
        Console.WriteLine("Area: " + area);
    }

    private static int calcArea(int width, int height)
    {
        return width / height;
    }
}

Let's set some breakpoints so that we are able to inspect during execution. First, add a breakpoint to the Main method's Console line. Add a second breakpoint to the calcArea method's return line. You can do this by either clicking on the left side of the editor window's border, or by right-clicking on the line and selecting Breakpoint | Insert Breakpoint. If you are not sure where to click, use the right-click method, and then practice toggling the breakpoint by left-clicking on the breakpoint marker. Feel free to use whatever method you find most convenient.

With the breakpoints set, let's debug the program. Begin debugging by either pressing F5 or clicking on the Start button on the toolbar. Once debugging starts, the program will quickly execute until stopped by the first breakpoint.

Let's first take a look at Edit and Continue. Visual Studio will stop at the calcArea method's return line. Astute readers will notice an error in the calculation, as the area value returned should be width * height. Make the correction. Before continuing, note the variables listed in the Autos window. (If you don't see Autos, it can be made visible by pressing Ctrl + D, A, or through Debug | Windows | Autos while debugging.)

After correcting the area calculation, advance the debugging step by pressing F10 twice. (Alternatively, make the advancement by selecting the menu item Debug | Step Over twice.) Visual Studio will advance to the declaration for the area. Note that you were able to edit your code and continue debugging without restarting.
The Autos window will update to display the function's return value, which is 128 (the value for area has not been assigned yet at this point; Step Over once more if you would like to see it assigned).

There's more…

Programmers who write C++ have long had the ability to see the return values of functions; this just brings .NET developers into the fold. The result is that your development experience won't have to suffer based on the language you have chosen for your project.

The Edit and Continue functionality is also available for ASP.NET projects. New projects created in VS2015 will have Edit and Continue enabled by default. Existing projects imported into VS2015 will usually need this to be enabled if it hasn't been done already. To do so, open the Options dialog via Tools | Options, and look for the Debugging | General section. Whether you are working with an ASP.NET project or a regular C#/VB .NET application, you can verify that Edit and Continue is set via this location.

Summary

In this article, we examined the improvements to the debugging experience in Visual Studio 2015, and how they can help you diagnose the root cause of a problem faster so that you can fix it properly, and not just patch over the symptoms.

Resources for Article:

Further resources on this subject:
Creating efficient reports with Visual Studio [article]
Connecting to Microsoft SQL Server Compact 3.5 with Visual Studio [article]