
Tech Guides - Programming

81 Articles

5 reasons to choose Kotlin over Java

Richa Tripathi
30 Apr 2018
3 min read
Java has long been the jack of all trades in application development, giving Java developers little reason to look elsewhere. However, things have changed with the steady evolution of Kotlin. No longer just "the other JVM language", Kotlin has grown dramatically in prominence. So, what makes this language stand out, and why is it growing in adoption for application development? What are the benefits of Kotlin vs Java, and how can it help developers? In this article, we look at the top 5 reasons why Kotlin takes a superior stand over Java and why it will work best for your next development project.

Kotlin is more concise

Kotlin is far more concise than Java in many cases, solving the same problems with fewer lines of code. This improves code maintainability and readability, meaning engineers can write, read, and change code more effectively and efficiently. Kotlin features such as type inference, smart casts, data classes, and properties help achieve this conciseness.

Kotlin's null-safety is great

NullPointerExceptions are a huge source of frustration for Java developers. Java allows you to assign null to any variable, but if you try to use an object reference that has a null value, brace yourself for a NullPointerException! Kotlin's type system is designed to eliminate NullPointerExceptions from your code. It helps you avoid null pointer exceptions by simply refusing to compile code that tries to assign or return null where a non-nullable type is expected.

Combine the best of functional and procedural programming

Each programming paradigm has its own set of pros and cons, and combining the power of both functional and procedural programming leads to better development and output. Kotlin supports many useful features, including higher-order functions, lambda expressions, operator overloading, and lazy evaluation. Drawing on the strengths and weaknesses of both styles, Kotlin offers an inexpensive and intuitive coding style.

The power of Kotlin's extension functions

Kotlin's extensions are very useful because they allow developers to add methods to classes without modifying their source code. You can add methods on a per-use basis to classes, extending the functionality of existing classes without inheriting from them.

Interoperability with Java

When debating Kotlin vs Java, there is always a third option: use them both. Despite all their differences, Kotlin and Java are 100% interoperable, so you can literally continue working on your old Java projects using Kotlin. You can call Kotlin code from Java, and you can call Java code from Kotlin. It's possible to have Kotlin and Java classes side by side within the same project, and everything will still compile.

Undoubtedly, Kotlin has brought many positive changes to the long-established and widely used Java world. It helps you write safer code, because with less work it's possible to write more reliable code, making the life of programmers a lot easier. Kotlin is a genuinely good alternative to Java, and as more advanced features are added to its ecosystem, its popularity will only continue to grow.

Also read:
Why are Android developers switching from Java to Kotlin?
Getting started with Kotlin programming


Tech hype cycles: do they deserve your attention?

Richard Gall
30 Apr 2018
6 min read
Hype cycles are an integral aspect of modern technology. They tell us the story of a specific technology and how it fits into a given context. This context is usually professional, but it is sometimes social and cultural. They are also able to show us how the use of something has changed. They illustrate when something was adopted, when it grew, and perhaps when it began to decline. True, this might seem superfluous or superficial. But that perhaps explains why we often fail to pay much attention to them. Instead of focusing on the cycle, and the wider context of how and why something is being used, we get distracted by the details of whatever is being hyped.

"Hype cycles allow us to see past hype."

But hype cycles, or hype curves, can help us to make better sense of the technology at our disposal. They allow you to see past the hype. Rather than following the trends or buzzwords that fashion places on a pedestal at any given moment, you're always able to see those trends and buzzwords in context. For example, instead of simply moving from big data to AI, or from cloud to edge, you can see how different technologies and trends fit together. You can begin to observe how things are impacting one another. Hype cycles allow you to see how software changes trends, and then how trends change industries. It's not always easy to see how the code you're writing fits into the big picture - but hype cycles are a good way of helping you get a better sense of it.

The history of the tech hype cycle

According to this Wired article from 2012, the term 'hype cycle' has been around since 1995. But the idea of a hype cycle was taken up by research organization Gartner and became central to the way they presented changes across the tech landscape. The first Gartner hype cycle report was released in 1999. Written by Alexander Drobik, the report predicted the end of the dot com bubble at the beginning of the new millennium. It's important to note that Drobik hadn't simply predicted the end of a trend - he had predicted what's called a trough of disillusionment within the 'hype cycle' of, well, the internet (perhaps the ultimate hype cycle). Let's look at what the cycle looks like in detail.

What does the tech hype cycle look like?

Gartner is the organization that popularized the concept of the hype cycle, but we've created our own example of what it looks like (see the hype cycle diagram in the original post). Let's break down each of these points in the hype cycle in a bit more detail.

Technology trigger

This is the initial breakthrough. It's an exciting time when researchers or engineers discover a new way of doing something. It's more the possibility of disruption than actual disruption. This is often the time when the press - and investors - get excited.

Peak of inflated expectations

This is when everyone gets really excited about the possibility of disruption. This period can be characterized by the sentence "This changes everything." It's the period when everyone talks about transformation but nothing has really transformed yet. True, the new technology might have worked somewhere, but there are lots of projects that never get off the ground, and a few that have simply failed.

Trough of disillusionment

This is the hangover everyone goes through after getting drunk on inflated expectations. It begins with 'Why X isn't working' pieces in the press, which gradually give way to silence. Technologies or trends seem to fade into relative insignificance.
Slope of enlightenment

Now the hype has died down, technologies are applied with more serious consideration. Arguably the trough of disillusionment is an important period of reflection about what works and what doesn't. This allows businesses and organizations to apply technologies in a more effective way during this 'enlightened' period. In essence, this time is about experimentation and learning. True, there might be some humility here, which is probably a good thing after the earlier inflated expectations.

Plateau of productivity

This is where enlightenment turns into stability. Ways of using a particular technology become established within an industry. It becomes mainstream. Perhaps the benefits to customers are now being felt more readily, which makes it easier to calculate just how valuable something might be.

The hype cycle is a framework that explains how technologies become popular and gradually more mainstream. Of course, there are some technologies that don't quite follow this trajectory - what happens, for example, when things simply never take off? Some technologies get stuck in the trough of disillusionment. If they can never really give us the full picture, are hype cycles actually nothing more than a load of hype?

Are hype cycles just a load of hype?

Although hype cycles are useful in outlining how technologies are adopted and mature, they do, of course, have some limitations. Gartner has a stake in actually selling the concept to you. Its business is based on being an authoritative and invaluable source of tech insight. This means Gartner needs you (or maybe your boss) to think that hype cycles are a recurring pattern across all technology. Similarly, the people who write about technology and sell it have a vested interest in hype cycles. They might not realize it, but the need to 'tell a story' about how or why something is important - why something is 'transformative' - feeds into the concept that Gartner has successfully monetized.

But that doesn't mean tech hype cycles should simply be ignored. They might well be artificial and lacking in any quantitative rigour, but we ignore the hype cycle at our peril. This is because the way we - the press, industry leaders, and tech communities - talk about technology plays an important part in how technologies and trends are adopted. We need to take a somewhat ironic approach to hype cycles. That means we need to recognise that while part of it is a bit of a charade, it's a charade that is pretty much inescapable. Trends and technology can't exist outside of these systems. Things only ever become popular when they're visible and when they're being talked about. Hype cycles give us a framework for understanding how technology is talked about.

Read next: What is AIOps and why is it going to be important?


20 ways to describe programming in 5 words

Richard Gall
25 Apr 2018
3 min read
How would you describe programming? Can you describe programming in 5 words? It's pretty difficult. Even explaining it in a basic and straightforward way can be challenging. You type stuff... and then it turns into something else or makes something happen. Or, as is often the case, something doesn't happen. Twitter account @abstractionscon asked its followers "what 5 words best describe programming?" The results didn't disappoint. There was a mix of funny, slightly tragic, and even poetic evocations and descriptions of what programming is and what it feels like. It turns out that more often than not, it simply feels frustrating. Things go wrong a lot. One of the most interesting aspects of the conversation was how it brings to light just how challenging it is to put programming into language. That's reflected in many of the responses to the original tweet. One of the conclusions we can probably draw from this is that not only is describing programming pretty hard, it's also pretty funny. And from that, perhaps it's also true that programming is generally a pretty funny thing to do. But then why would that be surprising? You learn from an early age that getting a computer to do what you want is difficult, so why should writing software be any different? Take a look at some of the best attempts to describe programming below. Which is your favourite? And how would you describe programming? https://twitter.com/alicegoldfuss/status/988818057219854336 https://twitter.com/jennschiffer/status/988849269552578560 https://twitter.com/lindseybieda/status/988941397544890368 https://twitter.com/sarahmei/status/988600171075268608 https://twitter.com/tef_ebooks/status/988752549552578560 https://twitter.com/jckarter/status/988828156386684928 https://twitter.com/cassidoo/status/988920470907961344 https://twitter.com/kelseyhightower/status/988646191679209472 https://twitter.com/francesc/status/988653691669446658 https://twitter.com/shanselman/status/988919759377915904 https://twitter.com/chriseng/status/988674723516207104 https://twitter.com/EricaJoy/status/988649667914186755 https://twitter.com/brianleroux/status/988628362355773440 https://twitter.com/ftrain/status/988759827731148800 https://twitter.com/jbeda/status/988634633087545344 https://twitter.com/kamal/status/988749873347375104 https://twitter.com/fatih/status/988695353171030016 https://twitter.com/innesmck/status/989067129432498176 https://twitter.com/franckverrot/status/988611564168036352 https://twitter.com/dewitt/status/988609620536053760 Thank you Twitter for your insights and jokes. It does make you feel better to know that there are millions of people out there with the same frustrations and software-induced high blood pressure. The next time something goes wrong remember you're really just meat teaching sand to think. Hopefully that should put everything into perspective. Read more: Slow down to learn how to code faster


What is Mob Programming?

Pavan Ramchandani
24 Apr 2018
4 min read
Mob Programming is a programming paradigm that is an extension of Pair Programming. The difference is actually quite straightforward: if in Pair Programming engineers work in pairs, in Mob Programming the whole 'mob' of engineers works together. That mob might even include project managers and DevOps engineers. Like any good mob, it can get rowdy, but it can also get things done when you're all focused on the same thing.

What is Mob Programming?

The most common definition of this approach, given by Woody Zuill (the self-proclaimed father of Mob Programming), is as follows: "All the team members working on the same thing, at the same time, in the same space, and on the same computer."

Here are the key principles of Mob Programming:

The team comes together in a meeting room with a set task due for the day. This group working together is called the mob.
The entire code is developed on a single system.
Only one member is allowed to operate the system. This means only the Driver can write the code or make any changes to it.
The other members are called Navigators, and the expert among them for the problem at hand guides the Driver in writing the code.
Everyone keeps switching roles, meaning no one person is at the system all the time.
The session ends when all aspects of the task have been successfully completed.

The Mob Programming strategy

The success of Mob Programming depends on the collaborative nature of the developers coming together to form the mob. A group of 5-6 members makes a good mob. For a productive session, each member needs to be familiar with software development concepts like testing, design patterns, and the software development life cycle, among others. A project manager can encourage the team to take the Mob Programming approach in order to make the early stage of software development stress-free. Anyone stuck at a point in the problem will have Navigators who can bring in their expertise and keep the project moving.

The advantages of Mob Programming

Mob Programming might make you nervous about performing in a group. But the outcomes have shown that it tends to make work stress-free and almost error-free, since there are multiple opinions in the room. The ground rules ensure that no single person stays at the keyboard writing code longer than the others. This reduces the grunt work and provides the opportunity to switch to a different role in the mob. This trait really challenges and intrigues individuals to contribute to the project by using their creativity.

Criticisms of Mob Programming

Mob Programming is about removing the communication barriers within a team. However, when the dynamics of some members are different, the session can turn into a few active members dictating the terms for the task at hand. Many developers out there are set in their own ways. When asked to work on a task or project at the same time, a conflict of interest might occur. Some developers might not participate to their full capacity, and this might lead to the work being sub-standard.

To do Mob Programming well, you need a good mob

Mob Programming is a modern approach to software development and comes with its own set of pros and cons. The productivity and fruitfulness of the approach lies in the credibility and dynamics of the members, not in the nature of the problem at hand. Hence the potential of this approach can be leveraged for solving difficult problems, given the right mob to deal with them.
More on programming paradigms: What is functional reactive programming? What is the difference between functional and object oriented programming?


What is the Reactive Manifesto?

Packt Editorial Staff
17 Apr 2018
3 min read
The Reactive Manifesto is a document that defines the core principles of reactive programming. It was first released in 2013 by a group of developers led by Jonas Boner (you can find him on Twitter: @jboner). Jonas explained the reasons behind the manifesto in a blog post:

"Application requirements have changed dramatically in recent years. Both from a runtime environment perspective, with multicore and cloud computing architectures nowadays being the norm, as well as from a user requirements perspective, with tighter SLAs in terms of lower latency, higher throughput, availability and close to linear scalability. This all demands writing applications in a fundamentally different way than what most programmers are used to."

A number of high-profile programmers signed the Reactive Manifesto. Some of the names behind it include Erik Meijer, Martin Odersky, Greg Young, Martin Thompson, and Roland Kuhn. A second, updated version of the Reactive Manifesto was released in 2014; to date more than 22,000 people have signed it.

The Reactive Manifesto underpins the principles of reactive programming

You can think of it as the map to the treasure of reactive programming, or as the bible for the programmers of the reactive programming religion. Everyone starting with reactive programming should read the manifesto to understand what reactive programming is all about and what its principles are.

The 4 principles of the Reactive Manifesto

Reactive systems must be responsive. The system should respond in a timely manner. Responsive systems focus on providing rapid and consistent response times, so they deliver a consistent quality of service.

Reactive systems must be resilient. If the system faces any failure, it should stay responsive. Resilience is achieved by replication, containment, isolation, and delegation. Failures are contained within each component, isolating components from each other, so that when a failure occurs in one component, it does not affect the other components or the system as a whole.

Reactive systems must be elastic. Reactive systems can react to changes and stay responsive under varying workload. They achieve elasticity in a cost-effective way on commodity hardware and software platforms.

Reactive systems must be message driven. In order to establish the resilience principle, reactive systems need to establish a boundary between components by relying on asynchronous message passing.

Those are the core principles behind reactive programming put forward by the manifesto. But there's something else that supports the thinking behind reactive programming: the standard specification on Reactive Streams.

Reactive Streams standard specifications

Everything in the reactive world is accomplished with the help of Reactive Streams. In 2013, Netflix, Pivotal, and Lightbend (previously known as Typesafe) felt the need for a standard specification for Reactive Streams, as reactive programming was beginning to spread and more reactive frameworks were starting to emerge. They started the initiative that resulted in the Reactive Streams standard specification, which is now being implemented across various frameworks and platforms. You can take a look at the Reactive Streams standard specification here.

This post has been adapted from Reactive Programming in Kotlin. Find it on the Packt store here.
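The manifesto itself is language-agnostic, so the "message driven" principle can be illustrated in any language. The sketch below is not from the article or the book; it simply uses Python's asyncio, with invented component names, to show the idea: two components share no state and interact only through asynchronous messages passed over a queue.

```python
import asyncio

# A minimal illustration of the "message driven" principle: two components
# that never call each other directly, but communicate only by passing
# messages asynchronously through a queue.

async def producer(queue: asyncio.Queue) -> None:
    for i in range(5):
        await queue.put(f"event-{i}")   # send a message without blocking the loop
        await asyncio.sleep(0.1)
    await queue.put(None)               # sentinel: no more messages

async def consumer(queue: asyncio.Queue) -> None:
    while True:
        message = await queue.get()     # react only when a message arrives
        if message is None:
            break
        print("processed", message)

async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue()
    await asyncio.gather(producer(queue), consumer(queue))

if __name__ == "__main__":
    asyncio.run(main())
```

Because the boundary between the two components is the queue, a failure or slowdown in one does not require the other to know anything about it - which is exactly the kind of isolation the manifesto is after.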


Concurrency programming 101: Why do programmers hang by a thread?

Aarthi Kumaraswamy
03 Apr 2018
4 min read
A thread can be defined as an ordered stream of instructions that can be scheduled to run as such by the operating system. Threads typically live within processes, and consist of a program counter, a stack, and a set of registers, as well as an identifier. These threads are the smallest unit of execution to which a processor can allocate time.

Threads are able to interact with shared resources, and communication is possible between multiple threads. They are also able to share memory, and to read and write different memory addresses, but therein lies an issue. When two threads start sharing memory, and you have no way to guarantee the order of a thread's execution, you could start seeing issues or minor bugs that give you the wrong values or crash your system altogether. These issues are primarily caused by race conditions, an important topic for another post. (The original post includes a figure showing how multiple threads can exist on multiple different CPUs.)

Types of threads

Within a typical operating system, we typically have two distinct types of threads:

User-level threads: threads that we can actively create, run, and kill for all of our various tasks
Kernel-level threads: very low-level threads acting on behalf of the operating system

Python works at the user level, and thus everything we cover here will be primarily focused on user-level threads.

What is multithreading?

When people talk about multithreaded processors, they are typically referring to a processor that can run multiple threads simultaneously, which it does by utilizing a single core that is able to switch context between multiple threads very quickly. This context switching takes place in such a small amount of time that we could be forgiven for thinking that multiple threads are running in parallel when, in fact, they are not.

When trying to understand multithreading, it's best to think of a multithreaded program as an office. In a single-threaded program, there would only be one person working in this office at all times, handling all of the work in a sequential manner. This would become an issue if we consider what happens when this solitary worker becomes bogged down with administrative paperwork and is unable to move on to different work. They would be unable to cope, and wouldn't be able to deal with new incoming sales, thus costing our metaphorical business money.

With multithreading, our single solitary worker becomes an excellent multi-tasker, and is able to work on multiple things at different times. They can make progress on some paperwork, and then switch context to a new task when something starts preventing them from doing further work on said paperwork. By being able to switch context when something is blocking them, they are able to do far more work in a shorter period of time, and thus make our business more money.

In this example, it's important to note that we are still limited to only one worker or processing core. If we wanted to improve the amount of work the business could do and complete work in parallel, then we would have to employ other workers, or processes as we would call them in Python.
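To make the office analogy concrete, here is a small sketch that is not from the book excerpt: the task function and timings are invented, and each task simulates blocking I/O with time.sleep. Several threads can overlap their waiting, so the whole batch finishes in roughly the time of one task rather than the sum of all of them.

```python
import threading
import time

def handle_request(task_id: int) -> None:
    # Simulate a blocking I/O operation (for example, a network call).
    print(f"task {task_id}: waiting on I/O")
    time.sleep(1)
    print(f"task {task_id}: done")

start = time.time()

# One "worker" per task; while one thread waits on I/O, the others can run.
threads = [threading.Thread(target=handle_request, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Roughly 1 second in total instead of ~5 seconds done sequentially.
print(f"elapsed: {time.time() - start:.2f}s")
```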
Let's see a few advantages of threading: Multiple threads are excellent for speeding up blocking I/O bound programs They are lightweight in terms of memory footprint when compared to processes Threads share resources, and thus communication between them is easier There are some disadvantages too, which are as follows: CPython threads are hamstrung by the limitations of the global interpreter lock (GIL), about which we'll go into more depth in the next chapter. While communication between threads may be easier, you must be very careful not to implement code that is subject to race conditions It's computationally expensive to switch context between multiple threads. By adding multiple threads, you could see a degradation in your program's overall performance. This is an excerpt from the book, Learning Concurrency in Python by Elliot Forbes. To know how to deal with issues such as deadlocks and race conditions that go hand in hand with concurrent programming be sure to check out the book.     
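The race condition warning in the list above is easy to demonstrate. The sketch below is not from the book excerpt; the counter and thread count are invented for illustration. Two versions of the same loop update a shared counter: the unlocked version's read-modify-write steps can interleave and lose updates (how often depends on the interpreter and scheduler), while holding a threading.Lock around the update always gives the expected total.

```python
import threading

ITERATIONS = 100_000
counter = 0                      # shared state
lock = threading.Lock()

def unsafe_increment() -> None:
    global counter
    for _ in range(ITERATIONS):
        # Read-modify-write is not atomic: another thread can run between
        # the read and the write, so some updates may be lost.
        counter = counter + 1

def safe_increment() -> None:
    global counter
    for _ in range(ITERATIONS):
        with lock:               # serialize access to the shared counter
            counter = counter + 1

def run(worker) -> int:
    global counter
    counter = 0
    threads = [threading.Thread(target=worker) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

print("without lock:", run(unsafe_increment), "expected", 4 * ITERATIONS)
print("with lock:   ", run(safe_increment), "expected", 4 * ITERATIONS)
```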

Slow down to learn how to code faster

Packt Editorial Staff
29 Mar 2018
6 min read
Nowadays, it seems like everyone wants to do things faster. We want to pay without taking out a credit card or cash. Social media lets us share images and videos from our lives in a split second. And we get frustrated if Netflix takes more than 3 seconds to start streaming our latest TV show series binge. However, if you want to learn how to code faster, I'm going to present an odd idea: go slower. This has been taken from Skill Up: A Software Developer's Guide to Life and Career by Jordan Hudgens. This may seem like a counterintuitive concept. After all, don't coding bootcamps, even DevCamp where I teach, tell you how you can learn how to code in a few months? Well yes, and research shows that 8 weeks is a powerful number when it comes to learning. The Navy Seal training program specifically chose 8 weeks as its timeframe for conditioning candidates. And if you search the web for the phrase 8 Week Training programs, you'll find courses ranging from running 10ks to speaking Spanish fluently. So yes, I'm huge believer that individuals can learn an incredible amount of information in a short period of time. But what I'm talking about here is becoming more deliberate when it comes to learning new information. Learn how to code faster If you're like me, when you learn a new topic the first thing you'll do is either move onto the next topic or repeat the concept as quickly as humanly possible. For example, when I learn a new Ruby or Scala programming method I'll usually jump right into using it in as many different situations as possible. However, I've discovered that this may not be the best approach because it's very short-sighted. Your default mind is stopping you from coding faster When it comes to learning how to code faster, one of the most challenging requirements is moving knowledge from our short-term memory to our long-term memory. Remember the last time you learned a programming technique. Do you remember how easy it felt when you repeated what the instructor taught? The syntax seemed straightforward and it probably seemed like there was no way you would forget how to implement the feature. But after a few days, if you try to rebuild the component, is it easy or hard? If you're like me, the concept that seemed incredibly easy only a few days ago now causes you to draw a blank. But don't worry. This doesn't mean that we're incompetent. Instead, it means that this piece of knowledge wasn't given the chance to move from our short-term to our long-term memory. Hacking the mind So, if our default mindset is to forget what we've learned after a few days (or a few minutes), how can we learn anything? This is where our brain's default programming comes into play and where we can hack the way that we learn. I'm currently teaching myself the TypeScript programming language. TypeScript is the language that's recommended for Angular 2 development, so I thought it would be a good next language to learn. However, instead of taking my default approach, which is to slam through training guides and tutorials, I'm taking a more methodical approach. Slowing it down To learn TypeScript, I'm going through a number of different books and videos. And as I follow along with the guides, as soon as I learn a new topic I completely stop. I'll stand up. Write the new component on one of my whiteboards. And actually, write the program out by hand. After that, I type the program out on the keyboard… very slowly. So slowly that I know I could go around 4-5x faster. 
But by taking this approach I'm forcing my mind to think about the new concept instead of rushing through it. When it comes to working with our long-term memory, this approach is more effective than simply flying through a concept because it forces our minds to think through each keystroke. That means when it comes to actually writing code, it will come much more naturally to you. Bend it like Beethoven I didn't learn this technique from another developer. Instead, I heard about how one of the most successful classical music institutions in the world, the Meadowmount School of Music in New York, taught students new music compositions. As a game, the school gives out portions of the sheet music. So, where most schools will give each student the full song, Meadowmount splits the music up into pieces. From there, it hands each student a single piece for them to focus on. From that point, the student will only learn to place that single piece of music. They will start out very slowly. They won't rush through notes because they don't even know how they fit into the song. This approach teaches them how to concentrate on learning a new song one note at a time. From that point, the students trade note cards and then focus on learning another piece of the song. They continue with trading cards until each student has been able to work through the entire set of cards. By forcing the students to break a song into pieces they no longer will have any weak points in a song. Instead, the students will have focused on the notes themselves. From this point, it's trivial for all the students in the class to combine their knowledge and learn how to play the song all the way through. From classical music to coding So, can this approach help you learn how to code faster? I think so. The research shows that by slowing down and breaking concepts into small pieces, it's easier for students to transfer information from the short-term to long-term memory. So, the next time you are learning a coding concept, take a step back. Instead of simply copying what the instructor is teaching, write it down on a piece of paper. Walk through exactly what is happening in a program. If you take this approach, you will discover that you're no longer simply following a teacher's set of steps, but that you'll actually learn how the concepts work. And if you get to the stage of understanding, you will be ready to transfer that knowledge to your long-term memory and remember it for good.


Should you move to Python 3? 7 Python experts' opinions

Richard Gall
29 Mar 2018
9 min read
Python is one of the most used programming languages on the planet. But when something is so established and popular across a number of technical domains the pace of change slows. Moving to Python 3 appears to be a challenge for many development teams and organizations. So, is switching to Python 3 worth the financial investment, the training and the stress? Mike Driscoll spoke to a number of Python experts about whether developers should move to Python 3 for Python Interviews, a book that features 20 interviews with leading Python programmers and community contributors. The transition to Python 3 can be done gradually Brett Cannon (@brettsky), Python core developer and Principal Software Developer at Microsoft: As someone who helped to make Python 3 come about, I'm not exactly an unbiased person to ask about this. I obviously think people should make the switch to Python 3 immediately, to gain the benefits of what has been added to the language since Python 3.0 first came out. I hope people realize that the transition to Python 3 can be done gradually, so the switch doesn't have to be abrupt or especially painful. Instagram switched in nine months, while continuing to develop new features, which shows that it can be done. Anyone starting out with Python should learn Python 3 Steve Holden (@HoldenWeb), CTO of Global Stress Index and former chairman and director of The PSF: Only when they need to. There will inevitably be systems written in 2.7 that won't get migrated. I hope that their operators will collectively form an industry-wide support group, to extend the lifetimes of those systems beyond the 2020 deadline for Python-Dev support. However, anyone starting out with Python should clearly learn Python 3 and that is increasingly the case. Python 3 resolves a lot of inconsistencies Glyph Lefkowitz (@glyph), founder of Twisted, a Python network programming framework, awarded The PSF’s Community Service Award in 2017: I'm in Python 3 in my day job now and I love it. After much blood, sweat and tears, I think it actually is a better programming language than Python 2 was. I think that it resolves a lot of inconsistencies. Most improvements should mirror quality of life issues and the really interesting stuff going on in Python is all in the ecosystem. I absolutely cannot wait for a PyPy 3.5, because one of the real downsides of using Python 3 at work is that I now have to deal with the fact that all of my code is 20 times slower. When I do stuff for the Twisted ecosystem, and I run stuff on Twisted's infrastructure, we use Python 2.7 as a language everywhere, but we use PyPy as the runtime. It is just unbelievably fast! If you're running services, then they can run with a tenth of the resources. A PyPy process will take 80 MB of memory, but once you're running that it will actually take more memory per interpreter, but less memory per object. So if you're doing any Python stuff at scale, I think PyPy is super interesting. One of my continued bits of confusion about the Python community is that there's this thing out there which, for Python 2 anyway, just makes all of your code 20 times faster. This wasn't really super popular, in fact PyPy download stats still show that it's not as popular as Python 3, and Python 3 is really experiencing a huge uptick in popularity. I do think that given that the uptake in popularity has happened, the lack of a viable Python 3 implementation for PyPy is starting to hurt it quite a bit. 
But it was around and very fast for a long time before Python 3 had even hit 10% of PyPy's downloads. So I keep wanting to predict that this is the year of PyPy on the desktop, but it just never seems to happen. Most actively maintained libraries support Python 3 Doug Hellmann (@doughellmann), the man behind Python Module of the Week and a fellow of The PSF: The long lifetime for Python 2.7 recognizes the reality that rewriting functional software based on backwards-incompatible upstream changes isn't a high priority for most companies. I encourage people to use the latest version of Python 3 that is available on their deployment platform for all new projects. I also advise them to carefully reconsider porting their remaining legacy applications now that most actively maintained libraries support Python 3. Migration from Python 2 to 3 is difficult Massimo Di Pierro (@mdipierro), Professor at the School of Computing at De Paul University in Chicago and creator of web2py, an open source web application framework written in Python: Python 3 is a better language than Python 2, but I think that migration from Python 2 to Python 3 is difficult. It cannot be completely automated and often it requires understanding the code. People do not want to touch things that currently work. For example, the str function in Python 2 converts to a string of bytes, but in Python 3, it converts to Unicode. So this makes it impossible to switch from Python 2 to Python 3, without actually going through the code and understanding what type of input is being passed to the function, and what kind of output is expected. A naïve conversion may work very well as long as you don't have any strange characters in your input (like byte sequences that do not map into Unicode). When that happens, you don't know if the code is doing what it was supposed to do originally or not. Consider banks, for example. They have huge codebases in Python, which have been developed and tested over many years. They are not going to switch easily because it is difficult to justify that cost. Consider this: some banks still use COBOL. There are tools to help with the transition from Python 2 to Python 3. I'm not really an expert on those tools, so a lot of the problems I see may have a solution that I'm not aware of. But I still found that each time I had to convert code, this process was not as straightforward as I would like. The divide between the worlds of Python 2 and 3 will exist well beyond 2020 Marc-Andre Lemburg (@malemburg), co-founder of The PSF and CEO of eGenix: Yes, you should, but you have to consider the amount of work which has to go into a port from Python 2.7 to 3.x. Many companies have huge code bases written for Python 2.x, including my own company eGenix. Commercially, it doesn't always make sense to port to Python 3.x, so the divide between the two worlds will continue to exist well beyond 2020. Python 2.7 does have its advantages because it became the LTS version of Python. Corporate users generally like these long-term support versions, since they reduce porting efforts from one version to the next. I believe that Python will have to come up with an LTS 3.x version as well, to be able to sustain success in the corporate world. Once we settle on such a version, this will also make a more viable case for a Python 2.7 port, since the investment will then be secured for a good number of years. 
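Di Pierro's example above of the changed str behaviour is one of the most common porting pitfalls. The short sketch below is not from the interviews; it simply shows, in Python 3, how the text/bytes split he describes surfaces in practice: strings are Unicode, byte sequences are a separate type, and mixing them or decoding arbitrary bytes fails loudly instead of silently.

```python
# Python 3: str is Unicode text, bytes is a separate binary type.
text = "naïve"
data = text.encode("utf-8")      # explicit conversion: str -> bytes
print(type(text), type(data))    # <class 'str'> <class 'bytes'>

# In Python 2, mixing the two would often just work; Python 3 refuses to guess.
try:
    text + data                  # concatenating str and bytes is an error
except TypeError as err:
    print("TypeError:", err)

# Byte sequences that don't map to the expected encoding fail explicitly.
try:
    b"\xff\xfe\xfa".decode("utf-8")
except UnicodeDecodeError as err:
    print("UnicodeDecodeError:", err)
```

This is why a naïve conversion of a Python 2 codebase can appear to work until the first unusual input arrives, which is exactly the risk Di Pierro points to.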
Python 3 has tons of amazing new features Barry Warsaw (@pumpichank), member of the Python Foundation team at LinkedIn, former project leader of GNU Mailman: We all know that we've got to get on Python 3, so Python 2's life is limited. I made it a mission inside of Ubuntu to try to get people to get on Python 3. Similarly, within LinkedIn, I'm really psyched, because all of my projects are on Python 3 now. Python 3 is so much more compelling than Python 2. You don't even realize all of the features that you have in Python 3. One of the features that I think is really awesome is the async I/O library. I'm using that in a lot of things and think it is a very compelling new feature, that started with Python 3.4. Even with Python 3.5, with the new async keywords for I/O-based applications, asyncio was just amazing. There are tons of these features that once you start to use them, you just can't go back to Python 2. It feels so primitive. I love Python 3 and use it exclusively in all of my personal open source projects. I find that dropping back to Python 2.7 is often a chore, because so many of the cool things you depend on are just missing, although some libraries are available in Python 2 compatible back ports. I firmly believe that it's well past the time to fully embrace Python 3. I wouldn't write a line of new code that doesn't support it, although there can be business reasons to continue to support existing Python 2 code. It's almost never that difficult to convert to Python 3, although there are still a handful of dependencies that don't support it, often because those dependencies have been abandoned. It does require resources and careful planning though, but any organization that routinely addresses technical debt should have conversion to Python 3 in their plans. That said, the long life of Python 2.7 has been great. It's provided two important benefits I think. The first is that it provided a very stable version of Python, almost a long-term support release, so folks didn't have to even think about changes in Python every 18 months (the typical length of time new versions are in development). Python 2.7's long life also allowed the rest of the ecosystem to catch up with Python 3. So the folks who were very motivated to support it could sand down the sharp edges and make it much easier for others to follow. I think we now have very good tools, experience, and expertise in how to switch to Python 3 with the greatest chance of success. I think we reached the tipping point somewhere around the Python 3.5 release. Regardless of what the numbers say, we're well past the point where there's any debate about choosing Python 3, especially for new code. Python 2.7 will end its life in mid-2020 and that's about right, although not soon enough for me! At some point, it's just more fun to develop in and on Python 3. That's where you are seeing the most energy and enthusiasm from Python developers.


The future of Python: 3 experts' views

Richard Gall
27 Mar 2018
7 min read
Python is the fastest growing programming language on the planet. This year’s Stack Overflow survey produces clear evidence that it is growing at an impressive rate. And it’s not really that surprising - versatile, dynamic, and actually pretty easy to learn, it’s a language that is accessible and powerful enough to solve problems in a range of fields, from statistics to building APIs. But what does the future hold for Python? How will it evolve to meet the needs of its growing community of engineers and analysts? Read the insights from 3 Python experts on what the future might hold for the programming language, taken from Python Interviews, a book that features 20 conversations with leading figures from the Python community. In the future, Python will spawn other more specialized languages Steve Holden (@HoldenWeb), CTO of Global Stress Index and former chairman and director of The PSF: I'm not really sure where the language is going. You hear loose talk of Python 4. To my mind though, Python is now at the stage where it's complex enough. Python hasn't bloated in the same way that I think the Java environment has. At that maturity level, I think it's rather more likely that Python's ideas will spawn other, perhaps more specialized, languages aimed at particular areas of application. I see this as fundamentally healthy and I have no wish to make all programmers use Python for everything; language choices should be made on pragmatic grounds. I've never been much of a one for pushing for change. Enough smart people are thinking about that already. So mostly I lurk on Python-Dev and occasionally interject a view from the consumer side, when I think that things are becoming a little too esoteric. The needs of the Python community are going to influence where the language goes in future Carol Willing (@WillingCarol), former director of The Python Foundation, core developer of CPython, and Research Software Engineer at Project Jupyter. I think we're going to continue to see growth in the scientific programming part of Python. So things that support the performance of Python as a language and async stability are going to continue to evolve. Beyond that, I think that Python is a pretty powerful and solid language. Even if you stopped development today, Python is a darn good language. I think that the needs of the Python community are going to feed back into Python and influence where the language goes. It's great that we have more representation from different groups within the core development team. Smarter minds than mine could provide a better answer to your question. I'm sure that Guido has some things in mind for where he wants to see Python go. Mobile development has been an Achilles' heel for Python for a long time. I'm hoping that some of the BeeWare stuff is going to help with the cross-compilation. A better story in mobile is definitely needed. But you know, if there's a need then Python will get there. I think that the language is going to continue to move towards the stuff that's in Python 3. Some big code bases, like Instagram, have now transitioned from Python 2 to 3. While there is much Python 2.7 code still in production, great strides have been made by Instagram, as they shared in their PyCon 2017 keynote. There's more tooling around Python 3 and more testing tools, so it's less risky for companies to move some of their legacy code to Python 3, where it makes business sense to. 
It will vary by company, but at some point, business needs, such as security and maintainability, will start driving greater migration to Python 3. If you're starting a new project, then Python 3 is the best choice. New projects, especially when looking at microservices and AI, will further drive people to Python 3. Organizations that are building very large Python codebases are adopting type annotations to help new developers Barry Warsaw (@pumpichank), member of the Python Foundation team at LinkedIn, former project leader of GNU Mailman: In some ways it's hard to predict where Python is going. I've been involved in Python for 23 years, and there was no way I could have predicted in 1994 what the computing world was going to look like today. I look at phones, IoT (Internet of things) devices, and just the whole landscape of what computing looks like today, with the cloud and containers. It's just amazing to look around and see all of that stuff. So there's no real way to predict what Python is going to look like even five years from now, and certainly not ten or fifteen years from now. I do think Python's future is still very bright, but I think Python, and especially CPython, which is the implementation of Python in C, has challenges. Any language that's been around for that long is going to have some challenges. Python was invented to solve problems in the 90s and the computing world is different now and is going to become different still. I think the challenges for Python include things like performance and multi-core or multi-threading applications. There are definitely people who are working on that stuff and other implementations of Python may spring up like PyPy, Jython, or IronPython. Aside from the challenges that the various implementations have, one thing that Python has as a language, and I think this is its real strength, is that it scales along with the human scale. For example, you can have one person write up some scripts on their laptop to solve a particular problem that they have. Python's great for that. Python also scales to, let's say, a small open source project with maybe 10 or 15 people contributing. Python scales to hundreds of people working on a fairly large project, or thousands of people working on massive software projects. Another amazing strength of Python as a language is that new developers can come in and learn it easily and be productive very quickly. They can pull down a completely new Python source code for a project that they've never seen before and dive in and learn it very easily and quickly. There are some challenges as Python scales on the human scale, but I feel like those are being solved by things like the type annotations, for example. On very large Python projects, where you have a mix of junior and senior developers, it can be a lot of effort for junior developers to understand how to use an existing library or application, because they're coming from a more statically-typed language. So a lot of organizations that are building very large Python codebases are adopting type annotations, maybe not so much to help with the performance of the applications, but to help with the onboarding of new developers. I think that's going a long way in helping Python to continue to scale on a human scale. To me, the language's scaling capacity and the welcoming nature of the Python community are the two things that make Python still compelling even after 23 years, and will continue to make Python compelling in the future. 
I think if we address some of those technical limitations, which are completely doable, then we're really setting Python up for another 20 years of success and growth.
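Warsaw's point about type annotations easing onboarding is easy to picture with a small example. The snippet below is not from the interviews; the class and function names are invented. Annotations document what a function expects and returns, and a checker such as mypy can verify them, but the interpreter itself does not enforce them at runtime.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class User:
    name: str
    email: Optional[str] = None   # may be missing for legacy accounts

def find_user(users: List[User], name: str) -> Optional[User]:
    """Return the first user with the given name, or None if absent."""
    for user in users:
        if user.name == name:
            return user
    return None

users = [User("ada"), User("linus", "linus@example.org")]
print(find_user(users, "ada"))    # User(name='ada', email=None)

# A new developer (or a tool like mypy) can see from the signature alone
# that find_user may return None and that callers must handle that case.
```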


What is the difference between declarative and imperative programming?

Antonio Cucciniello
10 Mar 2018
4 min read
Declarative programming and imperative programming are two different approaches that offer different ways of working on a given project or application. But what is the difference between declarative and imperative programming? And when should you use one over the other?

What is declarative programming?

Let us start with declarative programming. This is the form or style of programming in which we are most concerned with what we want as the answer, or what would be returned. Here, we as developers are not concerned with how we get there, only with the answer that is received.

What is imperative programming?

Next, let's take a look at imperative programming. This is the form and style of programming in which we care about how we get to an answer, step by step. We want the same result ultimately, but we are telling the compiler to do things a certain way in order to achieve the correct answer we are looking for.

An analogy

If you are still confused, hopefully this analogy will clear things up. Compare learning a skill yourself with outsourcing the work to have someone else do it. If you are outsourcing work, you might not care about how the work is completed; rather, you care only what the final product or result of the work looks like. This is like declarative programming. You know exactly what you want and you program a function to give you just that.

Let us say you did not want to outsource the work for your business or project. Instead, you tell yourself that you want to learn this skill for the long term. Now, when attempting to complete the task, you care about how it is actually done. You need to know the individual steps along the way in order to get it working properly. This is similar to imperative programming.

Why you should use declarative programming

Reusability

Since the way the result is achieved does not necessarily matter here, the functions you build can be more general and could potentially be used for multiple purposes, not just one. Not rewriting code can speed up the program you are currently writing and any others that use the same functionality in the future.

Reducing errors

Given that in declarative programming you tend to write functions that do not change state, as you would in functional programming, the chances of errors arising are smaller and your application becomes more stable. The removal of side effects from your functions lets you know exactly what comes in and what comes out, and makes for a more predictable program.

Potential drawbacks of declarative programming

Lack of control

In declarative programming, you may use functions that someone else created in order to achieve the desired results. But you may need specific things to happen behind the scenes to make your result come out properly. You do not have this control in declarative programming as you would in imperative programming.

Inefficiency

When the implementation is controlled by something else, you may have problems making your code efficient. In applications with a time constraint, you will need to program the individual steps yourself in order to make sure your program runs as efficiently as possible.

There are benefits and disadvantages to both forms. Overall, it is entirely up to you, the programmer, to decide which implementation you would like to follow in your code. If you are solely focused on the data, perhaps consider using the declarative programming style.
If you care more about the implementation and how something works, maybe stick to an imperative programming approach. You can also mix both styles - it is extremely flexible, and you are in charge here.
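As a concrete illustration of the contrast described above (this example is ours, not the author's, and the data is invented), here is the same task - collecting the squares of the even numbers in a list - written first imperatively, spelling out each step, and then declaratively, stating only the result we want.

```python
numbers = [1, 2, 3, 4, 5, 6, 7, 8]

# Imperative: describe *how* to build the result, step by step.
squares_of_evens = []
for n in numbers:
    if n % 2 == 0:
        squares_of_evens.append(n * n)
print(squares_of_evens)          # [4, 16, 36, 64]

# Declarative: describe *what* we want and let the language do the rest.
declarative_result = [n * n for n in numbers if n % 2 == 0]
print(declarative_result)        # [4, 16, 36, 64]

# The same idea with higher-order functions instead of a comprehension.
functional_result = list(map(lambda n: n * n, filter(lambda n: n % 2 == 0, numbers)))
print(functional_result)         # [4, 16, 36, 64]
```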

Systems programming with Go in UNIX and Linux

Mihalis Tsoukalos
24 Jan 2018
17 min read
This is a guest post by Mihalis Tsoukalos. Mihalis is a Unix administrator, programmer, and mathematician who enjoys writing. He is the author of Go Systems Programming, from which this Go programming tutorial is taken.

What is Go?

Back when UNIX was first introduced, the only way to write systems software was by using C; nowadays you can write systems software using programming languages including Go. Apart from Go, other popular languages for developing system utilities are Python, Perl, Rust, and Ruby.

Go is a modern general-purpose open-source programming language that was officially announced at the end of 2009. It began as an internal Google project and has been inspired by many other programming languages, including C, Pascal, Alef, and Oberon. Its spiritual fathers are Robert Griesemer, Ken Thompson, and Rob Pike, who designed Go as a language for professional programmers who want to build reliable and robust software. Apart from its syntax and standard functions, Go comes with a pretty rich and convenient standard library.

What is systems programming?

Systems programming is a special area of programming on UNIX machines (although it is not limited to UNIX machines). Most commands that have to do with system administration tasks, such as disk formatting, network interface configuration, module loading, kernel performance tracking, and so on, are implemented using the techniques of systems programming. Additionally, the /etc directory, which can be found on all UNIX systems, contains plain text files that deal with the configuration of a UNIX machine and its services, and these are also manipulated using systems software.

You can group the various areas of systems software and related system calls into the following sets:

File I/O: This area deals with file reading and writing operations, which is the most important task of an operating system. File input and output must be fast and efficient and, above all, reliable.
Advanced file I/O: Apart from the basic input and output system calls, there are also more advanced ways to read or write a file, including asynchronous I/O and non-blocking I/O.
System files and configuration: This group of systems software includes functions that allow you to handle system files, such as /etc/passwd, and get system-specific information, such as the system time and DNS configuration.
Files and directories: This cluster includes functions and system calls that allow the programmer to create and delete directories and get information such as the owner and the permissions of a file or a directory.
Process control: This group of software allows you to create and interact with UNIX processes.
Threads: When a process has multiple threads, it can perform multiple tasks. However, threads must be created, terminated, and synchronized, which is the purpose of this collection of functions and system calls.
Server processes: This set includes techniques that allow you to develop server processes, which are processes that get executed in the background without the need for an active terminal. Go is not that good at writing server processes in the traditional UNIX way - but let me explain this a little more. UNIX servers like Apache use fork(2) to create one or more child processes; this is called forking and refers to cloning the parent process into a child process that continues executing the same executable from the same point and, most importantly, shares memory.
The challenging thing with systems programming is that you cannot afford to have an incomplete program; you can either have a fully working, secure program that can be used on a production system, or nothing at all. This mainly happens because you cannot trust end users and hackers! The key difficulty in systems programming is the fact that an erroneous system call can make your UNIX machine misbehave or, even worse, crash! Most security issues on UNIX systems come from wrongly implemented systems software, because bugs in systems software can compromise the security of an entire system. The worst part is that this can happen many years after a certain piece of software was first put to use!

Systems programming examples with Go

Printing the permissions of a file or a directory

With the help of the ls(1) command, you can find out the permissions of a file:

$ ls -l /bin/ls
-rwxr-xr-x 1 root wheel 38624 Mar 23 01:57 /bin/ls

The presented Go program, which is named permissions.go, will teach you how to print the permissions of a file or a directory using Go and will be presented in two parts. The first part is the next:

package main import ( "fmt" "os" ) func main() { arguments := os.Args if len(arguments) == 1 { fmt.Println("Please provide an argument!") os.Exit(1) } file := arguments[1]

The second part contains the important Go code:

info, err := os.Stat(file) if err != nil { fmt.Println("Error:", err) os.Exit(1) } mode := info.Mode() fmt.Print(file, ": ", mode, "\n") }

Once again, most of the Go code is for dealing with the command line argument and making sure that you have one! The Go code that does the actual job is mainly the call to the os.Stat() function, which returns a FileInfo structure that describes the file or directory examined by os.Stat(). From the FileInfo structure you can discover the permissions of a file by calling the Mode() function.

Executing permissions.go creates the following kind of output:

$ go run permissions.go /bin/ls
/bin/ls: -rwxr-xr-x
$ go run permissions.go /usr
/usr: drwxr-xr-x
$ go run permissions.go /us
Error: stat /us: no such file or directory
exit status 1
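If you want to take this example a little further, the following hedged sketch (my own variation, not part of the original program) uses the same FileInfo value to print just the permission bits and to report whether the path is a directory; Mode().Perm() and Mode().IsDir() both come from the standard os package:

package main

import (
	"fmt"
	"os"
)

func main() {
	if len(os.Args) == 1 {
		fmt.Println("Please provide an argument!")
		os.Exit(1)
	}
	info, err := os.Stat(os.Args[1])
	if err != nil {
		fmt.Println("Error:", err)
		os.Exit(1)
	}
	mode := info.Mode()
	// Perm() keeps only the rwxrwxrwx bits and drops the file type bits.
	fmt.Printf("%s: permissions %v (octal %o), directory: %v\n",
		os.Args[1], mode.Perm(), uint32(mode.Perm()), mode.IsDir())
}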
How to write to files using fmt.Fprintf()

The use of the fmt.Fprintf() function allows you to write formatted text to files in a way that is similar to the way the fmt.Printf() function works. The Go code that illustrates the use of fmt.Fprintf() will be named fmtF.go and is going to be presented in three parts.

The first part is the expected preamble of the program:

package main import ( "fmt" "os" )

The second part has the next Go code:

func main() { if len(os.Args) != 2 { fmt.Println("Please provide a filename") os.Exit(1) } filename := os.Args[1] destination, err := os.Create(filename) if err != nil { fmt.Println("os.Create:", err) os.Exit(1) } defer destination.Close()

First, you make sure that you have one command line argument before continuing. Then, you read that command line argument and you give it to os.Create() in order to create the file! Please note that the os.Create() function will truncate the file if it already exists.

The last part is the following:

fmt.Fprintf(destination, "[%s]: ", filename) fmt.Fprintf(destination, "Using fmt.Fprintf in %s\n", filename) }

Here, you write the desired text data to the file that is identified by the destination variable using fmt.Fprintf() as if you were using the fmt.Printf() method.

Executing fmtF.go will generate the following output:

$ go run fmtF.go test
$ cat test
[test]: Using fmt.Fprintf in test

In other words, you can create plain text files using fmt.Fprintf().

Developing wc(1) in Go

The principal idea behind the code of the wc.go program is that you read a text file line by line until there is nothing left to read. For each line you read, you find out the number of characters and the number of words it has. As you need to read your input line by line, the use of bufio is preferred instead of the plain io because it simplifies the code. However, trying to implement wc.go on your own using io would be a very educational exercise.

But first you will see the kind of output the wc(1) utility generates:

$ wc wc.go cp.go
68 160 1231 wc.go
45 112 755 cp.go
113 272 1986 total

So, if wc(1) has to process more than one file, it automatically generates summary information.

Counting words

The trickiest part of the implementation is word counting, which is implemented using Go regular expressions:

r := regexp.MustCompile("[^\\s]+") for range r.FindAllString(line, -1) { numberOfWords++ }

What the provided regular expression does is separate the words of a line based on whitespace characters in order to count them afterwards!

The code!

After this little introduction, it is time to see the Go code of wc.go, which will be presented in five parts. The first part is the expected preamble:

package main import ( "bufio" "flag" "fmt" "io" "os" "regexp" )

The second part is the implementation of the count() function, which includes the core functionality of the program:

func count(filename string) (int, int, int) { var err error var numberOfLines int var numberOfCharacters int var numberOfWords int numberOfLines = 0 numberOfCharacters = 0 numberOfWords = 0 f, err := os.Open(filename) if err != nil { fmt.Printf("error opening file %s", err) os.Exit(1) } defer f.Close() r := bufio.NewReader(f) for { line, err := r.ReadString('\n') if err == io.EOF { break } else if err != nil { fmt.Printf("error reading file %s", err) } numberOfLines++ r := regexp.MustCompile("[^\\s]+") for range r.FindAllString(line, -1) { numberOfWords++ } numberOfCharacters += len(line) } return numberOfLines, numberOfWords, numberOfCharacters }

There are lots of interesting things here. First of all, you can see the Go code presented in the previous section for counting the words of each line. Counting lines is easy, because each time the bufio reader reads a new line the value of the numberOfLines variable is increased by one.
The ReadString() function tells the program to read until the first occurrence of a '\n' in the input – multiple calls to ReadString() mean that you are reading a file line by line. Next, you can see that the count() function returns three integer values. Last, counting characters is implemented with the help of the len() function, which returns the number of characters in a given string, which in this case is the line that was read. The for loop terminates when you get the io.EOF error message, which signifies that there is nothing left to read from the input file.

The third part of wc.go starts with the beginning of the implementation of the main() function, which also includes the configuration of the flag package:

func main() { minusC := flag.Bool("c", false, "Characters") minusW := flag.Bool("w", false, "Words") minusL := flag.Bool("l", false, "Lines") flag.Parse() flags := flag.Args() if len(flags) == 0 { fmt.Printf("usage: wc <file1> [<file2> [... <fileN>]]\n") os.Exit(1) } totalLines := 0 totalWords := 0 totalCharacters := 0 printAll := false for _, filename := range flag.Args() {

The last for statement is for processing all input files given to the program. The wc.go program supports three flags: the -c flag is for printing the character count, the -w flag is for printing the word count and the -l flag is for printing the line count.

The fourth part is the next:

numberOfLines, numberOfWords, numberOfCharacters := count(filename) totalLines = totalLines + numberOfLines totalWords = totalWords + numberOfWords totalCharacters = totalCharacters + numberOfCharacters if (*minusC && *minusW && *minusL) || (!*minusC && !*minusW && !*minusL) { fmt.Printf("%d", numberOfLines) fmt.Printf("\t%d", numberOfWords) fmt.Printf("\t%d", numberOfCharacters) fmt.Printf("\t%s\n", filename) printAll = true continue } if *minusL { fmt.Printf("%d", numberOfLines) } if *minusW { fmt.Printf("\t%d", numberOfWords) } if *minusC { fmt.Printf("\t%d", numberOfCharacters) } fmt.Printf("\t%s\n", filename) }

This part deals with the printing of the information on a per-file basis depending on the command line flags. As you can see, most of the Go code here is for handling the output according to the command line flags.

The last part is the following:

if (len(flags) != 1) && printAll { fmt.Printf("%d", totalLines) fmt.Printf("\t%d", totalWords) fmt.Printf("\t%d", totalCharacters) fmt.Println("\ttotal") return } if (len(flags) != 1) && *minusL { fmt.Printf("%d", totalLines) } if (len(flags) != 1) && *minusW { fmt.Printf("\t%d", totalWords) } if (len(flags) != 1) && *minusC { fmt.Printf("\t%d", totalCharacters) } if len(flags) != 1 { fmt.Printf("\ttotal\n") } }

This is where you print the total number of lines, words and characters read, according to the flags of the program. Once again, most of the Go code here is for modifying the output according to the command line flags.
Executing wc.go will generate the following kind of output:

$ go build wc.go
$ ls -l wc
-rwxr-xr-x 1 mtsouk staff 2264384 Apr 29 21:10 wc
$ ./wc wc.go sparse.go notGoodCP.go
120 280 2319 wc.go
44 98 697 sparse.go
27 61 418 notGoodCP.go
191 439 3434 total
$ ./wc -l wc.go sparse.go
120 wc.go
44 sparse.go
164 total
$ ./wc -w -l wc.go sparse.go
120 280 wc.go
44 98 sparse.go
164 378 total

If you do not execute go build wc.go in order to create an executable file, then executing go run wc.go using Go source files as arguments will fail, because the compiler will try to compile the Go source files instead of treating them as command line arguments to the go run wc.go command:

$ go run wc.go sparse.go
# command-line-arguments
./sparse.go:11: main redeclared in this block
previous declaration at ./wc.go:49
$ go run wc.go wc.go
package main: case-insensitive file name collision: "wc.go" and "wc.go"
$ go run wc.go cp.go sparse.go
# command-line-arguments
./cp.go:35: main redeclared in this block
previous declaration at ./wc.go:49
./sparse.go:11: main redeclared in this block
previous declaration at ./cp.go:35

Additionally, trying to execute wc.go on a Linux system with Go version 1.3.3 will fail, because it uses features of Go that can be found in newer versions – if you use the latest Go version you will have no problem running wc.go. The error message you will get will be the following:

$ go version
go version go1.3.3 linux/amd64
$ go run wc.go
# command-line-arguments
./wc.go:40: syntax error: unexpected range, expecting {
./wc.go:46: non-declaration statement outside function body
./wc.go:47: syntax error: unexpected }

Reading a text file character by character

Although reading a text file character by character is not needed for the development of the wc(1) utility, it would be good to know how to implement it in Go. The name of the file will be charByChar.go and it will be presented in four parts.

The first part comes with the following Go code:

package main import ( "bufio" "fmt" "io/ioutil" "os" "strings" )

Although charByChar.go does not have many lines of Go code, it needs lots of Go standard packages, which is a naïve indication that the task it implements is not trivial.

The second part is:

func main() { arguments := os.Args if len(arguments) == 1 { fmt.Println("Not enough arguments!") os.Exit(1) } input := arguments[1]

The third part is the following:

buf, err := ioutil.ReadFile(input) if err != nil { fmt.Println(err) os.Exit(1) }

The last part has the next Go code:

in := string(buf) s := bufio.NewScanner(strings.NewReader(in)) s.Split(bufio.ScanRunes) for s.Scan() { fmt.Print(s.Text()) } }

ScanRunes is a split function that returns each character (rune) as a token. Then the call to Scan() allows us to process each character one by one. There also exist ScanWords and ScanLines for getting words and lines scanned, respectively. If you use fmt.Println(s.Text()) as the last statement of the program instead of fmt.Print(s.Text()), then each character will be printed on its own line and the task of the program will be more obvious.

Executing charByChar.go generates the following kind of output:

$ go run charByChar.go test
package main
…

The wc(1) command can verify the correctness of the Go code of charByChar.go by comparing the input file with the output generated by charByChar.go:

$ go run charByChar.go test | wc
32 54 439
$ wc test
32 54 439 test
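Since ScanWords was mentioned above, here is a small hedged sketch (my own addition, not from the book) that uses it to count the words of a string with a bufio.Scanner instead of the regular expression used by wc.go:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

func main() {
	text := "counting words with a bufio Scanner"
	s := bufio.NewScanner(strings.NewReader(text))
	s.Split(bufio.ScanWords) // split the input into whitespace-separated words
	numberOfWords := 0
	for s.Scan() {
		numberOfWords++
	}
	fmt.Println("words:", numberOfWords) // prints: words: 6
}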
How to create sparse files in Go

Big files that are created with the os.Seek() function may have holes in them and occupy fewer disk blocks than files with the same size but without holes in them; such files are called sparse files. This section will develop a program that creates sparse files. The Go code of sparse.go will be presented in three parts.

The first part is:

package main import ( "fmt" "log" "os" "path/filepath" "strconv" )

The second part of sparse.go has the following Go code:

func main() { if len(os.Args) != 3 { fmt.Printf("usage: %s SIZE filename\n", filepath.Base(os.Args[0])) os.Exit(1) } SIZE, _ := strconv.ParseInt(os.Args[1], 10, 64) filename := os.Args[2] _, err := os.Stat(filename) if err == nil { fmt.Printf("File %s already exists.\n", filename) os.Exit(1) }

The strconv.ParseInt() function is used for converting the command line argument that defines the size of the sparse file from its string value to its integer value. Additionally, the os.Stat() call makes sure that you will not accidentally overwrite an existing file.

The last part is where the action takes place:

fd, err := os.Create(filename) if err != nil { log.Fatal("Failed to create output") } _, err = fd.Seek(SIZE-1, 0) if err != nil { fmt.Println(err) log.Fatal("Failed to seek") } _, err = fd.Write([]byte{0}) if err != nil { fmt.Println(err) log.Fatal("Write operation failed") } err = fd.Close() if err != nil { fmt.Println(err) log.Fatal("Failed to close file") } }

First, you try to create the desired sparse file using os.Create(). Then, you call fd.Seek() in order to make the file bigger without adding actual data. Last, you write a byte to it using fd.Write(). As you do not have anything more to do with the file, you call fd.Close() and you are done.

Executing sparse.go generates the following output:

$ go run sparse.go 1000 test
$ go run sparse.go 1000 test
File test already exists.
exit status 1

How can you tell whether a file is a sparse file or not? You will learn in a while, but first let us create some files:

$ go run sparse.go 100000 testSparse
$ dd if=/dev/urandom bs=1 count=100000 of=noSparseDD
100000+0 records in
100000+0 records out
100000 bytes (100 kB) copied, 0.152511 s, 656 kB/s
$ dd if=/dev/urandom seek=100000 bs=1 count=0 of=sparseDD
0+0 records in
0+0 records out
0 bytes (0 B) copied, 0.000159399 s, 0.0 kB/s
$ ls -l noSparseDD sparseDD testSparse
-rw-r--r-- 1 mtsouk mtsouk 100000 Apr 29 21:43 noSparseDD
-rw-r--r-- 1 mtsouk mtsouk 100000 Apr 29 21:43 sparseDD
-rw-r--r-- 1 mtsouk mtsouk 100000 Apr 29 21:40 testSparse

So, how can you tell if any of the three files is a sparse file or not? The -s flag of the ls(1) utility shows the number of file system blocks actually used by a file. So, the output of the ls -ls command allows you to detect if you are dealing with a sparse file or not:

$ ls -ls noSparseDD sparseDD testSparse
104 -rw-r--r-- 1 mtsouk mtsouk 100000 Apr 29 21:43 noSparseDD
0 -rw-r--r-- 1 mtsouk mtsouk 100000 Apr 29 21:43 sparseDD
8 -rw-r--r-- 1 mtsouk mtsouk 100000 Apr 29 21:40 testSparse

Now look at the first column of the output. The noSparseDD file, which was generated using the dd(1) utility, is not a sparse file. The sparseDD file is a sparse file generated using the dd(1) utility. Lastly, the testSparse file is also a sparse file, created using sparse.go.
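The ls -s trick works from the shell; if you would rather detect a sparse file from inside a Go program, the following hedged sketch (my own addition, Linux-specific and not part of the book's code) compares the number of allocated blocks reported by syscall.Stat_t with the file size; the Blocks field is counted in 512-byte units:

package main

import (
	"fmt"
	"os"
	"syscall"
)

func main() {
	if len(os.Args) != 2 {
		fmt.Println("usage: isSparse filename")
		os.Exit(1)
	}
	info, err := os.Stat(os.Args[1])
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	st, ok := info.Sys().(*syscall.Stat_t)
	if !ok {
		fmt.Println("raw stat data is not available on this platform")
		os.Exit(1)
	}
	allocated := st.Blocks * 512 // bytes actually backed by file system blocks
	if allocated < info.Size() {
		fmt.Printf("%s looks sparse: %d bytes allocated for %d bytes of data\n",
			os.Args[1], allocated, info.Size())
	} else {
		fmt.Printf("%s does not look sparse\n", os.Args[1])
	}
}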
Mihalis Tsoukalos is a Unix administrator, programmer, DBA and mathematician who enjoys writing. He is currently writing Mastering Go. His research interests include programming languages, databases and operating systems. He holds a B.Sc in Mathematics from the University of Patras and an M.Sc in IT from University College London (UK). He has written various technical articles for Sys Admin, MacTech, C/C++ Users Journal, Linux Journal, Linux User and Developer, Linux Format and Linux Voice.

5 application development tools that will matter in 2018

Richard Gall
13 Dec 2017
3 min read
2017 has been a hectic year. Not least in application development. But it's time to look ahead to 2018. You can read what 'things' we think are going to matter here, but here are the key tools we think are going to define the next 12 months in the area.

1. Kotlin

Kotlin has been one of the most notable languages in 2017. Its adoption has been dramatic over the last 12 months, and it signals significant changes in what engineers want and need from a programming language. We think it's likely to challenge Java's dominance throughout 2018 as more and more people adopt it. If you want a run-down of the key reasons why you should start using Kotlin, you could do a lot worse than this post on Medium. Learn Kotlin. Explore Kotlin eBooks and videos.

2. Kubernetes

Kubernetes is a tool that's been following in the slipstream of Docker. It has been a core part of the growth of containerization, and we're likely to see it move from strength to strength in 2018 as the technology matures and container deployments continue to grow in size and complexity. Kubernetes' success and importance were underlined earlier this year when Docker announced that its enterprise edition would support Kubernetes. Clearly, if Docker paved the way for the container revolution, Kubernetes is consolidating it and helping teams take the next step with containerization. Find Packt's Kubernetes eBooks and videos here.

3. Spring Cloud

This isn't a hugely well-known tool, but 2018 might just be the year that the world starts to pay it more attention. In many respects Spring Cloud is a thoroughly modern software project, perfect for a world where microservices reign supreme. Following the core principles of Spring Boot, it essentially enables you to develop distributed systems in a really efficient and clean way. Spring is interesting because it represents the way Java is responding to the growth of open source software and the decline of the more traditional enterprise system.

4. Java 9

This nicely leads us on to Java 9. Here we have a language that is thinking differently about itself, moving in a direction heavily influenced by a software culture that is distinct from the one it belonged to 5-10 years ago. The new features are enough to excite anyone that's worked with Java before. They have all been developed to help reduce the complexity of modern development, modeled around the needs of developers in 2017 - and 2018. And they all help to radically improve the development experience - which, if you've been reading up, you'll know is going to really matter for everyone in 2018. Explore Java 9 eBooks and videos here.

5. ASP.NET Core

Microsoft doesn't always get enough attention. But it should. Because a lot has changed over the last two years. Similar to Java, the organization and its wider ecosystem of software have developed in a way that moves quickly and responds impressively to developer and market needs. ASP.NET Core is evidence of that. A step forward from the formidable ASP.NET, this cross-platform framework has been created to fully meet the needs of today's cloud-based, fully connected applications that run on microservices. It's worth comparing it with Spring Cloud above - both will help developers build a new generation of applications, and both represent two of software's old-guard establishment embracing the future and pushing things forward. Discover ASP.NET Core eBooks and videos.

5 things that will matter in application development in 2018

Richard Gall
11 Dec 2017
4 min read
Things change quickly in application development. Over the past few years we've seen it merge with other fields. With the web becoming more app-like, DevOps turning everyone into a part-time sysadmin (well, sort of), and the full-stack trend shifting expectations about the modern programmer skill set, the field has become incredibly fluid and open. That means 2018 will present a wealth of challenges for application developers - but of course there will also be plenty of opportunities for the curious and enterprising… But what's going to be most important in 2018? What's really going to matter? Take a look below at our list of 5 things that will matter in application development in 2018.

1. Versatile languages that can be used on both client and server

Versatility is key to being a successful programmer today. That doesn't mean the age of specialists is over, but rather that you need to be a specialist in everything. And when versatility is important to your skillset, it also becomes important for the languages we use. It's for that reason that we're starting to see the increasing popularity of languages like Kotlin and Go. It's why Python continues to be popular - it's just so versatile. This is important when you're thinking about how to invest your learning time. Of course everyone is different, but learning languages that can help you do multiple things and solve different problems can be hugely valuable. Investing your energy in the most versatile languages will be well worth your time in 2018.

2. The new six-month Java release cycle

This will be essential for Java programmers in 2018. Following the release of Java 9 in late 2017, the new cycle kicks in. This might mean there's a little more for developers to pay attention to, but it should make life easier, as Oracle will be able to update and add new features to the language with greater effectiveness than ever before. From a more symbolic point of view, this move hints a lot at the deepening of open source culture in 2018, with Oracle aiming to satisfy developers working on smaller systems, keen to constantly innovate, as much as its established enterprise clients.

3. Developing usable and useful conversational UI

Conversational UI has been a 'thing' for some time now, but it hasn't quite captured the imagination of users. This is likely because it simply hasn't proved that useful yet - like 3D film, it feels like too much of a gimmick, maybe even too much of a hassle. It's crucial - if only to satisfy the hype - that developers finally find a way to make conversational UI work. To really make it work we're ultimately going to need to join the dots between exceptionally good artificial intelligence and a brilliant user experience - making algorithms that 'understand' user needs, and can adapt to what people want.

4. Microservices

Microservices certainly won't be new in 2018, but they are going to play a huge part in how software is built in 2018. Put simply, if they're not important to you yet, they will be. We're going to start to see more organizations moving away from monolithic architectures, looking to engineering teams to produce software in ways that are much more dynamic and much more agile. Yes, these conversations have been happening for a number of years; but like everything when it comes to tech, change happens at different speeds. It's only now, as technologies mature, developer skillsets change, and management focus shifts, that broader changes take place.
5. Taking advantage of virtual and augmented reality

Augmented Reality (AR) and Virtual Reality (VR) have been huge innovations within fields like game development. But in 2018, we're going to see both expand beyond gaming and into other fields. It's already happening in many areas, such as healthcare, and for engineers and product developers/managers, it's going to be an interesting 12 months to see how the market changes.

Exploring Language Improvements in C# 7.2 and 7.3

Mark J.
28 Nov 2017
9 min read
With the C# 7 generation, Microsoft has decided to increase the cadence of language releases, releasing minor version numbers, aka point releases, for the first time since C# 1.1. This allows new features to be used by programmers faster than ever before, but the policy poses a challenge to writers of books about C#.

Introduction

One of the hardest parts of writing about technology is deciding when to stop chasing the latest changes and adding new content. Back in March 2017, I was reviewing the final drafts of the second edition of my book, C# 7 and .NET Core – Modern Cross-Platform Development. In Chapter 2, Speaking C#, I got to the topic of representing number literals. One of the improvements in C# 7 is the ability to use the underscore character as a digit separator. For example, when writing large numbers in decimal you can improve the readability of number literals using underscores, and you can express binary or hexadecimal number literals by prefixing the number literal with 0b or 0x, as shown in the following code:

// C# 6 and earlier
int decimalNotation = 2000000; // 2 million
// C# 7 and 7.1
int decimalNotation = 2_000_000; // 2 million
int binaryNotation = 0b0001_1110_1000_0100_1000_0000; // 2 million
int hexadecimalNotation = 0x001E_8480; // 2 million

But in the final draft I hadn't included code examples of using underscores in number literals. At the last minute, I decided to add the preceding examples to the book. Unfortunately, I assumed that the underscore could be used to separate the prefixes 0b and 0x from the digits, and did not check that the code examples would compile until the following day, after the book had gone to print. I had to release an erratum on the book's web page before it even reached the shelves. I felt so embarrassed.

In the third edition, C# 7.1 and .NET Core 2.0 – Modern Cross-Platform Development, I fixed the code examples by removing the unsupported underscores after the prefixes, since they are not supported in C# 7 or C# 7.1. Ironically, just as the third edition was due to go to print, Microsoft released C# 7.2, which adds support for using an underscore after the prefixes, as shown in the following code:

// C# 7.2 and later
int binaryNotation = 0b_0001_1110_1000_0100_1000_0000; // 2 million
int hexadecimalNotation = 0x_001E_8480; // 2 million

Gah! Clearly, I wasn't the only programmer who thought it natural to be able to use underscores after the 0b or 0x prefixes. For the third edition, I decided not to make any last-minute changes to the book. This was partly because I didn't want to risk making a mistake again, and also because the code examples do work; they just don't show the latest improvement. Maybe in the fourth edition I will finally get the whole book perfect! But, of course, in the programming world that's impossible.

Since the third edition covers C# 7.1, I have written this article to cover the improvements in C# 7.2 that are available today, and to preview the improvements coming early in 2018 with C# 7.3.

Enabling C# 7 point releases

Developer tools like Visual Studio 2017, Visual Studio Code, and the dotnet command line interface assume that you want to use the C# 7.0 language compiler by default.
To use the improvements in a C# point release like 7.1 or 7.2, you must add a configuration element to the project file, as shown in the following markup: <LangVersion>7.2</LangVersion>

Potential values for the <LangVersion> markup are shown in the following table:

7, 7.1, 7.2, 7.3, 8: Entering a specific version number will use that compiler if it has been installed.
default: Uses the highest major number without a minor number, for example, 7 in 2017 and 8 later in 2018.
latest: Uses the highest major and highest minor number, for example, 7.2 in 2017, 7.3 early in 2018, 8 later in 2018.

To be able to use C# 7.2, either install Visual Studio 2017 version 15.5 on Windows, or install .NET Core SDK 2.1.2 on Windows, macOS, or Linux from the following link: https://www.microsoft.com/net/download/ Run the .NET Core SDK installer, as shown in the following screenshot:

Setting up a project for exploring C# 7.2 improvements

In Visual Studio 2017 version 15.5 or later, create a new Console App (.NET Core) project named ExploringCS72 in a solution named Bonus, as shown in the following screenshot: You can download the projects created in this article from the Packt website or from the following GitHub repository: https://github.com/PacktPublishing/CSharp-7.1-and-.NET-Core-2.0-Modern-Cross-Platform-Development-Third-Edition/tree/master/BonusSectionCode/Bonus

In Visual Studio Code, create a new folder named Bonus with a subfolder named ExploringCS72. Open the ExploringCS72 folder. Navigate to View | Integrated Terminal, and enter the following command: dotnet new console

In either Visual Studio 2017 or Visual Studio Code, edit the ExploringCS72.csproj file, and add the <LangVersion> element, as shown highlighted in the following markup: <Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <TargetFramework>netcoreapp2.0</TargetFramework> <LangVersion>7.2</LangVersion> </PropertyGroup> </Project>

Edit the Program.cs file, as shown in the following code: using static System.Console; namespace ExploringCS72 { class Program { static void Main(string[] args) { int year = 0b_0000_0111_1011_0100; WriteLine($"I was born in {year}."); } } }

In Visual Studio 2017, navigate to Debug | Start Without Debugging, or press Ctrl + F5. In Visual Studio Code, in Integrated Terminal, enter the following command: dotnet run

You should see the following output, which confirms that you have successfully enabled C# 7.2 for this project: I was born in 1972.

In Visual Studio Code, note that the C# extension version 1.13.1 (released on November 13, 2017) has not been updated to recognize the improvements in C# 7.2. You will see red squiggle compile errors in the editor even though the code will compile and run without problems, as shown in the following screenshot:

Controlling access to type members with modifiers

When you define a type like a class with members like fields, you control where those members can be accessed by applying modifiers like public and private. Until C# 7.2, there were five combinations of access modifier keywords. C# 7.2 adds a sixth combination, as shown in the last row of the following table:

private: Member is accessible inside the type only. This is the default if no keyword is applied to a member.
internal: Member is accessible inside the type, or any type that is in the same assembly.
protected: Member is accessible inside the type, or any type that inherits from the type.
public: Member is accessible everywhere.
internal protected: Member is accessible inside the type, or any type that is in the same assembly, or any type that inherits from the type. Equivalent to internal OR protected.
private protected: Member is accessible inside the type, or any type that inherits from the type and is in the same assembly. Equivalent to internal AND protected.

Setting up a .NET Standard class library to explore access modifiers

In Visual Studio 2017 version 15.5 or later, add a new Class Library (.NET Standard) project named ExploringCS72Lib to the current solution, as shown in the following screenshot: In Visual Studio Code, create a new subfolder in the Bonus folder named ExploringCS72Lib. Open the ExploringCS72Lib folder. Navigate to View | Integrated Terminal, and enter the following command: dotnet new classlib

Open the Bonus folder so that you can work with both projects. In either Visual Studio 2017 or Visual Studio Code, edit the ExploringCS72Lib.csproj file, and add the <LangVersion> element, as shown highlighted in the following markup: <Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <TargetFramework>netstandard2.0</TargetFramework> <LangVersion>7.2</LangVersion> </PropertyGroup> </Project>

In the class library, rename the class file from Class1 to AccessModifiers, and edit the class, as shown in the following code:

using static System.Console; namespace ExploringCS72 { public class AccessModifiers { private int InTypeOnly; internal int InSameAssembly; protected int InDerivedType; internal protected int InSameAssemblyOrDerivedType; private protected int InSameAssemblyAndDerivedType; // C# 7.2
public int Everywhere; public void ReadFields() { WriteLine("Inside the same type:"); WriteLine(InTypeOnly); WriteLine(InSameAssembly); WriteLine(InDerivedType); WriteLine(InSameAssemblyOrDerivedType); WriteLine(InSameAssemblyAndDerivedType); WriteLine(Everywhere); } } public class DerivedInSameAssembly : AccessModifiers { public void ReadFieldsInDerivedType() { WriteLine("Inside a derived type in same assembly:"); //WriteLine(InTypeOnly); // is not visible
WriteLine(InSameAssembly); WriteLine(InDerivedType); WriteLine(InSameAssemblyOrDerivedType); WriteLine(InSameAssemblyAndDerivedType); WriteLine(Everywhere); } } }

Edit the ExploringCS72.csproj file, and add the <ItemGroup> element to reference the class library in the console app, as shown highlighted in the following markup: <Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <TargetFramework>netcoreapp2.0</TargetFramework> <LangVersion>7.2</LangVersion> </PropertyGroup> <ItemGroup> <ProjectReference Include="..\ExploringCS72Lib\ExploringCS72Lib.csproj" /> </ItemGroup> </Project>

Edit the Program.cs file, as shown in the following code: using static System.Console; namespace ExploringCS72 { class Program { static void Main(string[] args) { int year = 0b_0000_0111_1011_0100; WriteLine($"I was born in {year}."); } public void ReadFieldsInType() { WriteLine("Inside a type in different assembly:"); var am = new AccessModifiers(); WriteLine(am.Everywhere); } } public class DerivedInDifferentAssembly : AccessModifiers { public void ReadFieldsInDerivedType() { WriteLine("Inside a derived type in different assembly:"); WriteLine(InDerivedType); WriteLine(InSameAssemblyOrDerivedType); WriteLine(Everywhere); } } }

When entering code that accesses the am variable, note that IntelliSense only shows members that are visible due to access control.
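To make the difference between the two protected combinations concrete, here is a small hedged sketch of my own (not part of the article's downloadable code) that you could add to the ExploringCS72 console app project, which is a different assembly from the class library; the commented-out line would fail to compile because private protected also requires the derived type to live in the same assembly:

using static System.Console;

namespace ExploringCS72
{
    public class ProtectedCombinations : AccessModifiers
    {
        public void Compare()
        {
            WriteLine(InDerivedType);               // protected: visible to any derived type
            WriteLine(InSameAssemblyOrDerivedType); // internal protected: visible because we derive from the type
            // WriteLine(InSameAssemblyAndDerivedType); // private protected: compile error, we derive from the
            //                                          // type but live in a different assembly
            WriteLine(Everywhere);                  // public: visible everywhere
        }
    }
}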
Passing parameters to methods

In the original C# language, parameters had to be passed in the order in which they were declared in the method. In C# 4, Microsoft introduced named parameters so that values could be passed in a custom order and even made optional. But if a developer chose to name parameters, all of them had to be named. In C# 7.2, you can mix named and unnamed parameters, as long as they are passed in the correct position.

In Program.cs, add a static method, as shown in the following code: public static void PassingParameters(string name, int year) { WriteLine($"{name} was born in {year}."); }

In the Main method, add the following statement: PassingParameters(name: "Bob", 1945);

Visual Studio Code will show an error, as shown in the following screenshot, but the code will compile and execute.

Optimizing performance with value types

The fourth and final feature of C# 7.2 is working with value types while using reference semantics. This can improve performance in very specialized scenarios. You are unlikely to use these features much in your own code unless, like Microsoft, you create frameworks for other programmers to build upon that need to do a lot of memory management. You can learn more about these features at the following link: https://docs.microsoft.com/en-gb/dotnet/csharp/reference-semantics-with-value-types

Conclusion

I plan to refresh this bonus article when C# 7.3 is released to update it with the new features in that point release. Good luck with all your C# adventures!

How has Python remained so popular?

Antonio Cucciniello
21 Sep 2017
4 min read
In 1991, the Python programming language was created. It is a dynamically typed, object-oriented language that is often used for scripting and web applications today. It is usually paired with frameworks such as Django or Flask on the backend. Since its creation, it has remained extremely relevant and is one of the most widely used programming languages in the world. But why is this the case? Today we will look at the reasons why the Python programming language has remained so popular over the last couple of years.

Used by bigger companies

Python is widely used by bigger technology companies. When bigger tech companies (think companies such as Google) use Python, the engineers that work there also use it. If developers use Python at their jobs, they will take it to their next job and pass the knowledge on. In addition, Python continues to be used organically when these developers use their knowledge of this language in any of their personal projects as well, further spreading its usage.

Plenty of styles are not used

In Python, whitespace is important, but in other languages such as JavaScript and C++ it is not. The whitespace is used to dictate the scope of the statements in that indent. By making whitespace important, it reduces the need for things like braces and semicolons in your code. That reduction alone can make your code look simpler and cleaner. People are always more willing to try a language that looks and feels cleaner, because it seems psychologically easier to learn.

Variety of libraries and third-party support

Being around as long as it has, Python has plenty of built-in functionality. It has an extremely large standard library with plenty of things that you can use in your code. On top of that, it has plenty of third-party support libraries that make things even easier. All of this gained functionality allows programmers to focus on the more important logic that is vital to their application's core functionality. This makes programmers more efficient, and who doesn't like efficiency?

Object oriented

As mentioned earlier, Python is an object-oriented programming language. Due to it being object oriented, more people are likely to adopt it, because object-oriented programming allows developers to model their code very similarly to real-world behavior.

Built-in testing

Python allows you to import a package called unittest. This package is a full unit testing suite with setup and teardown functions. Because it is built in, it gives developers something stable to use to test their applications.
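As a quick, hedged illustration of that built-in support (the class and test names below are invented for this example), a minimal unittest test case with setup and teardown looks like this:

import unittest


class TemperatureConverterTest(unittest.TestCase):
    def setUp(self):
        # Runs before every test method.
        self.freezing_c = 0

    def tearDown(self):
        # Runs after every test method.
        self.freezing_c = None

    def test_celsius_to_fahrenheit(self):
        fahrenheit = self.freezing_c * 9 / 5 + 32
        self.assertEqual(fahrenheit, 32)


if __name__ == "__main__":
    unittest.main()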
Readability and learnability

As we mentioned earlier, the whitespace is important, therefore we do not need brackets and semicolons. Also, Python is dynamically typed, so it is easier to create and use variables while not really having to worry about their type. All of these topics could be difficult for new programmers to learn. Python makes it easier by removing some of the difficult parts and having nicer-looking code. The reduction of difficulty allows people to choose Python as their first programming language more often than others. (This was my first programming language.)

Well documented

Building upon the standard libraries and the vast amount of third-party packages, the code in those applications is usually well documented. They tend to have plenty of helpful comments and tons of additional documentation to explain what is happening. From a developer standpoint this is crucial. Having great documentation can make or break a language's usage for me.

Multiple applications

To top it off, Python has many applications it can be used in. It can be used to develop games for fun, web applications to aid businesses, and data science applications. The wide variety of usage attracts more and more people to Python, because when you learn the language you now have the power of versatility. The scope of applications is vast.

With all of these benefits, who wouldn't consider Python as their next language of choice? There are many options out there, but Python tends to be superior when it comes to readability, support, documentation, and its wide application usage.

About the author

Antonio Cucciniello is a Software Engineer with a background in C, C++ and JavaScript (Node.js) from New Jersey. His most recent project, called Edit Docs, is an Amazon Echo skill that allows users to edit Google Drive files using their voice. He loves building cool things with software, reading books on self-help and improvement, finance, and entrepreneurship. Follow him on Twitter @antocucciniello, and follow him on GitHub here: https://github.com/acucciniello.