
Tech Guides

852 Articles

Why learn machine learning as a non-techie?

Natasha Mathur
11 Sep 2018
9 min read
“...what we want is a machine that can learn from experience...” ~ Alan Turing, 1947

Thanks to artificial intelligence, Turing’s vision is coming true. Machines are learning from others’ experience (using training datasets) and from their own as well. Machines can now play chess, Go, and other games; they can help predict cancer, manage your day, summarize today’s news for you, edit your essays, identify your face, and even mimic dance moves and facial expressions.

Come to think of it, every job role and career demands that you learn from experience, improve over time, and explore new ways to do things. Machines are very effective at the first two, but humans still have an edge when it comes to innovative thinking. Imagine what you could achieve if you put together your mind with that of an efficient learning algorithm!

You might think that artificial intelligence and machine learning are dense, impenetrable fields limited to research labs and textbooks. Does that mean only software engineers and researchers can dream of making it into this fascinating field? Not quite. We’ll unpick machine learning in the following sections and present our case for why it makes sense for everyone to understand this field better. Machine learning is, potentially, a first-class ticket to an exciting career, whether you are starting off fresh from college or considering a career switch.

Beyond the artificial intelligence and machine learning hype

Artificial intelligence is simply an area of computing that solves complex real-world problems. Yes, research still happens in universities, and yes, data scientists are still exploring the limits of artificial intelligence in forward-thinking businesses, but it's much more than that. AI is so pervasive - and mysterious - that its applications hide in plain sight. Look around you carefully: from Netflix recommending personalized content to its 130 million viewers, to YouTube’s video search and automatic captions, to Amazon’s shopping recommendations, Instagram hashtags, Snapchat filters, spam filters on your Gmail, and virtual assistants like Siri on our smartphones, artificial intelligence and machine learning techniques are in action everywhere. This means that, as a user, you are already affected by algorithms every day. The question, then, is whether you will be the person whose career is limited by algorithms or the one whose career is propelled by them.

Why get into artificial intelligence development as a non-programmer?

Artificial intelligence offers a compelling blend of knowledge, high salaries, and some really great opportunities. A non-programming background does not have to deter your growth in the AI field. In fact, your background can give you an edge over traditional software developers and data scientists in terms of domain awareness and a better understanding of what the system should do, what it should look for, and how it should make users feel. Below are some reasons why you should make the jump into AI.

Machine learning can help you be better at your current job

How, you may ask? Take a news reporter or editor’s job, for example. They must possess a blend of research- and analysis-centric capabilities, a creative set of skills, and the speed to come up with timely, quality articles on topics of interest to their readers. A data journalist or a writer with machine learning experience could quickly find great topics to write on with the help of machine learning-based web scraping apps.
Also, they could let the data lead them to unique stories that are emerging before traditional news reporters find their way to them. They could further get a quick summary of multiple perspectives on a given topic using custom-built news feed algorithms. They could then find further research resources by tweaking their search parameters, even adding quality filters on top to only allow high-quality citations. This kind of writer has cut down on the time they spend finding and understanding topics - which means more time to actually write compelling pieces and to connect with real sources for further insight. Algorithms can now also find and correct language issues in writing, which means editors can spend more time improving content quality from a scope perspective. You can quickly start to see how artificial intelligence can complement the work you do and help you grow in your career. Yes, all this sounds lovely in theory, but is it really happening in practice?

There are others like you who are successfully exploring machine learning

Don’t believe me? Mason Fish, a software engineer at Docker, Inc., was earlier a musician. He earned his bachelor’s and master’s degrees from two different music conservatories and, after graduating, worked for five years as a professional musician. But today he helps build and maintain services for Docker, a tool used by software engineers all over the world! This is just one case of a non-programmer diving into the computer science world. When musicians can learn to code and get core developer jobs in cutting-edge tech companies, it is not far-fetched to say they can also learn to build machine learning models. Below are some examples of non-programmers of varied experience levels who are exploring the machine learning world.

Per Harald Borgen, an economics graduate, was able to boost sales at his workplace, Xeneta, using machine learning algorithms, an accomplishment that helped accelerate his career. You can read his blog to see how he transformed from a machine learning newbie into a seasoned practitioner. Another example is 14-year-old Tanmay Bakshi, who started a YouTube channel at just seven years of age, where he teaches coding, algorithms, AI, and machine learning concepts. Similarly, Sean Le Van created an AI chatbot when he was 14 years old using ML algorithms.

Rosebud Anwuri is another great example, as she switched from chemical engineering to data science. “My first exposure to Data Science was from a book that had nothing to do with Data Science,” writes Anwuri on her blog. She created her first data science learning path from an answer on Quora last year. Fast forward to this year: she has been invited to speak at Stanford’s Women in Data Science Conference in Nigeria and has facilitated a workshop at The Women in Machine Learning and Data Science, among others. She also writes on machine learning and data science on her blog.

Like Anwuri, Sce Pike dreamed of being an artist or singer in college and majored in fine arts and anthropology. Pike went from art to web design to “human factors design,” which involves human-machine interactions, for the telecommunications giant Qualcomm. In addition, Pike started her own company, IOTAS, which offers smart-home services to renters and homeowners. “I have had to approach my work with logic, research, and great design. Looking back, I’m amazed where I am now,” says Sce Pike.
Read also: Data science for non-techies: How I got started (Part 1)

Adapt or perish in the oncoming job automation wave of the fourth industrial revolution

OK, so maybe you’re happy with how your career is growing anyway. Be warned, though: your job may not look the same even in the next few years. Automation is expected to replace up to 30% of jobs in the next 10 years, so upskilling in machine learning is a wise choice. Last month, the Bank of England’s Chief Economist warned that 15 million jobs in Britain could be at stake because of artificial intelligence. Machine learning as a skill could help you stay relevant in the future and prepare for what’s being called “the third machine age”.

You can develop machine learning apps with no to minimal coding experience

Thanks to great advancements by big tech companies and open source projects, machine learning today is accessible to people with varying degrees of programming experience - from new developers to those who have never written a line of code in their lives. So, whether you’re a curious web/UX designer, a news reporter, an artist, a school student, a filmmaker, or an NGO worker, you will find good use for machine learning in your field. There are machine learning tools for users with varying levels of experience, and there are certain applications you can build even today: image and text classification with neural networks, facial recognition, gaming bots, music generation, object detection, and so on. One of these, text classification, is sketched below.
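As a taste of the text classification use case just mentioned, here is a minimal sketch using scikit-learn's bag-of-words features and a naive Bayes classifier; the tiny labeled dataset is invented purely for illustration:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# A tiny invented dataset: headlines labeled by topic.
headlines = [
    "stock markets rally on strong earnings",
    "central bank holds interest rates steady",
    "team wins the championship final",
    "star striker signs a new contract",
]
topics = ["finance", "finance", "sport", "sport"]

# Bag-of-words features feeding a naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(headlines, topics)

print(model.predict(["bank cuts rates again", "coach praises the team"]))
# expected: ['finance' 'sport']

A real application would, of course, need far more training data, but the shape of the workflow - label examples, fit a pipeline, predict - stays exactly this small.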
Machine learning skills are highly rewarded

Machine learning is a nascent field where demand far outweighs supply. According to research done by Indeed.com, the number one job requirement in AI is that of a machine learning engineer, with data scientist jobs taking the second spot. In fact, AI researchers can earn more than a million dollars per year, and the AI geniuses at Elon Musk’s OpenAI are living proof of this: OpenAI paid its top AI researcher, Ilya Sutskever, more than $1.9 million back in 2016, and another leading OpenAI researcher, Ian Goodfellow, was paid more than $800,000.

Machine learning is not hard to learn. It might seem intimidating at first, but once you get the basics right, the rest of the ML journey becomes easier. If you’re convinced that ML is for you but are confused about how to get started, don’t worry, we’ve got you covered. To help you get started, here is a non-programmer’s guide to learning machine learning. So, yes, it doesn’t matter if you’re a non-programmer, a musician, a librarian, or a student: the future is AI-driven, so don’t be afraid to take that dive into machine learning. As Robert Frost said, “Two roads diverged in a wood, and I took the one less traveled by, And that has made all the difference”.

8 Machine learning best practices [Tutorial]
Google introduces Machine Learning courses for AI beginners
Top languages for Artificial Intelligence development


Quantum expert Robert Sutor explains the basics of Quantum Computing

Packt Editorial Staff
12 Dec 2019
9 min read
What if we could do chemistry inside a computer instead of in a test tube or beaker in the laboratory? What if running a new experiment was as simple as running an app and having it completed in a few seconds? For this to really work, we would want it to happen with complete fidelity. The atoms and molecules as modeled in the computer should behave exactly like they do in the test tube. The chemical reactions that happen in the physical world would have precise computational analogs. We would need a completely accurate simulation.

If we could do this at scale, we might be able to compute the molecules we want and need. These might be new materials for shampoos or even alloys for cars and airplanes. Perhaps we could more efficiently discover medicines that are customized to your exact physiology. Maybe we could get better insight into how proteins fold, thereby understanding their function, and possibly create custom enzymes to positively change our body chemistry. Is this plausible? We have massive supercomputers that can run all kinds of simulations. Can we model molecules in the above ways today?

This article is an excerpt from the book Dancing with Qubits, written by Robert Sutor. Robert helps you understand how quantum computing works and delves into the math behind it with this quantum computing textbook.

Can supercomputers model chemical simulations?

Let’s start with C8H10N4O2 – 1,3,7-Trimethylxanthine. This is a very fancy name for a molecule that millions of people around the world enjoy every day: caffeine. An 8-ounce cup of coffee contains approximately 95 mg of caffeine, and this translates to roughly 2.95 × 10^20 molecules. Written out, this is 295,000,000,000,000,000,000 molecules. A 12-ounce can of a popular cola drink has 32 mg of caffeine, the diet version has 42 mg, and energy drinks often have about 77 mg.

These numbers are large because we are counting physical objects in our universe, which we know is very big. Scientists estimate, for example, that there are between 10^49 and 10^50 atoms in our planet alone. To put these values in context, one thousand = 10^3, one million = 10^6, one billion = 10^9, and so on. A gigabyte of storage is one billion bytes, and a terabyte is 10^12 bytes.

Getting back to the question I posed at the beginning of this section, can we model caffeine exactly on a computer? We don’t have to model the huge number of caffeine molecules in a cup of coffee, but can we fully represent a single molecule at a single instant? Caffeine is a small molecule and contains protons, neutrons, and electrons. If we just look at the energy configuration that determines the structure of the molecule and the bonds that hold it all together, the amount of information to describe this is staggering. The number of bits, the 0s and 1s, needed is approximately 10^48: 1,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000. And this is just one molecule!

Yet somehow nature manages to deal quite effectively with all this information. It handles everything from the single caffeine molecule, to all those in your coffee, tea, or soft drink, to every other molecule that makes up you and the world around you. How does it do this? We don’t know! Of course, there are theories, and these live at the intersection of physics and philosophy. However, we do not need to understand it fully to try to harness its capabilities. We have no hope of providing enough traditional storage to hold this much information.
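As a quick back-of-the-envelope check of the molecule count above, here is a short Python sketch; the molar mass of caffeine (roughly 194.19 g/mol) is an assumed value not stated in the excerpt:

AVOGADRO = 6.02214076e23       # molecules per mole
MOLAR_MASS_CAFFEINE = 194.19   # g/mol for C8H10N4O2 (assumed)

grams = 0.095                  # 95 mg of caffeine in an 8-ounce cup
molecules = grams / MOLAR_MASS_CAFFEINE * AVOGADRO
print(f"{molecules:.2e}")      # ~2.95e+20, matching the figure quoted above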
Our dream of exact representation appears to be dashed. This is what Richard Feynman meant in his quote: “Nature isn’t classical.” However, 160 qubits (quantum bits) could hold 2^160 ≈ 1.46 × 10^48 bits while the qubits were involved in a computation. To be clear, I’m not saying how we would get all the data into those qubits, and I’m also not saying how many more we would need to do something interesting with the information. It does give us hope, however. In the classical case, we will never fully represent the caffeine molecule. In the future, with enough very high-quality qubits in a powerful quantum computing system, we may be able to perform chemistry on a computer.

How quantum computing is different from classical computing

I can write a little app on a classical computer that can simulate a coin flip. This might be for my phone or laptop. Instead of heads or tails, let’s use 1 and 0. The routine, which I call R, starts with one of those values and randomly returns one or the other. That is, 50% of the time it returns 1 and 50% of the time it returns 0. We have no knowledge whatsoever of how R does what it does. When you see “R,” think “random.” This is called a “fair flip”: it is not weighted to slightly prefer one result over the other. Whether we can produce a truly random result on a classical computer is another question. Let’s assume our app is fair.

If I apply R to 1, half the time I expect 1 and the other half 0. The same is true if I apply R to 0. I’ll call these applications R(1) and R(0), respectively. If I look at the result of R(1) or R(0), there is no way to tell if I started with 1 or 0. This is just like a secret coin flip, where I can’t tell whether I began with heads or tails just by looking at how the coin has landed. By “secret coin flip,” I mean that someone else has flipped it and I can see the result, but I have no knowledge of the mechanics of the flip itself or the starting state of the coin.

If R(1) and R(0) are randomly 1 and 0, what happens when I apply R twice? I write this as R(R(1)) and R(R(0)). It’s the same answer: a random result with an equal split. The same thing happens no matter how many times we apply R. The result is random, and we can’t reverse things to learn the initial value.

Now for the quantum version. Instead of R, I use H. It too returns 0 or 1 with equal chance, but it has two interesting properties. First, it is reversible: though it produces a random 1 or 0 starting from either of them, we can always go back and see the value with which we began. Second, it is its own reverse (or inverse) operation: applying it two times in a row is the same as having done nothing at all.

There is a catch, though. You are not allowed to look at the result of what H does if you want to reverse its effect. If you apply H to 0 or 1, peek at the result, and apply H again to that, it is the same as if you had used R. If you observe what is going on in the quantum case at the wrong time, you are right back at strictly classical behavior. To summarize using the coin language: if you flip a quantum coin and then don’t look at it, flipping it again will yield the heads or tails with which you started. If you do look, you get classical randomness.
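A small numpy sketch (not from the book) illustrates the two properties claimed for H; the 2x2 matrix below is the standard Hadamard gate:

import numpy as np

H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)  # the Hadamard gate, our reversible "quantum coin flip"
zero = np.array([1.0, 0.0])           # the state |0>

once = H @ zero    # equal amplitudes: measuring now yields 0 or 1 with probability 1/2 each
twice = H @ once   # applying H a second time undoes the first application

print(np.round(once, 3))   # [0.707 0.707]
print(np.round(twice, 3))  # [1. 0.] -- back to |0>, provided we didn't measure in between

Peeking corresponds to a measurement, which collapses the state; after that, the second application of H acts on a plain 0 or 1 again, reproducing the classical R behavior described above.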
A second area where quantum is different is in how we can work with simultaneous values. Your phone or laptop uses bytes as individual units of memory or storage. That’s where we get phrases like “megabyte,” which means one million bytes of information. A byte is further broken down into eight bits. Each bit can be a 0 or 1. Doing the math, each byte can represent 2^8 = 256 different numbers composed of eight 0s or 1s, but it can only hold one value at a time. Eight qubits, by contrast, can represent all 256 values at the same time. This is through superposition, but also through entanglement, the way we can tightly tie together the behavior of two or more qubits. This is what gives us the (literally) exponential growth in the amount of working memory.

How quantum computing can help artificial intelligence

Artificial intelligence and one of its subsets, machine learning, are extremely broad collections of data-driven techniques and models. They are used to help find patterns in information, learn from the information, and automatically perform more “intelligently.” They also give humans help and insight that might have been difficult to get otherwise. Here is a way to start thinking about how quantum computing might be applicable to large, complicated, computation-intensive systems of processes such as those found in AI and elsewhere. These three cases are, in some sense, the “small, medium, and large” ways quantum computing might complement classical techniques:

- There is a single mathematical computation somewhere in the middle of a software component that might be sped up via a quantum algorithm.
- There is a well-described component of a classical process that could be replaced with a quantum version.
- There is a way to avoid the use of some classical components entirely in the traditional method because of quantum, or the entire classical algorithm can be replaced by a much faster or more effective quantum alternative.

As I write this, quantum computers are not “big data” machines. This means you cannot take millions of records of information and provide them as input to a quantum calculation. Instead, quantum may be able to help where the number of inputs is modest but the computations “blow up” as you start examining relationships or dependencies in the data. In the future, however, quantum computers may be able to input, output, and process much more data. Even if it is just theoretical now, it makes sense to ask if there are quantum algorithms that can be useful in AI someday.

To summarize, we explored how quantum computing works and different applications of artificial intelligence in quantum computing. Get the book Dancing with Qubits by Robert Sutor, in which he explores the inner workings of quantum computing. The book entails some sophisticated mathematical exposition and is therefore best suited for those with a healthy interest in mathematics, physics, engineering, and computer science.

Intel introduces cryogenic control chip, ‘Horse Ridge’ for commercially viable quantum computing
Microsoft announces Azure Quantum, an open cloud ecosystem to learn and build scalable quantum solutions
Amazon re:Invent 2019 Day One: AWS launches Braket, its new quantum service and releases


How do Data Structures and Data Models differ?

Amey Varangaonkar
21 Dec 2017
7 min read
Note: The following article is an excerpt taken from the book Statistics for Data Science, authored by James D. Miller. The book presents interesting techniques through which you can leverage the power of statistics for data manipulation and analysis.

In this article, we will be zooming the spotlight in on data structures and data models, and understanding the difference between the two.

Data structures

Data developers will agree that whenever one is working with large amounts of data, the organization of that data is imperative. If that data is not organized effectively, it will be very difficult to perform any task on it, or at least to perform that task efficiently. If the data is organized effectively, then practically any operation can be performed on it easily. A data or database developer will therefore organize the data into what are known as data structures. (The original article shows an image of a simple binary tree here, where the data is organized efficiently by structuring it; a code sketch of such a tree follows at the end of this section.)

A data structure can be defined as a method of organizing large amounts of data more efficiently so that any operation on that data becomes easy. Data structures are created in such a way as to implement one or more particular abstract data types (ADTs), which in turn will stipulate what operations can be performed on the data structure, as well as the computational complexity of those operations.

Note: In the field of statistics, an ADT is a model for data types where a data type is defined by its behavior from the point of view (POV) of users of that data, explicitly showing the possible values, the possible operations on data of this type, and the behavior of all of these operations.

Database design is then the process of using the defined data structures to produce a detailed data model, which will become the database. This data model must contain all of the required logical and physical design selections, as well as the physical storage parameters needed to produce a design in a Data Definition Language (DDL), which can then be used to create an actual database.

Note: There are varying degrees of data model; for example, a fully attributed data model would also contain detailed attributes for each entity in the model.

So, is a data structure a data model? No: a data structure is used to create a data model. Is this data model the same as the data models used in statistics? Let's see in the next section.
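As a stand-in for the binary tree figure the excerpt refers to, here is a minimal sketch of a binary search tree in Python; the class and function names are illustrative, not from the book:

class Node:
    """A node in a simple binary search tree: smaller keys go left, larger go right."""
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(root, key):
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def contains(root, key):
    # Each comparison discards one side of the remaining tree,
    # which is the efficiency the section describes.
    while root is not None:
        if key == root.key:
            return True
        root = root.left if key < root.key else root.right
    return False

root = None
for k in [8, 3, 10, 1, 6]:
    root = insert(root, k)
print(contains(root, 6), contains(root, 7))  # True False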
Data models

You will find that statistical data models are at the heart of statistical analytics. In the simplest terms, a statistical data model is defined as the following: a representation of a state, process, or system that we want to understand and reason about. In the scope of this definition, the data or database developer might agree that, in theory or in concept, one could use the same terms to define a financial reporting database, as it is designed to contain business transactions and is arranged in data structures that allow business analysts to efficiently review the data, so that they can understand or reason about particular interests they may have concerning the business. Data scientists develop statistical data models so that they can draw inferences from them and, more importantly, make predictions about a topic of concern. Data developers develop databases so that they can similarly draw inferences from them and, more importantly, make predictions about a topic of concern (although perhaps in some organizations, databases are more focused on past and current events (transactions) than on forward-thinking ones (predictions)).

Statistical data models come in a multitude of different formats and flavours (as do databases). These models can be equations linking quantities that we can observe or measure; they can also simply be sets of rules. Databases can be designed or formatted to simplify the entering of online transactions - say, in an order entry system - or for financial reporting when the accounting department must generate a balance sheet, income statement, or profit and loss statement for shareholders.

Note: I found this example of a simple statistical data model: Newton's Second Law of Motion, which states that the net sum of force acting on an object causes the object to accelerate in the direction of the force applied, and at a rate proportional to the resulting magnitude of the force and inversely proportional to the object's mass.

What's the difference?

Where or how does the reader find the difference between a data structure or database and a statistical model? At a high level, as we speculated in previous sections, one can conclude that a data structure/database is practically the same thing as a statistical data model. But when we take the time to drill deeper into the topic, you should consider the following key points:

- Although both the data structure/database and the statistical model could be said to represent a set of assumptions, the statistical model typically will be found to be much more keenly focused on a particular set of assumptions concerning the generation of some sample data, and similar data from a larger population, while the data structure/database more often than not will be more broadly based.
- A statistical model is often in a rather idealized form, while the data structure/database may be less perfect in the pursuit of a specific assumption.
- Both a data structure/database and a statistical model are built around relationships between variables.
- The data structure/database relationship may focus on answering certain questions, such as: What are the total orders for specific customers? What are the total orders for a specific customer who has purchased from a certain salesperson? Which customer has placed the most orders?
- Statistical model relationships are usually very simple, and focused on proving certain statements: females are shorter than males by a fixed amount; body mass is proportional to height; the probability that any given person will partake in a certain sport is a function of age, sex, and socioeconomic status.
- Data structures/databases are all about the act of summarizing data based on relationships between variables.

Relationships

The relationships between variables in a statistical model may be found to be much more complicated than simply straightforward to recognize and understand. An illustration of this is awareness of effect statistics. An effect statistic is one that shows or displays a difference in value that is associated with a difference in one or more other variables.
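To ground the database side of the comparison, here is a minimal sketch of the first aggregation question listed above, using Python's built-in sqlite3 module; the orders table and its rows are hypothetical. Contrast its simplicity with the effect-statistic relationships just described:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, salesperson TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [("Acme", "Jones", 120.0), ("Acme", "Smith", 75.5), ("Globex", "Jones", 300.0)],
)

# "What are the total orders for specific customers?"
for customer, total in conn.execute(
    "SELECT customer, SUM(amount) FROM orders GROUP BY customer"
):
    print(customer, total)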
Can you imagine the SQL query statements you'd use to establish a relationship between two database variables based upon one or more effect statistics? On this point, you may find that a data structure/database usually aims to characterize relationships between variables, while with statistical models, the data scientist looks to fit the model to prove a point or make a statement about the population in the model. That is, a data scientist endeavors to make a statement about the accuracy of an estimate of the effect statistic(s) describing the model!

One more note of interest: both a data structure/database and a statistical model can be seen as tools or vehicles that aim to generalize a population; a database uses SQL to aggregate or summarize data, and a statistical model summarizes its data using effect statistics. The above argument presented the notion that data structures/databases and statistical data models are, in many ways, very similar. If you found this excerpt to be useful, check out the book Statistics for Data Science, which demonstrates different statistical techniques for implementing various data science tasks such as pre-processing, mining, and analysis.


Does AI deserve to be so Overhyped?

Aaron Lazar
28 May 2018
6 min read
The short answer is yes, and no. The long answer is - well, read on to find out.

Several people have been asking the question, myself included, wondering whether artificial intelligence is just another passing fad, like Google Glass or nanotechnology. The hype for AI built up over the past few years, although if you look back at the '60s it seems to have started way back then. In the early '90s, and all the way to the early 2000s, a lot of media and television shows were talking about AI quite a bit. Going some 25 centuries further back, Aristotle speaks not just of thinking machines but of autonomous ones, in his book Politics:

"for if every instrument, at command, or from a preconception of its master's will, could accomplish its work (as the story goes of the statues of Daedalus; or what the poet tells us of the tripods of Vulcan, 'that they moved of their own accord into the assembly of the gods'), the shuttle would then weave, and the lyre play of itself; nor would the architect want servants, or the [1254a] master slaves." - Aristotle, Politics: A Treatise on Government, Book 1, Chapter 4

This imagery of AI has managed to sink into our subconscious minds over the centuries, propelling creative work, academic research, and industrial revolutions toward that goal. The thought of giving machines a mind of their own existed quite long ago, but recent advancements in technology have made it much clearer and more realistic.

The Rise of the Machines

The year is 2018. The 4th Industrial Revolution is happening, and intelligent automation has taken over. This is the point where I say no, AI is not overhyped. General Electric, for example, is a billion-dollar manufacturing company that has already invested in AI. GE Digital has AI running through several automated systems, and it even has its own IIoT platform called Predix. Similarly, in the field of healthcare, the implementation of AI is growing in leaps and bounds. The Google DeepMind project is able to process millions of medical records within minutes. Although this kind of research is in its early phase, Google is working closely with the Moorfields Eye Hospital NHS Foundation Trust to implement AI and improve eye treatment. AI startups focused on healthcare and allied areas such as genetic engineering are among the most heavily invested and venture capital-backed ones in recent times.

Computer vision, or image recognition, is one field where AI has really proven its power. Analysing datasets like Iris has never been easier, paving the way for more advanced use cases like automated quality checks in manufacturing units. In healthcare, AI has helped sift through tonnes of data, helping doctors diagnose illnesses quicker, manufacture more effective and responsive drugs, and monitor patients. The list is endless, clearly showing that AI has made its mark in several industries.

Back (up) to the Future

Now, if you talk about the commercial implementations of AI, they're still quite far-fetched at the moment. Take the same computer vision application, for example. Its implementation will be a huge breakthrough in autonomous vehicles. But if researchers have managed to obtain only 80% accuracy for object recognition on roads, the battle is not close to being won! Even if they do improve, do you think driverless vehicles are ready to drive in the snow, through the rain, or in storms?
I remember a few years ago, business process outsourcing was one industry, at least in India, that was quite fearful of the entry of AI and autonomous systems that might take over its jobs. Machines are only capable of performing 60-70% of the BPO processes in insurance, and with changing customer requirements and simultaneously falling patience levels, these numbers are terrible! It looks like the end of Moore's law is here - for AI, I mean. You can't really expect AI to have the same exponential growth that computers did decades ago. There are a lot of unmet expectations in several fields, which has a considerable number of people thinking that AI isn't going to solve their problems now, and they're right. It is probably going to take a few more years to mature, making it a thing of the future, not of the present. Is AI overhyped now? Yeah, maybe?

What I think

Someone once said hype is a double-edged sword. If it's not enough, innovation may become obscure; if it's too much, expectations will become unreasonable. It's true that AI has several beneficial use cases, but what about the fairness of such systems? Will machines continue to think the way they're supposed to, or will they start finding their own missions that don't involve benefits to the human race? At the same time, there's also the question of security and data privacy. GDPR will come into effect in a few days, but what about the prevailing issues of internet security?

I had an interesting discussion with a colleague yesterday. We were talking about what the impact of AI could be for us as end customers in a developing and young country like India. Do we really need to fear losing our jobs, and will we be able to reap the benefits of AI directly, or will the impact be indirect? The answer is probably yes, but not so soon. If we drew up a hierarchy of needs pyramid for AI, each field would have to climb several stages before fully leveraging it: collecting data, storing it effectively, exploring it, then aggregating it, optimising it with the help of algorithms, and finally achieving AI. That's bound to take a LOT of time!

Honestly speaking, a country like India still lacks much implementation of AI in several fields. The major customers of AI, apart from some industrial giants, will obviously be the government, although that is sure to take at least a decade or so, keeping in mind the several aspects to be accomplished first. In the meantime, budding AI developers and engineers are scurrying to skill themselves up in the race to be in the cream of the crowd! And what about the rest of the world? Well, I can't speak for everyone, but if you ask me, AI is a really promising technology, and I think we need to give it some time - allow the industries and organisations investing in it enough time to let it evolve and ultimately benefit us customers, one way or another.

You can now make music with AI thanks to Magenta.js
Splunk leverages AI in its monitoring tools


RxSwift Part 1: Where to Start? Beginning with Hot and Cold Observables

Darren Karl
09 Feb 2017
6 min read
In the earlier articles, we gave a short introduction to RxSwift and talked about the advantages of the functional aspect of Rx, achieved by using operators and composing streams of operations. In my journey to discover and learn Rx, I was drawn to it after finding various people talking about its benefits. After I finally bought into the idea that I wanted to learn it, I began to read the documentation, and I was overwhelmed by how many objects, classes, and operators it covered, and by the load of terminology I encountered. The documentation was (and still is) there, but because I was still at page one, my elementary Rx vocabulary prevented me from actually being able to appreciate, maximize, and use RxSwift. I had to go through months of soaking in the documentation until it saturated my brain and things finally clicked. After talking with some members of the RxSwift community in Slack, I found that I wasn't the only one who had experienced this.

This is the gap. RxSwift is a beautifully designed API (I'll talk about why exactly, later), but I personally didn't know how long it would take to go from my working non-Rx knowledge to slowly learning the well-designed tools that Rx provides. The problem wasn't that the documentation was lacking, because it was sufficient. It was that while reading the documentation, I didn't even know what questions to ask or which documentation answered which of my questions. What I did know was programming concepts in the context of application development, in a non-Rx way. What I wanted to discover was how things would be done in RxSwift, along with the thought processes that led to the design of elegant units of code, such as operators like flatMap or concatMap, or constructs such as Subjects or Drivers.

This article aims to walk you through real programming situations that software developers encounter, while gradually introducing the Rx concepts that can be used. It assumes that you've read through the last two articles on RxSwift I've written, which are linked above, and that you've found and read some of the documentation but don't know where to start. It also assumes that you're familiar with how network calls or database queries are made and how to wrap them using Rx.

A simple queuing application

Let's start with something simple, such as a mobile application for queuing. We can have multiple queues, each containing zero to many people, in order. Let's say that we have the following code that performs a network query to get the queue data from your REST API. We assume that these are network requests wrapped using Observable.create():

private func getQueues() -> Observable<[Queue]>
private func getPeople(in queue: Queue) -> Observable<[Person]>
private var disposeBag = DisposeBag()

An example of the Observable code for getting the queue data is available here.

Where do I write my subscribe code?
Initially, a developer might write the following code in the viewDidLoad() method and bind it to some UITableView:

func viewDidLoad() {
    getQueues()
        .subscribeOn(ConcurrentDispatchQueueScheduler(queue: networkQueue))
        .observeOn(MainScheduler.instance)
        .bindTo(tableView.rx.items(cellIdentifier: "Cell")) { index, model, cell in
            cell.textLabel?.text = model
        }
        .addDisposableTo(disposeBag)
}

However, if the getQueues() observable code loads the data from a cold observable network call then, by definition, the cold observable will only perform the network call once during viewDidLoad(), load the data into the views, and be done. The table view will not update when the queue is updated by the server, unless the view controller gets disposed and viewDidLoad() is performed again. Note that should the network call fail, we can use the catchError() operator right after it and swap in a database query or a cache read instead, assuming we've persisted the queue data to a file or database. This way, we're assured that this view controller will always have data to display.

Introduction to cold and hot observables

By cold observable, we mean that the observable code (that is, the network call to get the data) will only begin emitting items on subscription (which currently happens in viewDidLoad). This is the difference between a hot and a cold observable: hot observables can be emitting items even when there are no observers subscribed, while cold observables will only run once an observer is subscribed. Examples of cold observables are things you've wrapped using Observable.create(), while examples of hot observables are things like UIButton.rx.tap or UITextField.rx.text, which can be emitting items such as Void for a button press or String for a text field even when there aren't any observers subscribed to them. Inherently, we are wrong to use a cold observable here, because its definition will simply not meet the demands of our application.

A quick fix might be to write it in viewWillAppear

Going back to our queuing example, one could write the code in the viewWillAppear() life cycle method so that it refreshes its data every time the view appears. The problem that arises from this solution is that we perform a network query too frequently. Furthermore, every time viewWillAppear is called, a new subscription is added to the disposeBag. If, for some reason, the last subscription has not been disposed of (that is, it is still processing and emitting items and has not yet entered the onCompleted or onError state) and you begin to perform a network query again, then you have a possible memory leak! Here's an example of the (impractical) code that refreshes on every view. The code will work (it will refresh every time), but this isn't good code:

public override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)
    getQueues()
        .bindTo(tableView.rx.items(cellIdentifier: "Cell")) { index, model, cell in
            cell.textLabel?.text = model
        }
        .addDisposableTo(self.disposeBag)
}

So, if we don't want to query only one time and we don't want to query too frequently, it begs the question: "How many times should the queries really be performed?" In part 2 of this article, we'll discuss what the right amount of querying is.

About the Author

Darren Karl Sapalo is a software developer, an advocate of UX, and a student taking up his Master's degree in Computer Science. He enjoyed developing games in his free time when he was twelve.
He finished his undergraduate thesis on computer vision and took up industry work with Apollo Technologies Inc., developing for both the Android and iOS platforms.


What is Android Studio and how does it differ from other IDEs?

Natasha Mathur
30 May 2018
5 min read
Android Studio is a powerful and sophisticated development environment, designed with the specific purpose of developing, testing, and packaging Android applications. It can be downloaded, along with the Android SDK, as a single package. It is a collection of tools and components, many of which are installed and updated independently of each other. Android Studio is not the only way to develop Android apps; there are other IDEs, such as Eclipse and NetBeans, and it is even possible to develop a complete app using nothing more than Notepad and the command line.

This article is an excerpt from the book Mastering Android Studio 3, written by Kyle Mew.

Built for a purpose, Android Studio has attracted a growing number of third-party plugins that provide a large array of valuable functions not available directly via the IDE. These include plugins to speed up build times, debug a project over Wi-Fi, and many more. Despite Android Studio arguably being the superior tool, there are some very good reasons for having stuck with another IDE, such as Eclipse. Many developers develop for multiple platforms, which makes Eclipse a good choice of tool, and every developer has deadlines to meet; getting to grips with unfamiliar software can slow them down considerably at first. But Android Studio is the official IDE for Android, and every Android app developer should be aware of how it differs from the other IDEs, so that they can see what works for them.

How Android Studio differs

There are many ways in which Android Studio differs from other IDEs and development tools. Some of these differences are quite subtle, such as the way support libraries are installed, while others, for instance the build process and the UI design, are profoundly different. Before taking a closer look at the IDE itself, it is a good idea to first understand what some of these important differences are. The major ones are listed here:

UI development: The most significant difference between Studio and other IDEs is its layout editor, which is far superior to any of its rivals, offering text, design, and blueprint views; constraint layout tools for every activity or fragment; easy-to-use theme and style editors; and a drag-and-drop design function. The layout editor also provides many tools unavailable elsewhere, such as a comprehensive preview function for viewing layouts on a multitude of devices, and simple-to-use theme and translation editors.

Project structure: Although the underlying directory structure remains the same, the way Android Studio organizes each project differs considerably from its predecessors. Rather than using workspaces as in Eclipse, Studio employs modules that can easily be worked on together without having to switch workspaces. This difference in structure may seem unusual at first, but any Eclipse user will soon see how much time it can save once it becomes familiar.

Code completion and refactoring: The way that Android Studio intelligently completes code as you type makes it a delight to use. It regularly anticipates what you are about to type, and often a whole line of code can be entered with no more than two or three keystrokes. Refactoring, too, is easier and more far-reaching than in alternative IDEs such as Eclipse and NetBeans. Almost anything can be renamed, from local variables to entire packages.
Emulation: Studio comes equipped with a flexible virtual device editor, allowing developers to create device emulators to model any number of real-world devices. These emulators are highly customizable, both in terms of form factor and hardware configuration, and virtual device profiles can be downloaded from many manufacturers. Users of other IDEs will be familiar with Android AVDs already, although they will certainly appreciate the preview features found in the Design tab.

Build tools: Android Studio employs the Gradle build system, which performs the same functions as the Apache Ant system that many Java developers will be familiar with. It does, however, offer a lot more flexibility and allows for customized builds, enabling developers to create APKs that can be uploaded to TestFlight, or to produce demo versions of an app, with ease. It is also the Gradle system that allows for the modular nature of Studio projects: rather than each library or third-party SDK being compiled as a JAR file, Studio builds each of these using Gradle.

These are the most far-reaching differences between Android Studio and other IDEs, but there are many other unique features. Studio provides the powerful JUnit test facility and allows for cloud platform support and even Wi-Fi debugging. It is also considerably faster than Eclipse - which, to be fair, has to cater for a wider range of development needs rather than just one - and it can run on less powerful machines. Android Studio also provides an amazing time-saving device in the form of Instant Run. This feature cleverly builds only the part of a project that has been edited, meaning that developers can test small changes to code without having to wait for a complete build to be performed for each test. This feature can bring waiting time down from minutes to almost zero.

To learn more about Android Studio and how to build faster, smoother, and error-free Android applications, be sure to check out the book Mastering Android Studio 3.

The art of Android Development using Android Studio
Getting started with Android Things
Unit Testing apps with Android Studio

3 cybersecurity lessons for e-commerce website administrators

Guest Contributor
25 Jun 2019
8 min read
In large part, the security of an ecommerce company is the responsibility of its technical support team and ecommerce software vendors. In reality, cybercriminals often exploit the security illiteracy of staff to hit a company. Of the whole ecommerce team, web administrators are the most frequent targets of hacker attacks, as they control access to the admin panel and its wealth of sensitive data. Having broken into the admin panel, criminals can take over an online store, disrupt its operation, retrieve customers' confidential data, steal credit card information, transfer payments to their own accounts, and do more harm to business owners and customers.

Online retailers contribute greatly to the security of their company when they educate web administrators on where security threats can come from and what measures they can take to prevent breaches. We have summarized some key lessons below. It's time for a quick cybersecurity class!

Lesson 1. Mind password policy

Starting with the basics of cybersecurity, we will proceed to more sophisticated rules in the lessons that follow. The importance of a secure password policy may seem obvious, yet it's still shocking how careless people can be when choosing a password. In ecommerce, web administrators set the credentials for accessing the admin panel, and they can "help" cybercriminals greatly if they neglect basic password rules.

Never use the same or similar passwords to log into different systems. In general, sticking to the same patterns when creating passwords (for example, using a date of birth) is risky. Typically, people have a number of personal profiles on social networks and email services. If they use identical passwords for all of them, cybercriminals can steal the credentials of just one social media profile to crack the others. If employees are that negligent about access to corporate systems, they endanger the security of the company. Let's outline the worst-case scenario: criminals take advantage of the leaked database of 167 million LinkedIn accounts to hack a large online store. As soon as they see the password of its web administrator (the employment information is stated in the profile, just for hackers' convenience), they try to apply the password to get access to the admin panel. What luck! The way into this web store was all too easy.

Use strong and impersonalized passwords. We need to introduce the notion of doxing to fully explain the importance of this rule. Doxing is the process of collecting pieces of information from social accounts to ultimately create a virtual profile of a person. Cybercriminals engage in doxing to crack a password to an ecommerce platform by using an admin's personal information in it. Therefore, a strong password shouldn't contain personal details (like dates, names, age, etc.) and must consist of eight or more characters featuring a mix of letters, numbers, and unique symbols. One way to generate such a password is sketched below.
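As a small illustration of the strong password rule (this sketch is ours, not from the original article), here is one way to generate such a password with Python's standard secrets module; the length and symbol set are arbitrary choices:

import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*()-_=+"

def strong_password(length: int = 16) -> str:
    # Redraw until the password mixes letters, digits, and symbols,
    # satisfying the "mix of characters" rule described above.
    while True:
        pwd = "".join(secrets.choice(ALPHABET) for _ in range(length))
        if (any(c.isalpha() for c in pwd)
                and any(c.isdigit() for c in pwd)
                and any(not c.isalnum() for c in pwd)):
            return pwd

print(strong_password())

The secrets module draws from a cryptographically secure source, unlike the random module, which should never be used for credentials.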
Lesson 2. Watch out for phishing attacks

With the wealth of employment information people leave in social accounts, hackers hold all the cards for implementing targeted, rather than bulk, phishing attacks. When planning a malicious attack on an ecommerce business, criminals can search for employees' profiles, check their positions and responsibilities, and work out what company information they have access to. In such an easy way, hackers get to know a web store's administrator and follow up with a series of phishing attacks. Here are two possible scenarios of attacks.

When hackers target a personal computer. Having found the LinkedIn profile of a web administrator and obtained a personal email address, hackers can bombard them with disguised messages, for example, from a bank or the tax authorities. If the admin lets their guard down and clicks a malicious link, malware installs itself on their personal computer. Should they remotely log in to the admin panel, hackers steal their credentials and immediately set a new password. From this moment, they have taken over control of the web store. Hackers can also go a different way: they target the personal email of the web administrator with a phishing attack and succeed in taking it over. Let's say they have already found the URL of the admin panel by that time. All they have to do now is request a password change for the panel, click the confirmation link from the admin's email, and set a new password. In this scenario, the web administrator has made three security mistakes: using a personal email for work purposes, not changing the default admin URL, and taking the bait of a phishing email.

When hackers target a work computer. Here is how a cyberattack may unfold if web administrators have been reckless enough to disclose a work email online. This time, hackers create a targeted malicious email related to work activities. The admin might get a legitimate-looking email from FedEx informing them about delivery problems. Not alarmed, they open the email, click the link to learn the details, and compromise the security of the web store by giving away the credentials to the admin panel.

The main mistake in dealing with phishing attacks is to expect a fraudulent email to look suspicious. However, phishers falsify emails from real companies, so it can be easy to fall into the trap. Here are recommendations for ecommerce web administrators to follow:

- Don't use personal emails to log in to the admin panel.
- Don't make your work email publicly available.
- Don't use your work email for personal purposes (e.g., for registration on social networks).
- Watch out for links and downloads in emails. Always hover over a link prior to clicking it - in malicious emails, the destination URL doesn't match the expected destination website (a toy sketch of this check follows below).
- Remember that legitimate companies never ask for your credentials, credit card details, or any other sensitive information in emails.
- Be wary of emails with urgent notifications and deadlines - hackers often try to allay suspicion by provoking anxiety and panic in their victims.
- Engage two-step verification for the ecommerce admin panel.
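To make the link-hovering advice concrete, here is a toy sketch (standard library only; the email HTML snippet is invented) that flags a link whose visible text claims one domain while its href points to another:

from html.parser import HTMLParser
from urllib.parse import urlparse

# Invented snippet of email HTML: the visible text claims fedex.com,
# but the underlying href points somewhere else entirely.
EMAIL_HTML = '<a href="http://evil.example.net/track">https://www.fedex.com/delivery</a>'

class LinkChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.href = None
        self.text = ""

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.href = dict(attrs).get("href")

    def handle_data(self, data):
        self.text += data

checker = LinkChecker()
checker.feed(EMAIL_HTML)

shown = urlparse(checker.text.strip()).hostname
actual = urlparse(checker.href).hostname
if shown and actual and shown != actual:
    print(f"Suspicious link: text claims {shown} but points to {actual}")

This is essentially what your eye should do when you hover over a link: compare the domain the email shows you with the domain the link actually targets.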
Lesson 3. Stay alert while communicating with a hosting provider

Web administrators of companies that have chosen a hosted ecommerce platform for their e-shop will need to contact the technical support of their hosting provider now and then. Here, a cybersecurity threat comes unexpectedly: if hackers have compromised the security of the web hosting company, they can target its clients (ecommerce websites) as well. Admins are in serious danger if the hosting company stores their credentials unencrypted, since hackers can then get direct access to the admin panel of a web store. Otherwise, more sophisticated attacks are developed: cybercriminals can mislead web administrators by posing as tech support agents. When communicating with their hosting provider, web administrators should mind several rules to protect their confidential data and the web store from hacking.

Use a unique email and password to log in to your web hosting account. Using the same credentials for different work services or systems leads to a company security breach if the hosting company is hacked.

Never reveal any credentials on request of tech support agents. Once web administrators have shared their password to the admin panel, they can no longer use it to reliably authenticate themselves.

Track your company's communication with tech support. Web administrators can set up email notifications to track requests from team members to tech support and control what information is shared.

Time for an exam

As a rule, ecommerce software vendors and retailers do their best for the security of ecommerce businesses. Software vendors take the major role in providing for the security of SaaS ecommerce solutions (like Shopify or Salesforce Commerce Cloud), including the security of servers, databases, and the application itself. With IaaS solutions (like Magento), retailers need to put more effort into maintaining the security of the environment and system, staying current on security updates, conducting regular audits, and more (you can see the full list of Magento security measures as an example). Still, cybercriminals often target company employees to hack an online store. Retailers are responsible for educating their team on which security rules are compulsory to follow and how to identify malicious intent. In this article, we have outlined the fundamental security lessons for web administrators to learn in order to protect a web store against illicit access. In short, they should be careful with the personal information they publish online (in their social media profiles) and use unique credentials for different services and systems. There are no grades in our lessons; rather, an admin's contribution to the security of their company is the best evaluation of the knowledge they have gained.

About the Author

Tanya Yablonskaya is an Ecommerce Industry Analyst at ScienceSoft, an IT consulting and software development company headquartered in McKinney, Texas. After 2+ years of exploring the cryptocurrency and blockchain sphere, she has shifted her focus to the ecommerce industry. Delving into this enormous world, Tanya covers key challenges online retailers face and unveils a wealth of tools they can use to outpace competitors.

The US launched a cyber attack on Iran to disable its rocket launch systems; Iran calls it unsuccessful
All Docker versions are now vulnerable to a symlink race attack
12,000+ unsecured MongoDB databases deleted by Unistellar attackers

Cyber Security and the Internet of Things

Owen Roberts
12 Jun 2016
4 min read
We're living in a world that's more connected than we once ever thought possible. Even 10 years ago, the idea of our household appliances being connected to our Nokias was impossible to comprehend, but things have changed, and almost every week we seem to see another day-to-day item connected to the internet. Twitter accounts like @internetofShit are dedicated to pointing out every random item that is now connected to the internet, from smart wallets to video-linked toothbrushes to DRM-infused wine bottles. But behind all the laughing there is a very real caution: every connected device you add to your network gives attackers another potential hole to crawl through.

IoT security has simply not been given much attention by companies. Last year two security researchers managed to wirelessly hack into a Jeep Cherokee, first taking control of the entertainment system and windshield wipers before moving on to disable the accelerator; just months earlier a security expert managed to force a plane to fly sideways by making a single engine go into climb mode. In 2013, over 40 million credit card numbers were taken from US retailer Target after hackers got into the network via the HVAC company that worked with the retailer. The reaction to these events was huge, along with a multitude of editorials wondering how this could happen... while security experts wondered in turn how it took so long.

The problem until recently was that the IoT was seen mostly as a curio: a phone app that turns your lights on or sets the kettle at the right time was seen as a quaint little toy to mess around with for a bit, and it was hard for most to fully realize how it could tear a massive hole in your network security. On top of that, the speed at which these new gadgets enter the market keeps increasing; what used to take 3-4 years to reach the market now takes a year or less to capitalize on the latest hype, Kickstarter projects by those new to business are being sent out into the world, and homebrew is on the rise. To give an example of how this landscape could affect us, the French technology institute Eurecom downloaded some 32,000 firmware images from potential IoT device manufacturers and discovered 38 vulnerabilities across 123 products. These products were found in at least 140,000 devices accessible over the internet. Now imagine the total number of vulnerabilities across all IoT products on all networks; the potential number is scarily huge.

The wind is changing, slowly. In October, the IoT Security Summit is taking place in Boston, with representatives from both the FBI and US Homeland Security playing prominent roles as speakers. Experts are finally speaking up about the need to properly secure our interconnected devices. As the IoT becomes mainstream and interconnected devices become more affordable to the general public, we need to do all we can to ensure that potential security cracks are filled as soon as possible; every new connection is a potential entrance for attackers to break in, and many people simply have little to no knowledge of how to improve their computer security. While this will improve as time goes on, companies and developers need to be proactive in their advancement of IoT security.
Choosing not to do so will mean that the IoT becomes less of a tech revolution and more of a failure left by the wayside.

5 reasons to choose Kotlin over Java

Richa Tripathi
30 Apr 2018
3 min read
Java has long been a master of all in almost every field of application development, so Java developers have rarely had to wander far in search of other languages. However, things have changed with the steady evolution of Kotlin. No longer just "the other JVM language", Kotlin is rapidly gaining on Java's prominence. So, what makes this language stand out, and why is it growing in adoption for application development? What are the benefits of Kotlin vs Java, and how can it help developers? In this article, we're going to look at the top 5 reasons why Kotlin takes a superior stand over Java and why it will work best for your next development project.

Kotlin is more concise

Kotlin is far more concise than Java in many cases, solving the same problems with fewer lines of code. This improves code maintainability and readability, meaning engineers can write, read, and change code more effectively and efficiently. Kotlin-exclusive features such as type inference, smart casts, data classes, and properties help achieve this conciseness.

Kotlin's null-safety is great

NullPointerExceptions are a huge source of frustration for Java developers. Java allows you to assign null to any variable, but if you try to use an object reference that has a null value, brace yourself to encounter a NullPointerException! Kotlin's type system is designed to eliminate NullPointerExceptions from your code: it refuses to compile code that tries to assign or return null where null is not allowed. (A brief illustrative sketch of this, together with extension functions, follows at the end of this article.)

Combine the best of Functional and Procedural Programming

Each programming paradigm has its own set of pros and cons, and combining the power of both functional and procedural programming leads to better development and output. Kotlin offers many useful constructs, including higher-order functions, lambda expressions, operator overloading, lazy evaluation, and much more. Drawing on the strengths and weaknesses of both styles, Kotlin offers an expressive and intuitive coding style.

The power of Kotlin's extension functions

Kotlin's extensions are very useful because they allow developers to add methods to classes without making changes to their source code. You can add methods to classes on a per-user basis, extending the functionality of existing classes without inheriting functions and properties from other classes.

Interoperability with Java

When debating Kotlin vs Java, there is always a third option: use them both. Despite all the differences, Kotlin and Java are 100% interoperable; you can literally continue work on your old Java projects using Kotlin. You can call Kotlin code from Java, and you can call Java code from Kotlin. So it's possible to have Kotlin and Java classes side by side within the same project, and everything will still compile.

Undoubtedly, Kotlin has made many positive changes to the long-lived and widely used Java world. It helps you write safer, more reliable code with less work, making the life of programmers a lot easier. Kotlin is a good replacement for Java, and with time, more and more advanced features will be added to Kotlin's ecosystem, helping its popularity grow towards its apex and making the developer's world more promising.

Also read: Why are Android developers switching from Java to Kotlin? | Getting started with Kotlin programming
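To make the comparison concrete, here is a minimal, self-contained Kotlin sketch of the null-safety and extension-function points above. The class and function names are invented for illustration, and the email check is deliberately naive.

```kotlin
// Data class: equals(), hashCode(), toString(), and copy() are generated for us.
data class User(val name: String, val email: String?)

// Extension function: adds behaviour to String without touching its source.
// The validation rule here is deliberately naive - just an illustration.
fun String.looksLikeEmail(): Boolean = contains("@") && contains(".")

fun main() {
    val user = User("Ada", null)

    // email has type String?, so plain user.email.looksLikeEmail() will not compile.
    // The safe-call operator ?. and the Elvis operator ?: handle null explicitly.
    val valid = user.email?.looksLikeEmail() ?: false
    println("$user has a valid-looking email: $valid")   // ... false

    // copy() gives a concise immutable update.
    val updated = user.copy(email = "ada@example.com")
    println(updated.email?.looksLikeEmail())             // true
}
```

Compiled with any recent Kotlin version, direct access to the nullable field without a safe call is rejected at compile time, which is exactly the null-safety guarantee described above.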

Jakarta EE: Past, Present, and Future

David Heffelfinger
16 Aug 2018
10 min read
You may have heard some talk about a new Java framework called Jakarta EE. In this article we will cover what Jakarta EE actually is, how we got here, and what to expect when it's actually released.

History and Background

In September 2017, Oracle announced it was donating Java EE to the Eclipse Foundation.

Isn't Eclipse a Java IDE? Most Java developers are familiar with the hugely popular Eclipse IDE, so for many, the word "Eclipse" brings the IDE to mind. Not everybody knows that the Eclipse IDE is developed by the Eclipse Foundation, an open source foundation similar to the Apache Foundation and the Linux Foundation. In addition to the Eclipse IDE, the Eclipse Foundation develops several other Java tools and APIs, such as Eclipse Vert.x, Eclipse Yasson, and EclipseLink.

Java EE was the successor to J2EE, a wildly popular set of specifications for implementing enterprise software. In spite of its popularity, many J2EE APIs were cumbersome to use and required lots of boilerplate code. Sun Microsystems, together with the Java community as part of the Java Community Process (JCP), replaced J2EE with Java EE in 2006. Java EE introduced a much nicer, lightweight programming model, making enterprise Java development much easier than what could be accomplished with J2EE. J2EE was so popular that, to this day, it is incorrectly used as a generic term for all server-side Java technologies; many still refer to Java EE as J2EE and incorrectly assume Java EE is a bloated, convoluted technology. In short, J2EE was so popular that even Java EE can't shake its predecessor's reputation for being a "heavyweight" technology.

In 2010 Oracle purchased Sun Microsystems and became the steward of Java technology, including Java EE. Java EE 7 was released in 2013, after the Sun Microsystems acquisition by Oracle, simplifying enterprise software development even further and adding APIs to meet the new demands of enterprise software systems. Work on Java EE 8, the latest version of the Java EE specification, began shortly after Java EE 7 was released. In the beginning everything seemed to be going well; however, in early 2016 the Java EE community started noticing a lack of progress in Java EE 8, particularly in the Java Specification Requests (JSRs) led by Oracle.

The perceived lack of Java EE 8 progress became a big concern for many in the Java EE community. Since the specifications were owned by Oracle, there was no legal way for any other entity to continue making progress on Java EE 8. In response, several Java EE vendors, including big names such as IBM and Red Hat, got together and started the MicroProfile initiative, which aimed to introduce new APIs to Java EE with a focus on optimizing it for systems based on a microservices architecture. The idea wasn't to compete with Java EE per se, but to develop new specifications in the hope that they would eventually be added to Java EE proper. In addition to the big vendors' reaction, a grassroots organization called the Java EE Guardians was formed, led largely by prominent Java EE advocate Reza Rahman. The Java EE Guardians provided a way for Java EE developers and advocates to have a united, collective voice urging Oracle to either keep working on Java EE 8 or allow the community to continue the work themselves.
Nobody can say for sure how much influence the MicroProfile initiative and the Java EE Guardians had, but many speculate that Java EE would never have been donated to the Eclipse Foundation had it not been for these two initiatives.

One Standard, Multiple Implementations

It is worth mentioning that Java EE is not a framework per se, but a set of specifications for various APIs. Examples of Java EE specifications include the Java API for RESTful Web Services (JAX-RS), Contexts and Dependency Injection (CDI), and the Java Persistence API (JPA). There are several implementations of Java EE, commonly known as application servers or runtimes; examples include WebLogic, JBoss, WebSphere, Apache TomEE, GlassFish, and Payara. Since all of these implement the Java EE specifications, code written against one of these servers can easily be migrated to another, with minimal or no modifications. Coding against the Java EE standard provides protection against vendor lock-in. Once Jakarta EE is completely migrated to the Eclipse Foundation, it will continue being a specification with multiple implementations, keeping one of the biggest benefits of Java EE.

To become Java EE certified, application server vendors had to pay Oracle a fee to obtain a Technology Compatibility Kit (TCK), which is a set of tests vendors can use to make sure their products comply 100% with the Java EE specification. The fact that the TCK is closed source and not publicly available has been a source of controversy in the Java EE community. It is expected that the TCK will be made publicly available once the transition to the Eclipse Foundation is complete.

From Java EE to Jakarta EE

Once the donation was announced, it became clear that for legal reasons Java EE would have to be renamed, as Oracle owns the "Java" trademark. The Eclipse Foundation requested input from the community, and hundreds of suggestions were submitted. The Foundation made it clear that naming such a big project is no easy task; there are several constraints that may not be obvious to the casual observer, such as: the name must not be trademarked in any country, it must be catchy, and it must not spell profanity in any language. Out of hundreds of suggestions, the Eclipse Foundation narrowed the list down to two choices, "Enterprise Profile" and "Jakarta EE", and had the community vote for their favorite. "Jakarta EE" won by a fairly large margin. It is worth mentioning that the name "Jakarta" carries a bit of history in the Java world, as it used to be an umbrella project under the Apache Foundation; several very popular Java tools and libraries fell under the Jakarta umbrella, such as the Ant build tool, the Struts MVC framework, and many others.

Where we are in the transition

Ever since the announcement, the Eclipse Foundation, along with the Java EE community at large, has been furiously working on transitioning Java EE to the Eclipse Foundation. Transitioning such a huge and far-reaching project to an open source foundation is a massive undertaking, and as such it takes some time. Progress so far includes relicensing all Oracle-led Java EE technologies, including reference implementations (RIs), Technology Compatibility Kits (TCKs), and project documentation. In addition, 39 projects have been created under the Jakarta EE umbrella, corresponding to the 39 Java EE specifications being donated to the Eclipse Foundation.
Reference Implementations

Each Java EE specification must include a reference implementation, which proves that the requirements of the specification can be met by actual code. For example, the reference implementation for JSF is called Mojarra, the CDI reference implementation is called Weld, and the JPA reference implementation is called EclipseLink. Similarly, all other Java EE specifications have a corresponding reference implementation.

The 39 projects are in different stages of completion. A small minority are still in the proposal stage; some have provisioned committers and other resources, but code and other artifacts haven't been transitioned yet; the majority have had the initial contribution (code and related content) committed to the Eclipse Foundation's Git repository; and a few have had their first Release Review, which is a formal announcement of the project's release to the Eclipse Foundation and a request for feedback. The current status of all 39 projects can be found at https://www.eclipse.org/ee4j/status.php.

Additionally, the Jakarta EE working group was established. It includes Java EE implementation vendors, companies that either rely on Java EE or provide products or services complementary to it, and individuals interested in advancing Jakarta EE. It is worth noting that Pivotal, the company behind the popular Spring Framework, has joined the Jakarta EE Working Group. This is worth pointing out because the Spring Framework and Java EE have traditionally been perceived as competing technologies. With Pivotal joining, some are speculating that "the feud may soon be over", with Jakarta EE and Spring cooperating with each other instead of competing.

At the time of writing, it has been almost a year since the announcement that Java EE is moving to the Eclipse Foundation, and some may wonder what is holding up the process. Transitioning a project of such massive scale involves several tasks that may not be obvious to the casual observer, both legal and technical. For example, each individual source code file needs to be inspected to make sure it has the correct license header; project dependencies for each API need to be analyzed; for legal reasons, some of the Java EE technologies need to be renamed, and appropriate names need to be found; and build environments need to be created for each project under the Eclipse Foundation infrastructure. In short, there is more work than meets the eye.

What to expect when the transition is complete

The first release of Jakarta EE will be 100% compatible with Java EE. Existing Java EE applications, application servers, and runtimes will also be Jakarta EE compliant. Sometime after the announcement, the Eclipse Foundation surveyed the Java EE community about the direction Jakarta EE should take under the Foundation's guidance. The community overwhelmingly stated that it wants better support for cloud deployment as well as for microservices, so expect Jakarta EE to evolve to better support these technologies. Representatives from the Eclipse Foundation have stated that the release cadence for Jakarta EE will be more frequent than it was for Java EE under Oracle. In summary, the first version of Jakarta EE will be an open version of Java EE 8; after that we can expect better support for cloud and microservices development, as well as a faster release cadence.
Help Create the Future of Jakarta EE

Anyone, from large corporations to individual contributors, can contribute to Jakarta EE. I would like to invite interested readers to contribute! Here are a few ways to do so:

- Subscribe to the Jakarta EE community mailing list: jakarta.ee-community@eclipse.org
- Contribute to EE4J projects: https://github.com/eclipse-ee4j

You can also keep up to date with the latest Jakarta EE happenings by following Jakarta EE on Twitter at @JakartaEE or by visiting the Jakarta EE website at https://jakarta.ee

About the Author

David R. Heffelfinger is an independent consultant based in the Washington D.C. area. He is a Java Champion, an Apache NetBeans committer, and a former member of the JavaOne content committee. He has written several books on Java EE, application servers, NetBeans, and JasperReports. David is a frequent speaker at software development conferences such as JavaOne, Oracle Code, and NetBeans Day. You can follow him on Twitter at @ensode.

The Risk of Wearables - How Secure is Your Smartwatch

Sam Wood
10 Jun 2016
4 min read
Research suggests we're going to see almost 700 million smartwatches and wearable units shipped to consumers over the next few years. Wearables represent an exciting new frontier for developers, and a potential new cyber security risk. Smartwatches record a surprisingly large amount of data, and that data often isn't very secure.

What data do smartwatches collect?

Smartwatches are stuffed full of sensors to monitor your body and the world around you. A typical smartwatch might include any of the following:

- Gyroscope
- Accelerometer
- Light detection
- Heart rate monitor
- GPS
- Pedometer

Through SDKs like Apple's ResearchKit, or through firmware like that on the Fitbit, apps can be created that allow a wearable to monitor and collect this very personal physical data. This data collection is benign and useful, but it encompasses some very personal parts of an individual's life, such as health, daily activities, and even sleeping patterns. So is it secure?

Where is the data stored and how can hackers access it?

Smart wearables almost always link up to another 'host' device, and that device is almost always a mobile phone. Data from wearables is stored and analysed on that host device, and is in turn vulnerable to the myriad of attacks that can be undertaken against mobile devices. Potential attacks include:

- Direct USB connection: physically linking your wearable with a USB port, either after theft or with a fake charging station. Think it's unlikely? So-called 'juice jacking' is more common than you might think.
- WiFi, Bluetooth, and Near Field Communication: wearables are made possible by wireless networks, whether Bluetooth, WiFi, or NFC. This makes them especially vulnerable to the myriad of wireless attacks it is possible to execute, even something as simple as rooting a device over WiFi with SSH.
- Malware and web-based attacks: mobile devices remain highly vulnerable to malware and web-based attacks such as Stagefright.

Why is this data a security risk?

You might be thinking, "What do I care if some hacker knows how much I walk during the day?" But access to this data has some pretty scary implications. Our medical records are sealed tight for a reason: do you really want a hacker to be able to intuit the state of your health from your heart rate and exercise? What about if they then sell that data to your medical insurer?

Social engineering is one of the most used tools of anyone seeking access to a secure system or area. Knowing how a person slept, where they work out, when their heart rate has been elevated, even what sort of mood they might be in, all makes it that much easier for a hacker to manipulate human weakness. Even if we're not a potential gateway into a highly secured organization, this data can hypothetically be used by dodgy advertisers and products to target us when we're at our most vulnerable. For example, 'freemium' games often have highly sophisticated models for when to push their paid content and turn us into 'whales' who are always buying their product. Access to elements of our biometrics would only make this that much easier.

What does this mean?

As our lives integrate more and more with information technology, our data moves further and further outside of our own control. Wearables mean the recording of some of our most intimate details, and putting that data at risk in turn. Even when we work to keep it secure, it only takes one momentary lapse to put it at risk from anyone who's ever been interested in seeing it.
Information security is only going to get more vital for all of us.

Acknowledgements

This blog is based on the presentation delivered by Sam Phelps at Security BSides London 2016.

The pets and cattle analogy demonstrates how serverless fits into the software infrastructure landscape

Russ McKendrick
20 Feb 2018
8 min read
When you say serverless to someone, the first conclusion they jump to is that you are running your code without any servers. This can be quite a valid conclusion if you are using a public cloud service like AWS, but when it comes to running in your own environment, you can't avoid having to run on a server of some sort. This blog post is an extract from Kubernetes for Serverless Applications by Russ McKendrick.

Before we discuss what we mean by serverless and Functions as a Service, we should discuss how we got here. As people who work with me will no doubt tell you, I like to use the pets versus cattle analogy a lot, as it is quite an easy way to explain the differences between modern cloud infrastructures and a more traditional approach.

The pets, cattle, chickens, insects, and snowflakes analogy

I first came across the pets versus cattle analogy back in 2012, in a slide deck published by Randy Bias. The slide deck was used during a talk Randy Bias gave at the Cloudscaling conference on architectures for open and scalable clouds. Towards the end of the talk, he introduced the concept of pets versus cattle, which Randy attributes to Bill Baker, at the time an engineer at Microsoft. The slide deck primarily talks about scaling out and not up; let's go into this in a little more detail and discuss some of the additions that have been made since the presentation was first given five years ago.

Pets: the bare metal servers and virtual machines

Pets are typically what we, as system administrators, spend our time looking after. They are traditional bare metal servers or virtual machines:

- We name each server as you would a pet. For example, app-server01.domain.com and database-server01.domain.com.
- When our pets are ill, you take them to the vet. This is much like a system administrator rebooting a server, checking logs, and replacing the faulty components of a server to ensure that it is running healthily.
- You pay close attention to your pets for years, much like a server. You monitor for issues, patch them, back them up, and ensure they are fully documented.

There is nothing much wrong with running pets. However, you will find that the majority of your time is spent caring for them; this may be alright if you have a few dozen servers, but it does start to become unmanageable if you have a few hundred.

Cattle: the sort of instances you run on public clouds

Cattle are more representative of the instance types you should be running in public clouds such as Amazon Web Services (AWS) or Microsoft Azure, where you have auto scaling enabled.

- You have so many cattle in your herd you don't name them; instead they are given numbers and tagged so you can track them. In your instance cluster, you can also have too many to name, so, like cattle, you give them numbers and tag them. For example, an instance could be called ip123067099123.domain.com and tagged as app-server.
- When a member of your herd gets sick, you shoot it, and if your herd requires it, you replace it. In much the same way, if an instance in your cluster starts to have issues, it is automatically terminated and replaced with a replica.
- You do not expect the cattle in your herd to live as long as a pet typically would; likewise, you do not expect your instances to have an uptime measured in years.
- Your herd lives in a field and you watch it from afar, much like you don't monitor individual instances within your cluster; instead, you monitor the overall health of your cluster.
If your cluster requires additional resources, you launch more instances, and when you no longer require a resource, the instances are automatically terminated, returning you to your desired state.

Chickens: an analogy for containers

In 2015, Bernard Golden added to the pets versus cattle analogy by introducing chickens to the mix in a blog post titled Cloud Computing: Pets, Cattle and Chickens? Bernard suggested that chickens were a good term for describing containers alongside pets and cattle:

- Chickens are more efficient than cattle; you can fit a lot more of them into the same space your herd would use. In the same way, you can fit a lot more containers into your cluster, as you can launch multiple containers per instance.
- Each chicken requires fewer resources than a member of your herd when it comes to feeding. Likewise, containers are less resource-intensive than instances; they take seconds to launch and can be configured to consume less CPU and RAM.
- Chickens have a much lower life expectancy than members of your herd. While cluster instances can have an uptime of a few hours to a few days, it is more than possible that a container will have a lifespan of minutes.

Insects: an analogy for serverless

Keeping in line with the animal theme, Eric Johnson wrote a blog post for Rackspace which introduced insects. This term was introduced to describe serverless and Functions as a Service. Insects have a much lower life expectancy than chickens; in fact, some insects have a lifespan of only a few hours. This fits in with serverless and Functions as a Service, as these have a lifespan of seconds.

Snowflakes

Around the time Randy Bias gave his talk which mentioned pets versus cattle, Martin Fowler wrote a blog post titled SnowflakeServer. The post described every system administrator's worst nightmare:

- Every snowflake is unique and impossible to reproduce, just like that one server in the office that was built and not documented by that one guy who left several years ago.
- Snowflakes are delicate. Again, just like that one server: you dread having to log in to it to diagnose a problem, and you would never dream of rebooting it, as it may never come back up.

Bringing the pets, cattle, chickens, insects, and snowflakes analogy together...

When I explain the analogy to people, I usually sum it up by saying something like this: organizations which have pets are slowly moving their infrastructure to be more like cattle; those who are already running their infrastructure as cattle are moving towards chickens to get the most out of their resources; and those running chickens are going to be looking at how much work is involved in moving their application to run as insects by completely decoupling their application into individually executable components. But the most important takeaway is this: no one wants to, or should be, running snowflakes.

Serverless and insects

As already mentioned, using the word serverless gives the impression that servers will not be needed. Serverless is a term used to describe an execution model. When executing this model, you, as the end user, do not need to worry about which server your code is executed on, as all of the decisions on placement, server management, and capacity are abstracted away from you. It does not mean that you literally do not need any servers.
Now, there are some public cloud offerings which abstract so much of the management of servers away from the end user that it is possible to write an application which does not rely on any user-deployed services; the cloud provider will manage the compute resources needed to execute your code. Typically, these services are billed for the resources used to execute your code, in per-second increments.

So how does that explanation fit in with the insect analogy? Let's say I have a website that allows users to upload photos. As soon as the photos are uploaded, they are cropped to create several different sizes, which will be used to display as thumbnails and mobile-optimized versions on the site.

In the pets and cattle world, this would be handled by a server that is powered on 24/7, waiting for users to upload images. Now, this server probably is not just performing this one function; however, there is a risk that if several users all decide to upload a dozen photos each, this will cause load issues on the server where the function is being executed.

We could take the chickens approach, which has several containers running across several hosts to distribute the load. However, these containers would more than likely be running 24/7 as well, watching for uploads to process. This approach could allow us to horizontally scale the number of containers out to deal with an influx of requests.

Using the insects approach, we would not have any services running at all. Instead, the function should be triggered by the upload process. Once triggered, the function will run, save the processed images, and then terminate. As the developer, you should not have to care how the service was called or where the service was executed, so long as you have your processed images at the end of it. A minimal sketch of such a function follows below.
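To make the insect approach concrete, here is a minimal sketch of that image-resizing function, written as an AWS Lambda handler in TypeScript. It assumes an S3 upload trigger and the sharp image library (and esModuleInterop for the default import); the bucket name and target widths are invented for illustration, and error handling is kept to a minimum.

```typescript
// A sketch of a serverless thumbnail generator, assuming an AWS S3 upload trigger.
// Assumed dependencies: npm install aws-sdk sharp @types/aws-lambda
import { S3Event } from 'aws-lambda';
import * as AWS from 'aws-sdk';
import sharp from 'sharp';

const s3 = new AWS.S3();
const OUTPUT_BUCKET = 'my-site-thumbnails'; // hypothetical bucket name
const WIDTHS = [150, 600];                  // thumbnail and mobile-optimized sizes

export const handler = async (event: S3Event): Promise<void> => {
  for (const record of event.Records) {
    const bucket = record.s3.bucket.name;
    const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '));

    // Fetch the uploaded original.
    const original = await s3.getObject({ Bucket: bucket, Key: key }).promise();

    // Produce one resized copy per target width, then the function terminates.
    for (const width of WIDTHS) {
      const resized = await sharp(original.Body as Buffer).resize(width).toBuffer();
      await s3
        .putObject({
          Bucket: OUTPUT_BUCKET,
          Key: `${width}/${key}`,
          Body: resized,
          ContentType: key.endsWith('.png') ? 'image/png' : 'image/jpeg',
        })
        .promise();
    }
  }
};
```

The point of the sketch is the lifecycle: nothing runs until the upload event fires, and the function exits as soon as the processed images are saved.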

What are lightweight Architecture Decision Records?

Richard Gall
16 May 2018
4 min read
Architecture Decision Records (ADRs) document all the decisions made about software. Every change is recorded in a plain text file sitting inside a version control system (like GitHub). The record should be a complement to the information you can find in the version control system itself: the ADR provides the context and information around every decision made about a piece of software.

Why are lightweight Architecture Decision Records needed?

We are always making decisions when we build software. Even the simplest piece of software will have required the engineer to take a number of different decisions, and often these decisions aren't obvious. If you've ever had to work with code written by someone else, you're probably familiar with this sort of situation. You might even have found that when you come across someone else's code, you need to make a further decision: either you simply accept what has been written, merely surmising and assuming why it was done the way it was, or you decide to change it based on your own judgement. Neither option is ideal.

This is what Michael Nygard identified in his blog post in 2011, which is when the concept of Architecture Decision Records first emerged. An ADR should prevent situations like this from arising. That makes life easier for you. More importantly, it means that every decision is transparent to everyone involved. So, instead of blindly accepting something or immediately changing it, you can simply check the Architecture Decision Record. This will then inform how you proceed. Perhaps you need to make a change, but perhaps you also now understand the context of why something was built the way it was. Any questions you might have should be explicitly answered in the record. So, when you start asking yourself "why has she done it like that?", instead of floundering helplessly you can find the answer in the ADR.

Why lightweight Architecture Decision Records now?

Architecture Decision Records aren't a new thing; Nygard wrote his post all the way back in 2011, after all. But the context from which Nygard was writing in 2011 was very specific, and today it is mainstream. As we've moved away from monolithic architecture towards microservices or serverless, decision making has become more and more important in software engineering. This is a point well explained in a blog post here:

"The rise of lean development and microservices... complicates the ability to communicate architecture decisions. While these concepts are not inherently opposed to documentation, their processes often fail to effectively capture decision-making processes and reasoning. Another possible inefficiency when recording decisions is bad or out-of-date documentation. It's often a herculean effort to keep large, complex architecture documents current, making maintenance one of the most common barriers to entry."

ADRs are, then, a way of managing the complexity of modern software engineering. They are a response to a fundamental need to better communicate decisions. Most importantly, they codify decision-making within the development process. It is when they are lightweight and sit within the project itself that they are most effective.

Architecture Decision Record template

Architecture Decision Records must follow a template. Not only does that mean everyone is working off the same page, it also means people are actually more likely to document their decisions.
Think about it: if you're asked to note how you decided to do something without any guidelines, you're probably not going to do it at all. Below, you'll find an example Architecture Decision Record template. There are a number of different templates you can use, but it's probably best to sit down with your team and agree on what needs to be captured.

An Architecture Decision Record example template

- Date
- Decision makers [who was involved in the decision taken]
- Category [which part of the architecture does this decision pertain to]
- Contextual outline [explain why this decision was made; outline the key considerations and assumptions at play]
- Impact consequences [what does this decision mean for the project? What should someone reading this be aware of in terms of future decisions?]

As I've already noted, there are a huge number of ways you may want to approach this. Use this as a starting point; a short hypothetical example of the template filled in follows at the end of this article.

Read next: Enterprise Architecture Concepts | Reactive Programming and the Flux Architecture
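Here is the promised filled-in example of the template above. Every name, date, and detail is invented purely for illustration:

```
Date: 2018-05-16
Decision makers: Jane (backend lead), Raj (platform engineer)
Category: Data layer
Contextual outline: The reporting service needs read access to order data.
  Calling the orders service directly would couple the two services and add
  latency to reports; replicating the data keeps them decoupled at the cost
  of eventual consistency, which the reporting use case tolerates.
Impact consequences: Reports may lag live orders by a few seconds. Any new
  field added to orders must also be added to the replication event schema.
```

Note how a future reader can see not just what was decided, but which trade-off was accepted and what constraint it places on later changes.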

How should web developers learn machine learning?

Chris Tava
12 Jun 2017
6 min read
Do you have the motivation to learn machine learning? Given its relevance in today's landscape, you should be motivated to learn about this field. But if you're a web developer, how do you go about learning it? In this article, I show you how.

So, let's break this down. What is machine learning? You may be wondering why machine learning matters to you, or how you would even go about learning it. Machine learning is a smart way to create software that finds patterns in data without having to explicitly program for each condition. Sounds too good to be true? Well, it is. Quite frankly, many of the state-of-the-art solutions to the toughest machine learning problems don't even come close to reaching 100 percent accuracy and precision. This might not sound right to you if you've been trained, or have learned, to be precise and deterministic with the solutions you provide to the web applications you've worked on.

In fact, machine learning is such a challenging problem domain that data scientists describe problems as tractable or not. Computer algorithms can solve tractable problems in a reasonable amount of time with a reasonable amount of resources, whereas intractable problems simply can't be solved. Decades more of R&D are needed at a deep theoretical level to bring forward approaches and frameworks that will then take years to be applied and become useful to society. Did I scare you off? Nope? Okay, great. Then you accept this challenge to learn machine learning.

But before we dive into how to learn machine learning, let's answer the question: why does learning machine learning matter to you? Well, you're a technologist, and as a result it's your duty, your obligation, to be on the cutting edge. The technology world is moving at a fast clip, and it's accelerating. Take, for example, the shortening gap between public machine learning accomplishments against top gaming experts: it took a while to get to the 2011 Watson v. Jeopardy champion, and far less time passed between AlphaGo and Libratus.

So what's the significance for you and your professional software engineering career? Elementary, my dear Watson: just like the so-called digital divide between non-technical and technical laypeople, there is already the start of a technological divide between top systems engineers and the rest of the playing field in terms of making an impact and disrupting the way the world works. Don't believe me? When's the last time you programmed a self-driving car or a neural network that can guess your drawings?

Making an impact and how to learn machine learning

The toughest part of getting started with machine learning is figuring out what type of problem you have at hand, because you run the risk of jumping to potential solutions too quickly before understanding the problem. Sure, you can say this of any software design task, but the point can't be stressed enough when thinking about how to get machines to recognize patterns in data. There are specific applications of machine learning algorithms that solve a very specific problem in a very specific way, and it's difficult to know how to solve a meta-problem if you haven't studied the field from a conceptual standpoint. For me, a breakthrough in learning machine learning came from taking Andrew Ng's machine learning course on Coursera. So taking online courses can be a good way to start learning. If you don't have the time, you can learn about machine learning through numbers and images. Let's take a look.
Numbers

Conceptually speaking, predicting a pattern in a single variable based on a direct (otherwise known as linear) relationship with another piece of data is probably the easiest machine learning problem and solution to understand and implement. The following script predicts the amount of data that will be created based on fitting a sample data set to a linear regression model: https://github.com/ctava/linearregression-go. Because the sample data fits a linear model reasonably well, the machine learning program predicted that the data created in the fictitious Bob's system will grow from 2017 to 2018:

Bob's Data 2017: 4401
Bob's Data 2018 Prediction: 5707

This is great news for Bob, and for you. You see, machine learning isn't so tough after all. I'd like to encourage you to save data for a single variable (also known as a feature) to a CSV file and see if you can find that the data has a linear relationship with time. The following website is handy for calculating the number of days between two dates: https://www.timeanddate.com/date/duration.html. Be sure to choose your starting day and year appropriately at the top of the file to fit your data.

Images

Machine learning on images is exciting! It's fun to see what the computer comes up with in terms of pattern recognition, or image recognition. Here's an example that uses computer vision to detect that grumpy cat is actually a Persian cat: https://github.com/ctava/tensorflow-go-imagerecognition. If setting up TensorFlow from source isn't your thing, not to worry; here's a Docker image to start off with: https://github.com/ctava/tensorflow-go. Once you've followed the instructions in the readme.md file, simply:

1. Get github.com/ctava/tensorflow-go-imagerecognition
2. Run main.go -dir=./ -image=./grumpycat.jpg
3. Result: BEST MATCH: (66% likely) Persian cat

Sure, there is a whole discussion to be had on this topic alone, in terms of what TensorFlow is, what a tensor is, and what image recognition is. But I just wanted to spark your interest so that maybe you'll start to look at the amazing advances in the computer vision field.

Hopefully this has motivated you to learn more about machine learning, based on reading about the recent advances in the field and seeing two simple examples of predicting numbers and classifying images. I'd like to encourage you to keep up with data science in general.

About the Author

Chris Tava is a Software Engineering / Product Leader with 20 years of experience delivering applications for B2C and B2B businesses. His specialties include: program strategy, product and project management, agile software engineering, resource management, recruiting, people development, business analysis, machine learning, ObjC / Swift, Golang, Python, Android, Java, and JavaScript.

A five-level learning roadmap for Functional Programmers

Sugandha Lahoti
12 Apr 2019
4 min read
The following guide serves as an excellent learning roadmap for functional programming, and can be used to track your level of knowledge in the field. The guide was developed for the Fantasyland Institute of Learning for the LambdaConf conference, and was designed for statically-typed functional programming languages that implement category theory.

This post is extracted from the book Hands-On Functional Programming with TypeScript by Remo H. Jansen. In the book, you will understand the pros, cons, and core principles of functional programming in TypeScript.

The roadmap defines five levels of difficulty: Beginner, Advanced Beginner, Intermediate, Proficient, and Expert. Languages such as Haskell support category theory natively, but we can take advantage of category theory in TypeScript by implementing it or by using third-party libraries. Not all the items in the list are 100% applicable to TypeScript due to language differences, but most of them are.

Beginner

To reach the beginner level, you will need to master the following concepts and skills (a small TypeScript sketch of some of these appears at the end of this article):

Concepts: immutable data; second-order functions; constructing and destructuring; function composition; first-class functions and lambdas.

Skills: use second-order functions (map, filter, fold) on immutable data structures; destructure values to access their components; use data types to represent optionality; read basic type signatures; pass lambdas to second-order functions.

Advanced beginner

To reach the advanced beginner level, you will need to master the following concepts and skills:

Concepts: algebraic data types; pattern matching; parametric polymorphism; general recursion; type classes, instances, and laws; lower-order abstractions (equal, semigroup, monoid, and so on); referential transparency and totality; higher-order functions; partial application, currying, and point-free style.

Skills: solve problems without nulls, exceptions, or type casts; process and transform recursive data structures using recursion; use functional programming in the small; write basic monadic code for a concrete monad; create type class instances for custom data types; model a business domain with abstract data types (ADTs); write functions that take and return functions; reliably identify and isolate pure code from impure code; avoid introducing unnecessary lambdas and named parameters.

Intermediate

To reach the intermediate level, you will need to master the following concepts and skills:

Concepts: generalized algebraic data types; higher-kinded types; Rank-N types; folds and unfolds; higher-order abstractions (category, functor, monad); basic optics; existential types; embedded DSLs using combinators.

Skills: implement efficient persistent data structures; implement large functional programming applications; test code using generators and properties; write imperative code in a purely functional way through monads; use popular purely functional libraries to solve business problems; separate decisions from effects; write a simple custom lawful monad; write production medium-sized projects; use lenses and prisms to manipulate data; simplify types by hiding irrelevant data with existentials.

Proficient

To reach the proficient level, you will need to master the following concepts and skills:

Concepts: codata; (co)recursion schemes; advanced optics; dual abstractions (comonad); monad transformers; free monads and extensible effects; functional architecture; advanced functors (exponential, profunctors, contravariant); embedded domain-specific languages (DSLs) using generalized algebraic datatypes (GADTs); advanced monads (continuation, logic); type families and functional dependencies (FDs).

Skills: design a minimally powerful monad transformer stack; write concurrent and streaming programs; use purely functional mocking in tests; use type classes to modularly model different effects; recognize type patterns and abstract over them; use functional libraries in novel ways; use optics to manipulate state; write custom lawful monad transformers; use free monads/extensible effects to separate concerns; encode invariants at the type level; effectively use FDs/type families to create safer code.

Expert

To reach the expert level, you will need to master the following concepts and skills:

Concepts: high performance; kind polymorphism; generic programming; type-level programming; dependent types and singleton types; category theory; graph reduction; higher-order abstract syntax; compiler design for functional languages; profunctor optics.

Skills: design a generic, lawful library with broad appeal; prove properties manually using equational reasoning; design and implement a new functional programming language; create novel abstractions with laws; write distributed systems with certain guarantees; use proof systems to formally prove properties of code; create libraries that do not permit invalid states; use dependent typing to prove more properties at compile time; understand deep relationships between different concepts; profile, debug, and optimize purely functional code with minimal sacrifices.

Summary

This guide should be a good resource to guide you in your future functional programming learning efforts. Read more on this in the book Hands-On Functional Programming with TypeScript.

Also read: What makes functional programming a viable choice for artificial intelligence projects? | Why functional programming in Python matters: Interview with best selling author, Steven Lott | Introducing Coconut for making functional programming in Python simpler
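As promised in the beginner section, here is a minimal TypeScript sketch of a few beginner-level skills: second-order functions on immutable data, function composition, and representing optionality. All names and data are invented for illustration.

```typescript
// Function composition: pipe g into f.
const compose = <A, B, C>(f: (b: B) => C, g: (a: A) => B) => (a: A): C => f(g(a));

// Immutable data: readonly fields and ReadonlyArray rule out in-place mutation at compile time.
interface Order { readonly id: number; readonly total: number; readonly coupon?: string }

const orders: ReadonlyArray<Order> = [
  { id: 1, total: 40 },
  { id: 2, total: 25, coupon: 'SPRING' },
  { id: 3, total: 60 },
];

// Second-order functions: map, filter, and reduce (a fold) take functions as arguments.
const totals = orders.map(o => o.total);
const revenue = totals.reduce((acc, t) => acc + t, 0);
const withCoupon = orders.filter(o => o.coupon !== undefined);

// Optionality expressed in the type: the caller must handle the undefined case.
const findOrder = (id: number): Order | undefined => orders.find(o => o.id === id);

// Composed usage: build the revenue report from raw totals.
const sum = (xs: ReadonlyArray<number>): number => xs.reduce((a, b) => a + b, 0);
const describe = (n: number): string => `revenue: ${n}`;
const report = compose(describe, sum);

console.log(report(totals));                                // "revenue: 125"
console.log(revenue, withCoupon.length, findOrder(2)?.coupon); // 125 1 SPRING
```

Everything here is plain TypeScript; libraries such as fp-ts supply the higher-level abstractions (functors, monads, optics) that the later levels of the roadmap call for.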