Tech Guides

Building Better Bundles: Why process.env.NODE_ENV Matters for Optimized Builds

Mark Erikson
14 Nov 2016
5 min read
JavaScript developers are keenly aware of the need to reduce the size of deployed assets, especially in today's world of single-page apps. This usually means running increasingly complex JavaScript codebases through build steps that produce a minified bundle for deployment. However, if you read a typical tutorial on setting up a build tool like Browserify or Webpack, you'll see numerous references to a variable called process.env.NODE_ENV. Tutorials always talk about how this needs to be set to a value like "production" in order to produce a properly optimized bundle, but most articles never really spell out why this value matters and how it relates to build optimization. Here's an explanation of why process.env.NODE_ENV is used and how it fits into the typical build process.

Operating system environment variables are widely used as a method of configuring applications, especially as a way to activate behavior based on different deployment environments (such as development vs testing vs production). Node.js exposes the current process's environment variables to the script as an object called process.env. From there, the Express web server framework popularized using an environment variable called NODE_ENV as a flag to indicate whether the server should be running in "development" mode vs "production" mode. At runtime, the script looks up that value by checking process.env.NODE_ENV.

Because it was used within the Node ecosystem, browser-focused libraries also started using it to determine what environment they were running in, and using it to control optimizations and debug mode behavior. For example, React uses it as the equivalent of a C preprocessor #ifdef to act as conditional checking for debug logging and perf tracking, roughly like this:

```javascript
function someInternalReactFunction() {
  // do actual work part 1

  if (process.env.NODE_ENV === "development") {
    // do debug-only work, like recording perf stats
  }

  // do actual work part 2
}
```

If process.env.NODE_ENV is set to "production", all those if clauses will evaluate to false, and the potentially expensive debug code won't run. In addition, in conjunction with a tool like UglifyJS that does minification and removal of dead code blocks, a clause that is surrounded with if(process.env.NODE_ENV === "development") will become dead code in a production build and be stripped out, thus reducing bundled code size and execution time.

However, because the NODE_ENV environment variable and the corresponding process.env.NODE_ENV runtime field are normally server-only concepts, by default those values do not exist in client-side code. This is where build tools such as Webpack's DefinePlugin or the Browserify Envify transform come in, which perform search-and-replace operations on the original source code. Since these build tools are doing transformation of your code anyway, they can force the existence of global values such as process.env.NODE_ENV. (It's also important to note that because DefinePlugin in particular does a direct text replacement, the value given to DefinePlugin must include actual quotes inside of the string itself. Typically, this is done either with alternate quotes, such as '"production"', or by using JSON.stringify("production").)

Here's the key: the build tool could set that value to anything, based on any condition that you want, as you're defining your build configuration.
For example, I could have a webpack.production.config.js Webpack config file that always uses the DefinePlugin to set that value to "production" throughout the client-side bundle. It wouldn't have to check the actual current value of the "real" process.env.NODE_ENV variable while generating the Webpack config, because as the developer I would know that any time I'm doing a "production" build, I would want to set that value in the client code to "production".

This is where the "code I'm running as part of my build process" and "code I'm outputting from my build process" worlds come together. Because your build script is itself most likely JavaScript code running under Node, it's going to have process.env.NODE_ENV available to it as it runs. Because so many tools and libraries already share the convention of using that field's value to determine their dev-vs-production status, the common convention is to use the current value of that field inside the build script as it's running to also determine the value of that field as applied to the client code being transformed.

Ultimately, it all comes down to a few key points:

- NODE_ENV is a system environment variable that Node exposes into running scripts.
- It's used by convention to determine dev-vs-prod behavior, by server tools, build scripts, and client-side libraries alike.
- It's commonly used inside of build scripts (such as Webpack config generation) as both an input value and an output value, but the tie between the two is still just convention.
- Build tools generally do a transform step on the client-side code, replacing any references to process.env.NODE_ENV with the desired value. The resulting code will contain dead code blocks, as debug-only code is now inside an if(false)-type condition, ensuring that code doesn't execute at runtime.
- Minifier tools such as UglifyJS will strip out the dead code blocks, leaving the production bundle smaller.

So, the next time you see process.env.NODE_ENV mentioned in a build script, hopefully you'll have a much better idea why it's there.

About the author

Mark Erikson is a software engineer living in southwest Ohio, USA, where he patiently awaits the annual heartbreak from the Reds and the Bengals. Mark is author of the Redux FAQ, maintains the React/Redux Links list and Redux Addons Catalog, and occasionally tweets at @acemarke. He can usually be found in the Reactiflux chat channels, answering questions about React and Redux. He is also slightly disturbed by the number of third-person references he has written in this bio!

A Brief History of Minecraft Modding

Aaron Mills
03 Jun 2015
7 min read
Minecraft modding has been around since nearly the beginning. During that time it has gone through several transformations, or "eras." The early days and early mods looked very different from today. I first became involved in the community during mid-Beta, so everything that happened before then is secondhand knowledge. A great deal has been lost to the sands of time, but the important stops along the way are remembered, as we shall explore.

Minecraft has gone through several development stages over the years. Interestingly, these stages also correspond to the various "eras" of Minecraft modding. Minecraft Survival was first experienced as Survival Test during Classic, then again in the Indev stage, which gave way to Infdev, then to Alpha and Beta before finally reaching Release.

But before all that was Classic. Classic was released in May of 2009 and development continued into September of that year. Classic saw the introduction of Survival and Multiplayer. During this period of Minecraft's history, modding was in its infancy. On the one hand, server modding thrived during this stage, with several different server mods available. (These mods were the predecessors to Bukkit, which we will cover later.) Generally, the purpose of these mods was to give server admins more tools for maintaining their servers. On the other hand, client-side mods, ones that add new content, didn't really start appearing until the Alpha stage.

Alpha was released in late June of 2010, and it would continue for the rest of the year. Prior to Alpha came Indev and Infdev, but there isn't much evidence of any mods during that time period, possibly because of the lack of Multiplayer in Indev and Infdev. Alpha brought the return of Multiplayer, and during this time Minecraft began to see its first simple client mods. Initially it was just simple modification of existing content: adding support for higher-resolution textures, new arrow types, bug fixes, compass modifications, and so on. The mods were simple and small.

This began to change, though, beginning with the creation of the Minecraft Coder Pack, which was later renamed the Mod Coder Pack, commonly known as MCP. (One of the primary creators of MCP, Michael "Searge" Stoyke, now actually works for Mojang.) MCP saw its first release for Alpha 1.1.2_01 sometime in mid-2010. Despite being easily decompiled, Minecraft code was also obfuscated. Obfuscation is when you take all the meaningful names and words in the code and replace them with non-human-readable nonsense. The computer can still make sense of it just fine, but humans have a hard time. MCP resolved this limitation by applying meaningful names to the code, making modding significantly easier than ever before.

At the same time, but developing completely independently, was the server mod hMod, which gave some simple but absolutely necessary tools to server admins. However, hMod was in trouble, as the main dev was MIA. This situation eventually led to the creation of Bukkit, a server mod designed from the ground up to support "plugins" and do everything that hMod couldn't do. Bukkit was created by a group of people who were also eventually hired by Mojang: Nathan 'Dinnerbone' Adams, Erik 'Grum' Broes, Warren 'EvilSeph' Loo, and Nathan 'Tahg' Gilbert. Bukkit went on to become possibly the most popular Minecraft mod ever created. Many in fact believe its existence is largely responsible for the popularity of online Minecraft servers. However, it would remain largely incompatible with client-side mods for some time.
Not to be left behind, the client saw another major development late in the year: Risugami's ModLoader. ModLoader was transformational. Prior to the existence of ModLoader, if you wanted to use two mods, you would have to manually merge the code, line by line, yourself. There were many common tasks that couldn't be done without editing Minecraft's base code, things such as adding new blocks and items. ModLoader changed that by creating a framework where simple mods could hook into ModLoader code to perform common tasks that previously required base edits. It was simple, and it would never really expand beyond its original scope. Still, it led modding into a new era.

Minecraft Beta, what many call the "Golden Age" of modding, was released just before Christmas in 2010 and would continue through 2011. Beta saw the rise of many familiar mods that are still recognized today, including my own mod, Railcraft. IndustrialCraft, Buildcraft, Redpower, and Better Than Wolves all saw their start during this period as well. These were major mods that added many new blocks and features to Minecraft. Additionally, the massive Aether mod, which recently received a modern reboot, was also released during Beta. These mods and more redefined the meaning of "Minecraft mods." They existed on a completely new scale, sometimes completely changing the game.

But there were still flaws. Mods were still painful to create and painful to use. You couldn't use IndustrialCraft and Buildcraft at the same time; they just edited too many of the same base files. ModLoader only covered the most common base edits, barely touching the code, and not enough for a major mod. Additionally, to use a mod, you still had to manually insert code into the Minecraft jar, a task that turned many players off of modding.

Seeing that their mods couldn't be used together, the creators of several major mods launched a new project. They would call it Minecraft Forge. Started by Eloraam of Redpower and SpaceToad of Buildcraft, it would see rapid adoption by many of the major mods of the time. Forge built on top of ModLoader, greatly expanding the number of base hooks and allowing many more mods to work together than was previously possible. This ushered in the true "Golden Age" of modding, which would continue from Beta and into Release.

Minecraft 1.0 was released in November of 2011, heralding Minecraft's "official" release. Around the same time, client modding was undergoing a shift. Many of the most prominent developers were moving on to other things, including the entire Forge team. For the most part, their mods would survive without them, but some would not. Redpower, for example, ceased all development in late 2012. Eloraam, SpaceToad, and Flowerchild would hand the reins of Forge off to LexManos, a relatively unknown name at the time. The "Golden Age" was at an end, but it was replaced by an explosion of new mods, and modding was becoming more popular than ever.

The new Forge team, consisting mainly of LexManos and cpw, would bring many new innovations to modding. Eventually they even developed a replacement for Risugami's ModLoader, naming it ForgeModLoader and incorporating it into Forge. Users would no longer be required to muck around with Minecraft's internals to install mods. Innovation has continued to the present day, and mods for Minecraft have become too numerous to count. However, the picture for server mods hasn't been so rosy. Bukkit, the long-dominant server mod, suffered a killing blow in 2014.
Licensing conflicts developed between the original creators and maintainers, largely revolving around who "owned" the project after the primary maintainers resigned. Ultimately, one of the most prolific maintainers used a technicality to invalidate the project's rights to use his code, effectively killing the entire project. A replacement has yet to emerge, leaving the server community limping along on increasingly outdated code.

But one shouldn't be too concerned about the future. There have been challenges in the past, but nearly every time a project died, it was soon replaced by something even better. Minecraft has one of the largest, most vibrant, and most mainstream modding communities ever to exist. It's had a long and varied history, and this has been just a brief glimpse into that heritage. There are many more events, both large and small, that have helped shape the community. May the future of Minecraft continue to be as interesting.

About the author

Aaron Mills was born in 1983 and lives in the Pacific Northwest, which is a land rich in lore, trees, and rain. He has a Bachelor's Degree in Computer Science and studied at Washington State University Vancouver. He is best known for his work on the Minecraft mod Railcraft, but he has also contributed significantly to the Minecraft mods Forestry and Buildcraft, as well as making some contributions to the Minecraft Forge project.

Common problems in Delphi parallel programming

Pavan Ramchandani
27 Jul 2018
12 min read
This tutorial explains how to find performance bottlenecks and apply the correct algorithm to fix them when working with Delphi. It also teaches you how to improve your algorithms before taking you through parallel programming. The article is an excerpt from a book written by Primož Gabrijelčič, titled Delphi High Performance.

Never access UI from a background thread

Let's start with the biggest source of hidden problems: manipulating a user interface from a background thread. This is, surprisingly, quite a common problem, even more so as all Delphi resources on multithreaded programming will simply say never to do that. Still, it doesn't seem to get through to some programmers, and they will always try to find an excuse to manipulate a user interface from a background thread.

Indeed, there may be a situation where VCL or FireMonkey may be manipulated from a background thread, but you'll be treading on thin ice if you do that. Even if your code works with the current Delphi, nobody can guarantee that changes in graphical libraries introduced in future Delphis won't break your code. It is always best to cleanly decouple background processing from a user interface.

Let's look at an example which nicely demonstrates the problem. The ParallelPaint demo has a simple form, with eight TPaintBox components and eight threads. Each thread runs the same drawing code and draws a pattern into its own TPaintBox. As every thread accesses only its own Canvas, and no other user interface components, a naive programmer would assume that drawing into paint boxes directly from background threads would not cause problems. A naive programmer would be very much mistaken.

If you run the program, you will notice that although the code paints constantly into some of the paint boxes, others stop being updated after some time. You may even get a "Canvas does not allow drawing" exception. It is impossible to tell in advance which threads will continue painting and which will not. The following image shows an example of an output; the first two paint boxes in the first row, and the last one in the last row, were not being updated anymore when I grabbed the image.

The lines are drawn in the DrawLine method. It does nothing special; it just sets the color for that line and draws it. Still, that is enough to break the user interface when this is called from multiple threads at once, even though each thread uses its own Canvas:

```delphi
procedure TfrmParallelPaint.DrawLine(canvas: TCanvas; p1, p2: TPoint; color: TColor);
begin
  Canvas.Pen.Color := color;
  Canvas.MoveTo(p1.X, p1.Y);
  Canvas.LineTo(p2.X, p2.Y);
end;
```

Is there a way around this problem? Indeed there is. Delphi's TThread class implements a method, Queue, which executes some code in the main thread. Queue takes a procedure or anonymous method as a parameter and sends it to the main thread. After some short time, the code is then executed in the main thread. It is impossible to tell how much time will pass before the code is executed, but that delay will typically be very short, in the order of milliseconds. As it accepts an anonymous method, we can use the magic of variable capturing and write the corrected code, as shown here:

```delphi
procedure TfrmParallelPaint.QueueDrawLine(canvas: TCanvas; p1, p2: TPoint; color: TColor);
begin
  TThread.Queue(nil,
    procedure
    begin
      Canvas.Pen.Color := color;
      Canvas.MoveTo(p1.X, p1.Y);
      Canvas.LineTo(p2.X, p2.Y);
    end);
end;
```

In older Delphis you don't have such a nice Queue method, but only a version of Synchronize that accepts a normal method.
If you have to use this method, you cannot count on anonymous method mechanisms to handle parameters. Rather, you have to copy them to fields and then Synchronize a parameterless method operating on these fields. The following code fragment shows how to do that:

```delphi
procedure TfrmParallelPaint.SynchronizedDraw;
begin
  FCanvas.Pen.Color := FColor;
  FCanvas.MoveTo(FP1.X, FP1.Y);
  FCanvas.LineTo(FP2.X, FP2.Y);
end;

procedure TfrmParallelPaint.SyncDrawLine(canvas: TCanvas; p1, p2: TPoint; color: TColor);
begin
  FCanvas := canvas;
  FP1 := p1;
  FP2 := p2;
  FColor := color;
  TThread.Synchronize(nil, SynchronizedDraw);
end;
```

If you run the corrected program, the final result should always be similar to the following image, with all eight TPaintBox components showing a nicely animated image.

Simultaneous reading and writing

The next situation which I regularly see while looking at badly written parallel code is simultaneous reading and writing from/to a shared data structure, such as a list. The SharedList program demonstrates how things can go wrong when you share a data structure between threads. Actually, scrap that; it shows how things will go wrong if you do that.

This program creates a shared list, FList: TList<Integer>. Then it creates one background thread which runs the method ListWriter and multiple background threads, each running the ListReader method. Indeed, you can run the same code in multiple threads. This is perfectly normal behavior and is sometimes extremely useful.

The ListReader method is incredibly simple. It just reads all the elements in a list and does that over and over again. As I've mentioned before, the code in my examples makes sure that problems in multithreaded code really do occur, but because of that, my demo code most of the time also looks terribly stupid. In this case, the reader just reads and reads the data because that's the best way to expose the problem:

```delphi
procedure TfrmSharedList.ListReader;
var
  i, j, a: Integer;
begin
  for i := 1 to CNumReads do
    for j := 0 to FList.Count - 1 do
      a := FList[j];
end;
```

The ListWriter method is a bit different. It also loops around, but it also sleeps a little inside each loop iteration. After the Sleep, the code either adds to the list or deletes from it. Again, this is designed so that the problem is quick to appear:

```delphi
procedure TfrmSharedList.ListWriter;
var
  i: Integer;
begin
  for i := 1 to CNumWrites do
  begin
    Sleep(1);
    if FList.Count > 10 then
      FList.Delete(Random(10))
    else
      FList.Add(Random(100));
  end;
end;
```

If you start the program in a debugger and click on the Shared lists button, you'll quickly get an EArgumentOutOfRangeException exception. A look at the stack trace will show that it appears in the line a := FList[j];. In retrospect, this is quite obvious. The code in ListReader starts the inner for loop and reads FList.Count. At that time, FList has 11 elements, so Count is 11. At the end of the loop, the code tries to read FList[10], but in the meantime ListWriter has deleted one element and the list now only has 10 elements. Accessing element [10] therefore raises an exception.

We'll return to this topic later, in the section about locking. For now you should just keep in mind that sharing data structures between threads causes problems.

Sharing a variable

OK, so rule number two is "shared structures bad." What about sharing a simple variable? Nothing can go wrong there, right? Wrong! There are actually multiple ways something can go wrong. The program IncDec demonstrates one of the bad things that can happen.
The code contains two methods: IncValue and DecValue. The former increments a shared FValue: integer; some number of times, and the latter decrements it the same number of times:

```delphi
procedure TfrmIncDec.IncValue;
var
  i: integer;
  value: integer;
begin
  for i := 1 to CNumRepeat do
  begin
    value := FValue;
    FValue := value + 1;
  end;
end;

procedure TfrmIncDec.DecValue;
var
  i: integer;
  value: integer;
begin
  for i := 1 to CNumRepeat do
  begin
    value := FValue;
    FValue := value - 1;
  end;
end;
```

A click on the Inc/Dec button sets the shared value to 0, runs IncValue, then DecValue, and logs the result:

```delphi
procedure TfrmIncDec.btnIncDec1Click(Sender: TObject);
begin
  FValue := 0;
  IncValue;
  DecValue;
  LogValue;
end;
```

I know you can all tell what FValue will hold at the end of this program. Zero, of course. But what will happen if we run IncValue and DecValue in parallel? That is, actually, hard to predict! A click on the Multithreaded button does almost the same, except that it runs IncValue and DecValue in parallel. How exactly that is done is not important at the moment (but feel free to peek into the code if you're interested):

```delphi
procedure TfrmIncDec.btnIncDec2Click(Sender: TObject);
begin
  FValue := 0;
  RunInParallel(IncValue, DecValue);
  LogValue;
end;
```

Running this version of the code may still sometimes put zero in FValue, but that will be extremely rare. You most probably won't be able to see that result unless you are very lucky. Most of the time, you'll just get a seemingly random number from the range -10,000,000 to 10,000,000 (which is the value of the CNumRepeat constant). In the following image, the first number is the result of the single-threaded code, while all the rest were calculated by the parallel version of the algorithm.

To understand what's going on, you should know that Windows (and every other operating system) does many things at once. At any given time, there are hundreds of threads running in different programs, and they are all fighting for the limited number of CPU cores. As our program is the active one (it has focus), its threads will get most of the CPU time, but they'll still sometimes be paused for some amount of time so that other threads can run.

Because of that, it can easily happen that IncValue reads the current value of FValue into value (let's say that the value is 100) and is then paused. DecValue reads the same value and then runs for some time, decrementing FValue. Let's say that it gets it down to -20,000. (That is just a number without any special meaning.) After that, the IncValue thread is awakened. It should increment the value to -19,999, but instead of that it adds 1 to 100 (stored in value), gets 101, and stores that into FValue. Ka-boom! In each repetition of the program, this will happen at different times and will cause a different result to be calculated. (This lost-update race is not specific to Delphi; a minimal Python sketch of the same effect appears at the end of this article.)

You may complain that the problem is caused by the two-stage increment and decrement, but you'd be wrong. I dare you—go ahead, change the code so that it modifies FValue with Inc(FValue) and Dec(FValue), and it still won't work correctly. Well, I hear you say, so I shouldn't even modify one variable from two threads at the same time? I can live with that. But surely it is OK to write into a variable from one thread and read from another? The answer, as you can probably guess given the general tendency of this section, is again—no, you may not.
There are some situations where this is OK (for example, when a variable is only one byte long) but, in general, even simultaneous reading and writing can be a source of weird problems. The ReadWrite program demonstrates this problem. It has a shared buffer, FBuf: Int64, and a pointer variable used to read and modify the data, FPValue: PInt64. At the beginning, the buffer is initialized to an easily recognized number and the pointer variable is set to point to the buffer:

```delphi
FPValue := @FBuf;
FPValue^ := $7777777700000000;
```

The program runs two threads. One just reads from the location and stores all the read values into a list. This list is created with its Sorted and Duplicates properties set in a way that prevents it from storing duplicate values:

```delphi
procedure TfrmReadWrite.Reader;
var
  i: integer;
begin
  for i := 1 to CNumRepeat do
    FValueList.Add(FPValue^);
end;
```

The second thread repeatedly writes two values into the shared location:

```delphi
procedure TfrmReadWrite.Writer;
var
  i: integer;
begin
  for i := 1 to CNumRepeat do
  begin
    FPValue^ := $7777777700000000;
    FPValue^ := $0000000077777777;
  end;
end;
```

At the end, the contents of the FValueList list are logged on the screen. We would expect to see only two values—$7777777700000000 and $0000000077777777. In reality, we see four, as the following screenshot demonstrates.

The reason for that strange result is that Intel processors in 32-bit mode can't write a 64-bit number (such as an Int64) in one step. In other words, reading and writing 64-bit numbers in 32-bit code is not atomic. When multithreading programmers talk about something being atomic, they mean that an operation will execute in one indivisible step. Any other thread will either see the state before the operation or the state after the operation, but never some undefined intermediate state.

How do the values $7777777777777777 and $0000000000000000 appear in the test application? Let's say that FPValue^ contains $7777777700000000. The code then starts writing $0000000077777777 into FPValue^ by firstly storing $77777777 into the bottom four bytes. After that it starts writing $00000000 into the upper four bytes of FPValue^, but in the meantime Reader reads the value and gets $7777777777777777. In a similar way, Reader will sometimes see $0000000000000000 in FPValue^.

We'll look into a way to solve this situation immediately, but in the meantime, you may wonder—when is it okay to read/write from/to a variable at the same time? Sadly, the answer is—it depends. Not even just on the CPU family (Intel and ARM processors behave completely differently), but also on the specific architecture used in a processor. For example, older and newer Intel processors may not behave the same in that respect.

You can always depend on access to byte-sized data being atomic, but that is all. Access (reads and writes) to larger quantities of data (words, integers) is atomic only if the data is correctly aligned. You can access word-sized data atomically if it is word-aligned, and integer data if it is double-word-aligned. If the code was compiled in 64-bit mode, you can also atomically access Int64 data if it is quad-word-aligned. When you are not using data packing (such as packed records), the compiler will take care of alignment, and data access should automatically be atomic. You should, however, still check the alignment in code, if nothing else to prevent stupid programming errors.
If you want to write and read larger amounts of data, modify the data, or work on shared data structures, correct alignment will not be enough. You will need to introduce synchronization into your program.

If you found this post useful, do check out the book Delphi High Performance to learn more about the intricacies of high-performance programming with Delphi.

Delphi: memory management techniques for parallel programming
Parallel Programming Patterns
Concurrency programming 101: Why do programmers hang by a thread?
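As promised above, here is a minimal, hedged illustration of the same lost-update race outside Delphi, sketched in Python only because it is compact and runnable anywhere; the Delphi demos remain the article's actual examples, and the iteration count is an arbitrary assumption:

```python
import threading

REPEATS = 1_000_000  # arbitrary; large enough for thread switches to interleave
shared = 0

def inc() -> None:
    global shared
    for _ in range(REPEATS):
        value = shared       # read...
        shared = value + 1   # ...then write back: a non-atomic two-step update

def dec() -> None:
    global shared
    for _ in range(REPEATS):
        value = shared
        shared = value - 1

threads = [threading.Thread(target=inc), threading.Thread(target=dec)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Updates are lost whenever the scheduler switches threads between the
# read and the write, so the final value is almost never the expected 0.
print(shared)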

How to create a web designer resume that lands you a job

Guest Contributor
19 Jul 2018
7 min read
Clearly, there's a lot of competition for web designer jobs, with salaries rising each year, so it's crucial you find a way to make your resume stand out. You need to balance creativity with professionalism, all while making sure your experience, personality, and skills shine through. Over the years, people have created numerous resumes for web designers, and everybody knows what it takes to get that job. Follow this guide to write a creative and attention-grabbing web designer resume.

Note: All images in this article are courtesy of zety.com's resume templates page and guide.

Format is a window to your mind

Because you're applying for a design position, the look and format of your resume are very important. They give prospective employers a sense of your design philosophy. Use white space and clear, legible fonts to help a hiring manager easily find your information. To stand apart from the crowd, avoid using a word processor to create your resume. Instead, use InDesign or Illustrator to design something creative and less cookie-cutter. Submit your resume in PDF format to avoid formatting errors that will ruin the look of your document. Sometimes a job posting will specifically forbid submitting in PDF, though, so watch out for that.

Highlight your experience

The key to writing a good experience section for a web designer is keeping it brief and relevant, while highlighting your career achievements. Add no more than three to five bullet points with measurable achievements per past position. Don't just list your past company and when you worked there. "Discuss what you did and include some tangible accomplishments. If you created a custom graphic set for clients, mention that, and also what percentage were satisfied (hopefully it's very high). Prove you have the necessary experience to do the job well," advises Terrence Wood, resume proofreader at Paper Fellows.

Education

Your education is not nearly as important as your experience, but you still need to present it right. That means using this section to talk about your strong points. Include coursework and achievements that are relevant to the job description. Maybe you wrote a column about web design in your college's newspaper; things like this help you stand out to a hiring manager, especially if you are just starting your career in web design. Also include your GPA here.

Showcase your skills

Everyone is going to list their skills; if you want that interview, you need to do something to make yourself seem exceptional. The first thing you'll do is take a good look at the job description and note what skills and responsibilities it mentions. Now you know what skills to include, but including them is not enough. You need to prove you have them by giving examples of times you used them at past jobs. Don't just list that you are proficient in Adobe Creative Suite; prove it by describing how you used it to tackle web design for 90% of client projects. Up the ante even more and include samples of your past work. This is a good spot to include your portfolio, any certifications that you have, or anything that can help you let them know just how good you are at your job and confirm your skills. If you've had a predominantly freelance career, list the companies or individuals that you've worked for and include examples of work for each of them. You can do the same thing if you had a '9 to 5' career: simply list your previous jobs and show examples of your best work.
It's also a good idea to let your potential employers know about any future skills you plan to acquire.

Use infographics

You can give your resume a really unique look by using infographics, while still keeping it professional. Divide your resume layout into a grid with two columns and four or five rows. Now place one category of data into each square of your grid. Next, transform each section into an infographic. Use icons to represent different skills or awards. You can use your design software's shape tools to create charts and graphs. Programs like Adobe InDesign can be used to create your infographics. You can also use Canva or Visme.

Keep it professional

It's a great idea to inject some creativity into your resume. You want to stand out, and after all, it is a design resume. But it's also important to balance that creativity with professionalism. A hiring manager will make some judgements about your personality based on how your resume looks. Be subtle in your creativity. Use colors that are easy on the eyes, and keep fonts reasonable. There can be a lot of beauty in simplicity. Stick to the basics: place content in an order familiar to recruiters to avoid making them work for the information they need. Remember that your primary goal is to communicate your information clearly.

Write a cover letter

Some people say that it's not really necessary, but it's your chance to stand out. Maybe there is something that isn't on your resume, or you want to seem more appealing and human; a cover letter is a good chance to do all of that. A cover letter is a great place to elaborate on how you'll be able to meet the employer's needs. It's also a good opportunity to show them that you have done your research and know their company.

Writing resources for your resume

- ViaWriting and Writing Populist: These are grammar resources you can use to check over your resume for grammatical mistakes.
- Resume Service: This is a resume service you can use to improve the quality of your web designer resume.
- Boomessays and EssayRoo: These are online proofreading tools, suggested by Revieweal, that you can use to make sure your resume is polished and free of errors.
- My Writing Way and Academadvisor: Check out these career writing blogs for tips and ideas on how to improve your resume. You'll find posts here by people who have written web designer resumes before.
- OXEssays and UKWritings: These are editing tools, recommended by UKWritings review, that you can use to go over your resume for typos and other errors.
- StateofWriting and SimpleGrad: Check out these writing guides for suggestions on how to improve your resume. Even experienced writers can benefit from some extra guidance now and then.
- ResumeLab: Learn what to include in a cover letter.

The job market for web designers is competitive. Make sure you lead with your best qualities and skills, and be sure they fit the job description as closely as possible. Be creative, but ensure you keep your resume professional as well. Now go have fun using this guide to write a creative and attention-grabbing web design resume.

Author bio

Grace Carter is a resume proofreader at Assignment Writing Service and at Australian Help, where she helps with CV editing and cover letter proofreading. Grace also teaches business writing at the Academized educational website.

Is your web design responsive?
"Be objective, fight for the user, and test with real users on the go!" – Interview with design purist, Will Grant
Tips and tricks to optimize your responsive web design

Top 7 libraries for geospatial analysis

Aarthi Kumaraswamy
22 May 2018
12 min read
The term geospatial refers to information that is located on the earth's surface. This can include, for example, the position of a cellphone tower, the shape of a road, or the outline of a country. Geospatial data often associates some piece of information with a particular location. Geospatial development is the process of writing computer programs that can access, manipulate, and display this type of information.

Internally, geospatial data is represented as a series of coordinates, often in the form of latitude and longitude values. Additional attributes, such as temperature, soil type, height, or the name of a landmark, are also often present. There can be many thousands (or even millions) of data points for a single set of geospatial data. In addition to the prosaic tasks of importing geospatial data from various external file formats and translating data from one projection to another, geospatial data can also be manipulated to solve various interesting problems. Obvious examples include calculating the distance between two points, calculating the length of a road, or finding all data points within a given radius of a selected point. We use libraries to solve all of these problems and more. Today we will look at the major libraries used to process and analyze geospatial data:

- GDAL/OGR
- GEOS
- Shapely
- Fiona
- Python Shapefile Library (pyshp)
- pyproj
- Rasterio
- GeoPandas

This is an excerpt from the book Mastering Geospatial Analysis with Python by Paul Crickard, Eric van Rees, and Silas Toms.

Geospatial Data Abstraction Library (GDAL) and the OGR Simple Features Library

The Geospatial Data Abstraction Library (GDAL)/OGR Simple Features Library combines two separate libraries that are generally downloaded together as GDAL. This means that installing the GDAL package also gives access to OGR functionality. The reason GDAL is covered first is that other packages were written after GDAL, so chronologically, it comes first. As you will notice, some of the packages covered in this post extend GDAL's functionality or use it under the hood.

GDAL was created in the 1990s by Frank Warmerdam and saw its first release in June 2000. Later, the development of GDAL was transferred to the Open Source Geospatial Foundation (OSGeo). Technically, GDAL is a little different from your average Python package, as the GDAL package itself was written in C and C++, meaning that in order to be able to use it in Python, you need to compile GDAL and its associated Python bindings. However, using conda and Anaconda makes it relatively easy to get started quickly. Because it was written in C and C++, the online GDAL documentation is written for the C++ version of the libraries. For Python developers, this can be challenging, but many functions are documented and can be consulted with the built-in pydoc utility, or by using the help function within Python.

Because of its history, working with GDAL in Python also feels a lot like working in C++ rather than pure Python. For example, a naming convention in OGR is different from Python's, since you use uppercase for functions instead of lowercase. These differences explain the choice of some of the other Python libraries such as Rasterio and Shapely, which are also covered in this chapter, and which have been written from a Python developer's perspective but offer the same GDAL functionality. GDAL is a massive and widely used data library for raster data.
It supports the reading and writing of many raster file formats, with the latest version counting up to 200 different file formats that are supported. Because of this, it is indispensable for geospatial data management and analysis. Used together with other Python libraries, GDAL enables some powerful remote sensing functionalities. It's also an industry standard and is present in commercial and open source GIS software.

The OGR library is used to read and write vector-format geospatial data, supporting reading and writing data in many different formats. OGR uses a consistent model to be able to manage many different vector data formats. You can use OGR to do vector reprojection, vector data format conversion, vector attribute data filtering, and more. GDAL/OGR libraries are not only useful for Python programmers but are also used by many GIS vendors and open source projects. The latest GDAL version at the time of writing is 2.2.4, which was released in March 2018.

GEOS

The Geometry Engine Open Source (GEOS) is the C/C++ port of a subset of the Java Topology Suite (JTS) and selected functions. GEOS aims to contain the complete functionality of JTS in C++. It can be compiled on many platforms, including Python. As you will see later on, the Shapely library uses functions from the GEOS library. In fact, there are many applications using GEOS, including PostGIS and QGIS. GeoDjango also uses GEOS, as well as GDAL, among other geospatial libraries. GEOS can also be compiled with GDAL, giving OGR all of its capabilities.

The JTS is an open source geospatial computational geometry library written in Java. It provides various functionalities, including a geometry model, geometric functions, spatial structures and algorithms, and I/O capabilities. Using GEOS, you have access to the following capabilities: geospatial functions (such as within and contains), geospatial operations (union, intersection, and many more), spatial indexing, Open Geospatial Consortium (OGC) well-known text (WKT) and well-known binary (WKB) input/output, the C and C++ APIs, and thread safety.

Shapely

Shapely is a Python package for manipulation and analysis of planar features, using functions from the GEOS library (the engine of PostGIS) and a port of the JTS. Shapely is not concerned with data formats or coordinate systems but can be readily integrated with packages that are. Shapely only deals with analyzing geometries and offers no capabilities for reading and writing geospatial files. It was developed by Sean Gillies, who was also the person behind Fiona and Rasterio.

Shapely supports eight fundamental geometry types that are implemented as classes in the shapely.geometry module: points, multipoints, linestrings, multilinestrings, linearrings, multipolygons, polygons, and geometry collections. Apart from representing these geometries, Shapely can be used to manipulate and analyze geometries through a number of methods and attributes. Shapely has mainly the same classes and functions as OGR for dealing with geometries. The difference between Shapely and OGR is that Shapely has a more Pythonic and very intuitive interface, is better optimized, and has well-developed documentation. With Shapely, you're writing pure Python, whereas with GEOS, you're writing C++ in Python. For data munging, a term used for data management and analysis, you're better off writing in pure Python rather than C++, which explains why these libraries were created. For more information on Shapely, consult the documentation.
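To make the Shapely description concrete, here is a minimal sketch of creating and analyzing planar geometries; the coordinates are arbitrary examples, not taken from the book:

```python
from shapely.geometry import Point, Polygon

# Two simple planar features with arbitrary example coordinates.
square = Polygon([(0, 0), (4, 0), (4, 4), (0, 4)])
point = Point(2, 2)

# Geometric predicates and operations backed by the GEOS engine.
print(square.contains(point))            # True
print(square.area)                       # 16.0
circle = point.buffer(1.0)               # approximate circle around the point
print(square.intersection(circle).area)  # ~pi, since the circle lies inside the square
```

Note that Shapely stays deliberately silent about coordinate systems here: these are just numbers on a plane, which is exactly the division of labor the article describes.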
The documentation also has detailed information on installing Shapely for different platforms and on how to build Shapely from source for compatibility with other modules that depend on GEOS. This refers to the fact that installing Shapely may require you to upgrade NumPy and GEOS if these are already installed.

Fiona

Fiona is an API for OGR. It can be used for reading and writing data formats. The main reason for using it instead of OGR is that it's closer to Python than OGR, as well as more dependable and less error-prone. It makes use of two markup languages, WKT and WKB, for representing spatial information with regard to vector data. As such, it can be combined well with other Python libraries such as Shapely: you would use Fiona for input and output, and Shapely for creating and manipulating geospatial data. While Fiona is Python compatible and our recommendation, users should also be aware of some of the disadvantages. It is more dependable than OGR because it uses Python objects for copying vector data instead of C pointers, but those objects also use more memory, which affects performance.

Python Shapefile Library (pyshp)

The Python Shapefile Library (pyshp) is a pure Python library used to read and write shapefiles. The pyshp library's sole purpose is to work with shapefiles, and it only uses the Python standard library. You cannot use it for geometric operations. If you're only working with shapefiles, this one-file-only library is simpler than using GDAL.

pyproj

pyproj is a Python package that performs cartographic transformations and geodetic computations. It is a Cython wrapper providing Python interfaces to PROJ.4 functions, meaning you can access an existing library of C code from Python. PROJ.4 is a projection library that transforms data among many coordinate systems and is also available through GDAL and OGR. The reason that PROJ.4 is still popular and widely used is twofold:

- Firstly, because it supports so many different coordinate systems
- Secondly, because of the routes it provides to do this—Rasterio and GeoPandas, two Python libraries covered next, both use pyproj and thus PROJ.4 functionality under the hood

The difference between using PROJ.4 separately instead of using it through a package such as GDAL is that it enables you to re-project individual points; packages using PROJ.4 do not offer this functionality. The pyproj package offers two classes: the Proj class and the Geod class. The Proj class performs cartographic computations, while the Geod class performs geodetic computations.

Rasterio

Rasterio is a GDAL- and NumPy-based Python library for raster data, written with the Python developer in mind rather than C, using Python language types, protocols, and idioms. Rasterio aims to make GIS data more accessible to Python programmers and helps GIS analysts learn important Python standards. Rasterio relies on concepts of Python rather than GIS.

Rasterio is an open source project from the satellite team of Mapbox, a provider of custom online maps for websites and applications. The name of this library should be pronounced raster-i-o rather than ras-te-rio. Rasterio came into being as a result of a project called the Mapbox Cloudless Atlas, which aimed to create a pretty-looking basemap from satellite imagery. One of the software requirements was to use open source software and a high-level language with handy multi-dimensional array syntax. Although GDAL offers proven algorithms and drivers, developing with GDAL's Python bindings feels a lot like C++.
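As a small illustration of the single-point re-projection described above, here is a minimal pyproj sketch. It uses the Transformer API from pyproj 2+ rather than the older Proj-class style the article's era would have used, and the coordinates are arbitrary examples:

```python
from pyproj import Transformer

# Build a transformer from WGS84 lon/lat (EPSG:4326) to Web Mercator (EPSG:3857).
# always_xy=True fixes the axis order to (longitude, latitude).
transformer = Transformer.from_crs("EPSG:4326", "EPSG:3857", always_xy=True)

lon, lat = 4.9041, 52.3676  # arbitrary example point (Amsterdam)
x, y = transformer.transform(lon, lat)
print(f"{x:.1f}, {y:.1f}")  # projected coordinates in meters
```

This is exactly the "re-project individual points" capability the text contrasts with whole-dataset reprojection in GDAL-based packages.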
Therefore, Rasterio was designed to be a Python package at the top, with extension modules (using Cython) in the middle, and a GDAL shared library at the bottom. Other requirements for the raster library were the ability to read and write NumPy ndarrays to and from data files, and the use of Python types, protocols, and idioms instead of C or C++, to free programmers from having to code in two languages. For georeferencing, Rasterio follows the lead of pyproj. There are a couple of capabilities added on top of reading and writing, one of them being a features module. Reprojection of geospatial data can be done with the rasterio.warp module. Rasterio's project homepage can be found on GitHub.

GeoPandas

GeoPandas is a Python library for working with vector data. It is based on the pandas library that is part of the SciPy stack. SciPy is a popular library for data inspection and analysis, but unfortunately, it cannot read spatial data. GeoPandas was created to fill this gap, taking pandas data objects as a starting point. The library also adds functionality from geographical Python packages.

GeoPandas offers two data objects: a GeoSeries object that is based on a pandas Series object, and a GeoDataFrame, based on a pandas DataFrame object but adding a geometry column for each row. Both GeoSeries and GeoDataFrame objects can be used for spatial data processing, similar to spatial databases. Read and write functionality is provided for almost every vector data format. Also, because both Series and DataFrame objects are subclasses of pandas data objects, you can use the same properties to select or subset data, for example .loc or .iloc. (A minimal GeoPandas sketch follows at the end of this article.)

GeoPandas is a library that employs the capabilities of newer tools, such as Jupyter Notebooks, pretty well, whereas GDAL enables you to interact with data records inside of vector and raster datasets through Python code. GeoPandas takes a more visual approach by loading all records into a GeoDataFrame so that you can see them all together on your screen. The same goes for plotting data. These functionalities were lacking in Python 2, as developers were dependent on IDEs without extensive data visualization capabilities, which are now available with Jupyter Notebooks.

We've provided an overview of the most important open source packages for processing and analyzing geospatial data. The question then becomes when to use a certain package and why. GDAL, OGR, and GEOS are indispensable for geospatial processing and analysis, but they were not written in Python, and so they require Python bindings for Python developers. Fiona, Shapely, and pyproj were written to solve these problems, as was the newer Rasterio library. For a more Pythonic approach, these newer packages are preferable to the older C++ packages with Python bindings (although they're used under the hood).

Now that you have an idea of what options are available for a certain use case and why one package is preferable over another, here's something you should always remember. As is often the way in programming, there might be multiple solutions for one particular problem. For example, when dealing with shapefiles, you could use pyshp, GDAL, Shapely, or GeoPandas, depending on your preference and the problem at hand.

Introduction to Data Analysis and Libraries
15 Useful Python Libraries to make your Data Science tasks Easier
"Pandas is an effective tool to explore and analyze data": An interview with Theodore Petrou
Using R to implement Kriging – A Spatial Interpolation technique for Geostatistics data
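As referenced above, here is the promised minimal GeoPandas sketch. The shapefile path is a hypothetical placeholder; any local vector dataset readable by Fiona/OGR will do:

```python
import geopandas as gpd

# Hypothetical path: substitute any local shapefile or GeoJSON file.
gdf = gpd.read_file("countries.shp")

print(gdf.head())  # attribute table plus a geometry column
print(gdf.crs)     # coordinate reference system of the data

# Familiar pandas-style subsetting works on a GeoDataFrame...
subset = gdf.iloc[:10]

# ...alongside spatial operations, such as re-projecting to Web Mercator.
projected = subset.to_crs(epsg=3857)
print(projected.geometry.area.head())  # areas in projected (square-meter) units
```

The design choice the article highlights is visible here: one object holds both the attribute table and the geometries, so pandas-style selection and spatial operations compose naturally.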

7 AI tools mobile developers need to know

Bhagyashree R
20 Sep 2018
11 min read
Advancements in artificial intelligence (AI) and machine learning have enabled the evolution of the mobile applications we see today. With AI, apps are now capable of recognizing speech, images, and gestures, and of translating voices with extraordinary success rates. With a number of apps hitting the app stores, it is crucial that they stand apart from competitors by meeting the rising standards of consumers. To stay relevant, it is important that mobile developers keep up with these advancements in artificial intelligence.

As AI and machine learning become increasingly popular, there is a growing selection of tools and software available for developers to build their apps with. These cloud-based and device-based artificial intelligence tools provide developers a way to power their apps with unique features. In this article, we will look at some of these tools and how app developers are using them in their apps.

Caffe2 - A flexible deep learning framework

Source: Qualcomm

Caffe2 is a lightweight, modular, scalable deep learning framework developed by Facebook. It is a successor of Caffe, a project started at the University of California, Berkeley. It is primarily built for production use cases and mobile development, and it offers developers greater flexibility for building high-performance products. Caffe2 aims to provide an easy way to experiment with deep learning and to leverage community contributions of new models and algorithms. It is cross-platform and integrates with Visual Studio, Android Studio, and Xcode for mobile development. Its core C++ libraries provide speed and portability, while its Python and C++ APIs make it easy for you to prototype, train, and deploy your models. It utilizes GPUs when they are available and is fine-tuned to take full advantage of the NVIDIA GPU deep learning platform. To deliver high performance, Caffe2 uses some of the deep learning SDK libraries by NVIDIA, such as cuDNN, cuBLAS, and NCCL.

Functionalities:

- Enable automation
- Image processing
- Perform object detection
- Statistical and mathematical operations
- Supports distributed training, enabling quick scaling up or down

Applications: Facebook is using Caffe2 to help their developers and researchers train large machine learning models and deliver AI on mobile devices. Using Caffe2, they significantly improved the efficiency and quality of machine translation systems. As a result, all machine translation models at Facebook have been transitioned from phrase-based systems to neural models for all languages.

OpenCV - Give the power of vision to your apps

Source: AndroidPub

OpenCV, short for Open Source Computer Vision Library, is a collection of programming functions for real-time computer vision and machine learning. It has C++, Python, and Java interfaces and supports Windows, Linux, Mac OS, iOS, and Android. It also supports the deep learning frameworks TensorFlow and PyTorch. Written natively in C/C++, the library can take advantage of multi-core processing. OpenCV aims to provide a common infrastructure for computer vision applications and to accelerate the use of machine perception in commercial products. The library consists of more than 2500 optimized algorithms, including both classic and state-of-the-art computer vision algorithms.
Functionalities: These algorithms can be used for the following:

- Detecting and recognizing faces
- Identifying objects
- Classifying human actions in videos
- Tracking camera movements and moving objects
- Extracting 3D models of objects
- Producing 3D point clouds from stereo cameras
- Stitching images together to produce a high-resolution image of an entire scene
- Finding similar images in an image database

Applications: Plickers is an assessment tool that lets you poll your class for free, without the need for student devices. It uses OpenCV as its graphics and video SDK. You just have to give each student a card called a paper clicker, and use your iPhone/iPad to scan them to do instant checks-for-understanding, exit tickets, and impromptu polls. (A minimal Python sketch of OpenCV's face-detection API appears at the end of this article.)

Also check out: FastCV, BoofCV

TensorFlow Lite and Mobile - An Open Source Machine Learning Framework for Everyone

Source: YouTube

TensorFlow is an open source software library for building machine learning models. Its flexible architecture allows easy model deployment across a variety of platforms, ranging from desktops to mobile and edge devices. Currently, TensorFlow provides two solutions for deploying machine learning models on mobile devices: TensorFlow Mobile and TensorFlow Lite. TensorFlow Lite is an improved version of TensorFlow Mobile, offering better performance and smaller app size. Additionally, it has very few dependencies compared to TensorFlow Mobile, so it can be built and hosted in simpler, more constrained device scenarios. TensorFlow Lite also supports hardware acceleration with the Android Neural Networks API.

The catch here is that TensorFlow Lite is currently in developer preview and only covers a limited set of operators. So, to develop production-ready mobile TensorFlow apps, it is recommended to use TensorFlow Mobile. TensorFlow Mobile also supports customization to add new operators not supported by default, which is a requirement for most of the models used in different AI apps. Although TensorFlow Lite is in developer preview, its future releases "will greatly simplify the developer experience of targeting a model for small devices." It is also likely to replace TensorFlow Mobile, or at least overcome its current limitations.

Functionalities:

- Speech recognition
- Image recognition
- Object localization
- Gesture recognition
- Optical character recognition
- Translation
- Text classification
- Voice synthesis

Applications: The Alibaba tech team is using TensorFlow Lite to implement and optimize speaker recognition on the client side. This addresses many of the common issues of the server-side model, such as poor network connectivity, extended latency, and poor user experience. Google uses TensorFlow for advanced machine learning models, including Google Translate and RankBrain.

Core ML - Integrate machine learning in your iOS apps

Source: AppleToolBox

Core ML is a machine learning framework which can be used to integrate machine learning models into your iOS apps. It supports Vision for image analysis, Natural Language for natural language processing, and GameplayKit for evaluating learned decision trees. Core ML is built on top of the following low-level APIs, providing a simple higher-level abstraction over them:

- Accelerate optimizes large-scale mathematical computations and image calculations for high performance.
- Basic neural network subroutines (BNNS) provides a collection of functions with which you can implement and run neural networks trained with previously obtained data.
Core ML - Integrate machine learning in your iOS apps

Core ML is a machine learning framework that can be used to integrate machine learning models into your iOS apps. It supports Vision for image analysis, Natural Language for natural language processing, and GameplayKit for evaluating learned decision trees. Core ML is built on top of the following low-level APIs, providing a simple, higher-level abstraction over them:
- Accelerate optimizes large-scale mathematical computations and image calculations for high performance.
- Basic Neural Network Subroutines (BNNS) provides a collection of functions with which you can implement and run neural networks trained with previously obtained data.
- Metal Performance Shaders is a collection of highly optimized compute and graphics shaders designed to integrate easily and efficiently into your Metal app.

To train and deploy custom models, you can also use the Create ML framework. It is a machine learning framework in Swift that can be used to train models using native Apple technologies like Swift, Xcode, and other Apple frameworks.

Functionalities
- Face and face landmark detection
- Text detection
- Barcode recognition
- Image registration
- Language and script identification
- Design games with functional and reusable architecture

Applications
Lumina is a camera framework written in Swift for easily integrating Core ML models, as well as image streaming, QR/barcode detection, and many other features.

ML Kit by Google - Seamlessly build machine learning into your apps

ML Kit is a cross-platform suite of machine learning tools for Google's Firebase mobile development platform. It brings together Google's ML technologies, such as the Google Cloud Vision API, TensorFlow Lite, and the Android Neural Networks API, in a single SDK, enabling you to apply ML techniques to your apps easily.

You can leverage its ready-to-use APIs for common mobile use cases such as recognizing text, detecting faces, identifying landmarks, scanning barcodes, and labeling images. If these APIs don't cover your machine learning problem, you can use your own existing TensorFlow Lite models: you just have to upload your model to Firebase, and ML Kit takes care of the hosting and serving.

These APIs can run on-device or in the cloud. The on-device APIs process your data quickly and work even when there's no network connection, while the cloud-based APIs leverage the power of Google Cloud Platform's machine learning technology for an even higher level of accuracy.

Functionalities
- Automate tedious data entry for credit cards, receipts, and business cards, or help organize photos.
- Extract text from documents to increase accessibility or translate documents.
- Real-time face detection can be used in applications like video chat or games that respond to the player's expressions.
- Image labeling adds capabilities such as content moderation and automatic metadata generation.

Applications
Lose It!, a popular calorie counter app, uses the ML Kit Text Recognition API to quickly capture nutrition information, ensuring it's easy to record and extremely accurate. PicsArt uses ML Kit custom model APIs to provide over 1,000 TensorFlow-powered effects, enabling millions of users to create amazing images on their mobile phones.

Dialogflow - Give users new ways to interact with your product

Dialogflow is a Natural Language Understanding (NLU) platform that makes it easy for developers to design and integrate conversational user interfaces into mobile apps, web applications, devices, and bots. You can integrate it with Alexa, Cortana, Facebook Messenger, and other platforms your users are on. With Dialogflow you can build interfaces, such as chatbots and conversational IVR, that enable natural and rich interactions between your users and your business. It provides this human-like interaction with the help of agents, which can understand the vast and varied nuances of human language and translate them into the standard, structured meaning that your apps and services can understand. It comes in two editions: Dialogflow Standard Edition and Dialogflow Enterprise Edition.
Dialogflow Enterprise Edition users have access to Google Cloud Support and a service level agreement (SLA) for production deployments.

Functionalities
- Provide customer support
- One-click integration on 14+ platforms
- Supports multilingual responses
- Improve NLU quality by training with negative examples
- Debug using more insights and diagnostics

Applications
Domino's simplified the process of ordering pizza using Dialogflow's conversational technology. Domino's leveraged its large customer service knowledge base and Dialogflow's NLU capabilities to build both simple customer interactions and increasingly complex ordering scenarios.

Also check out
- Wit.ai
- Rasa NLU

Microsoft Cognitive Services - Make your apps see, hear, speak, understand and interpret your user needs

Cognitive Services is a collection of APIs, SDKs, and services that enable developers to easily add cognitive features to their applications, such as emotion and video detection, and facial, speech, and vision recognition, among others. You need not be an expert in data science to make your systems more intelligent and engaging. The pre-built services come with high-quality RESTful intelligent APIs for the following:
- Vision: Make your apps identify and analyze content within images and videos. Provides capabilities such as image classification, optical character recognition in images, face detection, person identification, and emotion identification.
- Speech: Integrate speech processing capabilities into your app or services, such as text-to-speech, speech-to-text, speaker recognition, and speech translation.
- Language: Your application or service will understand the meaning of unstructured text or the intent behind a speaker's utterances. It comes with capabilities such as text sentiment analysis, key phrase extraction, and automated and customizable text translation.
- Knowledge: Create knowledge-rich resources that can be integrated into apps and services. It provides features such as Q&A extraction from unstructured text, knowledge base creation from collections of Q&As, and semantic matching for knowledge bases.
- Search: Using the Search APIs, you can find exactly what you are looking for across billions of web pages. They provide features like ad-free, safe, location-aware web search, Bing visual search, custom search engine creation, and many more.

Applications
To safeguard against fraud, Uber uses the Face API, part of Microsoft Cognitive Services, to help ensure the driver using the app matches the account on file. Cardinal Blue developed PicCollage, a popular mobile app that allows users to combine photos, videos, captions, stickers, and special effects to create unique collages.
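Because the services are exposed as REST APIs, calling them from any language is straightforward. Here is a hedged sketch of a Face API detection call from Python; the region, subscription key, and image URL are all placeholders you would substitute with your own values.

```python
import requests

# Placeholders: substitute your Azure region, subscription key, and image URL
endpoint = "https://westus.api.cognitive.microsoft.com/face/v1.0/detect"
headers = {
    "Ocp-Apim-Subscription-Key": "YOUR_SUBSCRIPTION_KEY",
    "Content-Type": "application/json",
}
params = {"returnFaceAttributes": "age,emotion"}
payload = {"url": "https://example.com/photo.jpg"}

resp = requests.post(endpoint, headers=headers, params=params, json=payload)
resp.raise_for_status()

# One entry per detected face, with a bounding box and the requested attributes
for face in resp.json():
    print(face["faceRectangle"], face["faceAttributes"]["emotion"])
```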
Also check out
- AWS machine learning services
- IBM Watson

These were some of the tools that can help you integrate intelligence into your apps. These libraries make it easier to add capabilities like speech recognition, natural language processing, and computer vision, giving users the wow moment of accomplishing something that wasn't quite possible before. Along with choosing the right AI tool, you must also consider other factors that can affect your app's performance. These include the accuracy of your machine learning model (which can be affected by bias and variance), using the correct datasets for training, seamless user interaction, and resource optimization, among others.

While building any intelligent app, it is also important to keep in mind that the AI in your app should solve a problem, not exist just because it is cool. Thinking from the user's perspective will allow you to assess the importance of a particular problem. A great AI app will not just help users do something faster, but enable them to do something they couldn't do before. With the growing popularity of intelligent apps and the need to speed up their development, many companies, from huge tech giants to startups, are providing AI solutions. In the future we will definitely see more developer tools coming onto the market, making AI in apps the norm.

6 most commonly used Java Machine learning libraries
5 ways artificial intelligence is upgrading software engineering
Machine Learning as a Service (MLaaS): How Google Cloud Platform, Microsoft Azure, and AWS are democratizing Artificial Intelligence
How to earn $1m per year? Hint: Learn machine learning

Neil Aitken
01 Aug 2018
10 min read
Internet job portal Indeed.com links potential employers with people who are looking to take the next step in their careers. The proportion of job posts on its site relating to 'Data Science', a specific job in the AI category, is growing fast (see chart below). More broadly, artificial intelligence and machine learning skills, of which 'Data Scientist' is just one example, are in demand. No wonder it has been termed the sexiest job role of the 21st century.

Interest comes from an explosion of jobs in the field from big companies and start-ups, all of which are competing to come up with the best AI businesses and to earn the money that comes with software that automates tasks. The skills shortage associated with artificial intelligence represents an opportunity for any developer. There has never been a better time to consider whether reskilling or upskilling in AI could be a lucrative path for you.

[Chart: Indeed.com, proportion of job postings containing Data Scientist or Data Science. Artificial Intelligence skills are increasingly in demand and create a real opportunity for those prepared to reskill or upskill. Source: Indeed]

The AI skills gap the market is experiencing comes from the difficulty of finding an individual with a competent mixture of the very disparate faculties that AI roles require. Artificial intelligence and its near equivalents, such as machine learning and neural networks, operate at the intersection of what have mostly been two very different disciplines - statistics and software development. In simple terms, they are half coding, half maths.

Hamish Ogilvy, CEO of AI-based internal search company Sajari, is all too familiar with the problem. He's on the front line, hiring AI developers. "The hardest part", says Ogilvy, "is that AI is pretty complex and the average developer/engineer does not have the background in maths/stats/science to actually understand what is happening. On the flip side the trouble with the stats/maths/science people is that they typically can't code, so finding people in that sweet spot that have both is pretty tough."

He's right. The New York Times suggests that the pool of qualified talent is only 10,000 people worldwide. Those who do have jobs are typically happily ensconced, paid well, treated appropriately, and given no reason whatsoever to want to leave.

[Chart: Judged by $ investments in the area alone, AI skills are worth developing for those wishing to stay current on technology skills.]

In fact, an instinct to develop AI skills will serve any technology employee well. No one can have escaped the many estimates, from reputable consultancies, suggesting that automation will replace up to 30% of jobs in the next 10 years. No job is safe. Every industry is touched by AI in some form, so anyone responsible for managing their own skills could learn ML and AI to stay relevant. Even if you don't want to move out of your current job, learning ML will probably help you adapt better in your industry.

What is a typical AI job and what will it pay?

OpenAI, a world-class artificial intelligence research laboratory, recently revealed the salaries of some of its key data science employees. Those working in the AI field with a specialization can earn $300k to $500k in their first year out of university.
Experts in artificial intelligence now command salaries of up to $1m.

[Chart: The New York Times observes AI salaries. Source: The New York Times]

Indraneil Roy, an expert in AI and talent acquisition who works for Edge Networks, puts it this way when outlining the difficulties of hiring the right skills and explaining why wages in the field are so high: "The challenge is the quality of resources. As demand is high for this skill, we are already seeing candidates with fake experience and work pedigree not up to standards."

The phenomenon is also causing a 'brain drain' in universities. About a third of jobs in the AI field will go to someone with a Ph.D., and all of those are drawn from universities working on the discipline, often lured by the significant pay packages on offer. So, with huge demand and the universities drained, where will future AI employees come from?

3 ways to skill up to become an AI expert (and earn all that money)

There is still not a lot of agreed terminology, or even agreed job roles and responsibilities, in the sector. However, some things are clear. Those wishing to move into the field of AI must understand the conceptual thinking involved as a starting point, whether that understanding is gained on the job or through an informal or formal educational course. Specifically, most jobs in the specialty require a working knowledge of neural networks, data and analytics, and predictive analytics, with some basic programming and database skills. There are some great resources available online to train you up. Most, as you'd expect, are available on your smartphone, so there really is no excuse for not having a look.

1. Free online courses: machine learning, statistics, and probability

Hamish Ogilvy summed up the online education available in the area well. There are "so many free courses now on AI from Stanford," he said, "that people are able to educate themselves and make up for the failings of antiquated university courses. AI is just maths really," he says, "complex models and stats. So that's what people need grounding in to be successful."

Microsoft offers free AI courses for technical professionals. Microsoft's training materials are second to none; they're also free, and they provide a shortcut to a credible understanding of the area simply because they come from a technical behemoth. Importantly, Microsoft also has a list of AI services you can play with, again for free. For example, a natural language engine lets you submit text from instant messaging conversations and establish the sentiment being felt by the writer. Practical experience of the tools, processes, and concepts involved will set you apart. See below.

[Image: Check out Microsoft's free AI training program for developers.]

Google is taking a proactive stance on machine learning. It sees ML's potential to improve efficiency in every industry and also offers free ML training courses on its site.

2. Take courses on AI/ML

Packt's machine learning courses, books and videos: Packt is working towards a mission to help the world put software to work in new ways, through the delivery of effective learning and information services to IT professionals.
It has published over 6,000 books and videos so far, providing IT professionals with the actionable knowledge they need to get the job done - whether that's specific learning on an emerging technology or optimizing key skills in more established tools.

You can choose from a variety of Packt's books, videos and courses for AI/ML. Here's a list of top ones:
- Artificial Intelligence by Example [Book]
- Artificial Intelligence for Big Data [Book]
- Learn Artificial Intelligence with TensorFlow [Video]
- Introduction to Artificial Intelligence with Java [Video]
- Advanced Artificial Intelligence Projects with Python [Video]
- Python Machine Learning - Second Edition [Book]
- Machine Learning with R - Second Edition [Book]

Coursera's machine learning courses: Coursera makes training courses for a variety of subjects available online. Taken from actual university course content and delivered with tests, videos, and training notes, all accessed online, each course is roughly equivalent to a university module. Students pick up a roughly undergraduate level of understanding of the content involved. Coursera's courses are often cited as meritworthy and are recognized in the industry. Costs vary, but are typically between $2k and $5k per course.

3. Learn by doing

Familiarize yourself with relevant frameworks and tools, including TensorFlow, Python, and Keras.

TensorFlow from Google is the most used open source AI software library. You can use existing code in your experiments and experiment with neural networks in much the same way as you can with Microsoft's services.

Python is a programming language written for a big data world. Its proponents will tell you that Python saves developers hundreds of lines of code, allowing you to tie together information and systems faster than ever before. Python is used extensively in ML and AI applications and should be at the top of your study list.

Keras, a deep learning library, is similarly ubiquitous. It's a high-level neural network API designed to allow prototyping of your software as fast as possible.

Finally, a lesser-known but still valuable resource is Accord.NET. It is one final example of the many software elements with which you can engage to train yourself up. The Accord.NET framework will expose you to image libraries, machine learning, and real-time facial recognition.
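If you want a sense of how little code the "learn by doing" route actually requires, here is a minimal Keras sketch that trains a tiny classifier on random stand-in data. The data shapes and layer sizes are arbitrary, chosen for illustration only.

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Stand-in data: 100 samples, 8 features each, binary labels
X = np.random.rand(100, 8)
y = np.random.randint(2, size=(100, 1))

# A two-layer network defined in a handful of lines
model = Sequential([
    Dense(16, activation="relu", input_shape=(8,)),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=16)
```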
Earn extra points with employers

AI has several lighthouse tasks which are proving the potential of the technology in these still early stages. We've included a couple of examples, natural language processing and image recognition, above. Practical expertise in these areas specifically - image or voice recognition, or pattern matching - is valued highly by employers.

Alternatively, have you patented something? A registered patent in your name is highly prized, especially something to do with machine learning. Both will help you showcase extra skills and achievements that strengthen your application. The specifics of how to apply for patents differ by country, but you can find out more about the overall principles of how to submit an idea here.

Passion and engagement in the subject are also clearly appealing characteristics for potential employers to see in applicants. Participating in competitions like Kaggle, and having a portfolio of projects you can showcase on facilities like GitHub, are also well prized. Of all of these suggestions, for those employed, any on-the-job experience you can get will stand you in the best stead. Indraneil says: "Individual candidates need to spend more time doing relevant projects while in employment. Start-ups involved in building products and platforms on AI seem to have better talent."

The fact that there are not many AI specialists around is a bad sign

There is a demand for employees with AI skills, and an investment in relevant training may pay you well. Unfortunately, the underlying problem this situation reveals could be far worse than the problems experienced so far. Together, once found, all these AI scientists are going to automate millions of jobs, in every industry, in every country around the world. If industry, governments, and universities cannot train enough people to fill the roles being created by an evolving skills market, we may rightly worry about how they will deal with retraining all those displaced by AI, for whom there may be no obvious replacement role available.

18 striking AI Trends to watch in 2018 - Part 1
DeepMind, Elon Musk, and others pledge not to build lethal AI
Attention designers, Artificial Intelligence can now create realistic virtual textures
Is Facebook-backed PyTorch better than Google's TensorFlow?

Savia Lobo
15 Sep 2017
6 min read
The rapid rise of tools and techniques in artificial intelligence and machine learning of late has been astounding. Deep learning, or "machine learning on steroids" as some say, is one area where data scientists and machine learning experts are spoilt for choice in terms of the libraries and frameworks available. Two libraries are starting to emerge as frontrunners: TensorFlow is the best in class, but PyTorch is a new entrant in the field that could compete. So, PyTorch vs TensorFlow, which one is better? How do the two deep learning libraries compare to one another?

TensorFlow and PyTorch: the basics

Google's TensorFlow is a widely used machine learning and deep learning framework. Open sourced in 2015 and backed by a huge community of machine learning experts, TensorFlow has quickly grown to be THE framework of choice of many organizations for their machine learning and deep learning needs. PyTorch, on the other hand, a recently developed Python package by Facebook for training neural networks, is adapted from the Lua-based deep learning library Torch. PyTorch is one of the few available DL frameworks that use a tape-based autograd system to allow building dynamic neural networks in a fast and flexible manner.

PyTorch vs TensorFlow

Let's get into the details - let the PyTorch vs TensorFlow match-up begin...

What programming languages support PyTorch and TensorFlow?

Although primarily written in C++ and CUDA, TensorFlow contains a Python API sitting over the core engine, making it easier for Pythonistas to use. Additional APIs for C++, Haskell, Java, Go, and Rust are also included, which means developers can code in their preferred language. Although PyTorch is a Python package, there's provision for you to code using the basic C/C++ languages via the APIs provided. If you are comfortable using the Lua programming language, you can code neural network models in PyTorch using the Torch API.

How easy are PyTorch and TensorFlow to use?

TensorFlow can be a bit complex to use as a standalone framework, and can pose some difficulty in training deep learning models. To reduce this complexity, one can use the Keras wrapper, which sits on top of TensorFlow's complex engine and simplifies the development and training of deep learning models. TensorFlow also supports distributed training, which PyTorch currently doesn't. Due to the inclusion of the Python API, TensorFlow is also production-ready, i.e., it can be used to train and deploy enterprise-level deep learning models.

PyTorch was rewritten in Python due to the complexities of Torch. This makes PyTorch more native to developers. It has an easy-to-use framework that provides maximum flexibility and speed, and it allows quick changes within the code during training without hampering performance. If you already have some experience with deep learning and have used Torch before, you will like PyTorch even more, because of its speed, efficiency, and ease of use. PyTorch includes a custom-made GPU allocator, which makes deep learning models highly memory efficient, so training large deep learning models becomes easier. Hence, large organizations such as Facebook, Twitter, Salesforce, and many more are embracing PyTorch. In this PyTorch vs TensorFlow round, PyTorch wins out in terms of ease of use.

Training deep learning models with PyTorch and TensorFlow

Both TensorFlow and PyTorch are used to build and train neural network models.
TensorFlow works on a static computational graph (SCG), meaning the graph is defined statically before the model starts execution. Once execution starts, the only way to tweak values within the model is through tf.Session and tf.placeholder tensors. PyTorch is well suited to training RNNs (recursive neural networks), as they run faster in PyTorch than in TensorFlow. It works on a dynamic computational graph (DCG), so one can define and make changes within the model on the go. In a DCG, each block can be debugged separately, which makes training neural networks easier. TensorFlow has recently come up with TensorFlow Fold, a library designed to create TensorFlow models that work on structured data. Like PyTorch, it implements DCGs and gives massive computational speed-ups of up to 10x on CPU and more than 100x on GPU! With the help of dynamic batching, you can now implement deep learning models which vary in size as well as structure.
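To make the static-versus-dynamic distinction concrete, here is a minimal side-by-side sketch. It uses the TensorFlow 1.x-style API that was current when this comparison was written; the shapes and values are arbitrary.

```python
import numpy as np
import tensorflow as tf
import torch

# TensorFlow (static): define the whole graph first, then run it in a session
x = tf.placeholder(tf.float32, shape=(None, 4))
w = tf.Variable(tf.random_normal([4, 1]))
y = tf.matmul(x, w)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    out = sess.run(y, feed_dict={x: np.ones((1, 4), dtype=np.float32)})

# PyTorch (dynamic): the graph is built on the fly as ordinary Python runs
xt = torch.ones(1, 4)
wt = torch.randn(4, 1, requires_grad=True)
yt = xt @ wt          # a graph node is created here, at run time
yt.sum().backward()   # gradients are computed immediately
print(out, wt.grad)
```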
Comparing GPU and CPU optimizations

TensorFlow has faster compile times than PyTorch and provides flexibility for building real-world applications. It can run on literally any kind of processor, from CPUs, GPUs, and TPUs to mobile devices and the Raspberry Pi (IoT devices). PyTorch, on the other hand, includes tensor computations which can speed up deep neural network models up to 50x or more using GPUs. These tensors can live on the CPU or GPU, and the CPU and GPU backends are written as independent libraries, making PyTorch efficient to use irrespective of neural network size.

Community support

TensorFlow is one of the most popular deep learning frameworks today, and with this comes huge community support. It has great documentation and an eloquent set of online tutorials. TensorFlow also includes numerous pre-trained models, hosted and available on GitHub, which give developers and researchers who are keen to work with TensorFlow ready-made material that saves them time and effort. PyTorch, on the other hand, has a relatively smaller community, since it was developed fairly recently. Compared to TensorFlow, the documentation isn't that great and ready-made code is less readily available. However, PyTorch does allow individuals to share their pre-trained models with others.

PyTorch and TensorFlow - a David & Goliath story

As it stands, TensorFlow is clearly favoured and used more than PyTorch, for a variety of reasons. TensorFlow is best suited for a wide range of practical purposes. It is the obvious choice for many machine learning and deep learning experts because of its vast array of features, and its maturity in the market is important too. It has better community support, multiple language APIs, good documentation, and is production-ready thanks to the availability of ready-to-use code. Hence, it is better suited for someone who wants to get started with deep learning, or for organizations wanting to productize their deep learning models.

PyTorch is relatively new and has a smaller community than TensorFlow, but it is fast and efficient. In short, it gives you all the power of Torch wrapped in the usefulness and ease of Python. Because of its efficiency and speed, it's a good option for small, research-based projects. As mentioned earlier, companies such as Facebook, Twitter, and many others are using PyTorch to train deep learning models. However, its adoption is yet to go mainstream. The potential is evident; PyTorch is just not ready yet to challenge the beast that is TensorFlow. However, considering its growth, the day is not far off when PyTorch is further optimized and offers more functionality - to the point that it becomes the David to TensorFlow's Goliath.
Top 5 tools for reinforcement learning

Pravin Dhandre
21 May 2018
4 min read
After deep learning, reinforcement learning (RL) is the hottest branch of artificial intelligence, and it is finding speedy adoption in tech-driven companies. Simply put, reinforcement learning is all about algorithms tracking previous actions or behaviour and providing optimized decisions using the trial-and-error principle. Read How Reinforcement Learning Works to know more.

It might sound theoretical, but gigantic firms like Google and Uber have tested out this exceptional mechanism and have been highly successful in cutting-edge applied robotics fields such as self-driving vehicles. Other top giants, including Amazon, Facebook, and Microsoft, have centralized their innovations around deep reinforcement learning across automotive, supply chain, networking, finance, and robotics.

With such humongous achievements, reinforcement learning libraries have caught the eye of the AI developer community and gained prime interest for training agents and reinforcing the behavior of the trained agents. In fact, researchers believe in the tremendous potential of reinforcement learning to address unsolved real-world challenges like material discovery, space exploration, and drug discovery, and to build much smarter artificial intelligence solutions. In this article, we will have a look at the most promising open source tools and libraries to start building your reinforcement learning projects on.

OpenAI Gym

OpenAI Gym, the most popular environment for developing and comparing reinforcement learning models, is completely compatible with high-computation libraries like TensorFlow. This rich, Python-based AI simulation environment supports training agents on classic games like Atari, as well as on simulators from other branches of science, like robotics and physics, such as the Gazebo and MuJoCo simulators. The Gym environment also offers APIs which facilitate feeding observations, along with rewards, back to agents. OpenAI has also recently released a new platform, Gym Retro, made up of 58 varied and specific scenarios from the Sonic the Hedgehog, Sonic the Hedgehog 2, and Sonic 3 games. Reinforcement learning enthusiasts and AI game developers can register for this competition.

Read: How to build a cartpole game using OpenAI Gym

TensorFlow

This is another well-known open source library by Google, used every day by more than 95,000 developers in areas such as natural language processing, intelligent chatbots, and robotics. The TensorFlow community has developed an extended version called TensorLayer, providing popular RL modules that can be easily customized and assembled for tackling real-world machine learning challenges. The TensorFlow community supports framework development in the most popular languages, such as Python, C, Java, JavaScript, and Go. Google and its TensorFlow team are in the process of coming up with a Swift-compatible version to enable machine learning on Apple platforms.

Read: How to implement Reinforcement Learning with TensorFlow

Keras

Keras offers simplicity in implementing neural networks, with just a few lines of code and fast execution. It provides senior developers and principal scientists with a high-level interface to the tensor computation framework TensorFlow, and centers on the model architecture. So, if you have any existing RL models written in TensorFlow, just pick up the Keras framework and you can transfer the learning to the related machine learning problem.
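To ground the Gym description above - the API that feeds observations and rewards back to the agent - here is the canonical interaction loop, a minimal sketch using a random policy in place of a learned one (this uses the classic pre-0.26 Gym step signature).

```python
import gym

env = gym.make("CartPole-v1")
obs = env.reset()
total_reward, done = 0.0, False

while not done:
    action = env.action_space.sample()         # stand-in for a learned policy
    obs, reward, done, info = env.step(action) # observation and reward fed back
    total_reward += reward

env.close()
print("Episode reward:", total_reward)
```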
DeepMind Lab

DeepMind Lab is a Google 3D platform, with customization, for agent-based AI research. It is used to understand how self-sufficient artificial agents learn complicated tasks in large, partially observed environments. With the victory of its AlphaGo program against Go players in early 2016, DeepMind captured the public's attention. With its three hubs spread across London, Canada, and France, the DeepMind team is focusing on core AI fundamentals, which includes building a single AI system backed by state-of-the-art methods and distributional reinforcement learning. To know more about how DeepMind Lab works, read How Google's DeepMind is creating images with artificial intelligence.

PyTorch

PyTorch, open sourced by Facebook, is another well-known deep learning library adopted by many reinforcement learning researchers. It was recently preferred almost unanimously by the top 10 finishers in a Kaggle competition. With dynamic neural networks and strong GPU acceleration, RL practitioners use it extensively to conduct experiments on implementing policy-based agents and to create new adventures. One notable research project is Playing GridWorld, where PyTorch demonstrated its capabilities with renowned RL algorithms like policy gradients and the simplified actor-critic method.

Summing it up

There you have it: the top tools and libraries for reinforcement learning. The list doesn't end here, as there is a lot of work happening on platforms and libraries for scaling reinforcement learning. Frameworks like RL4J and RLlib are already in development and will soon be fully available for developers to simulate their models in their preferred coding language.
7 Best Practices for Logging in Node.js

Guest Contributor
05 Mar 2019
5 min read
Node.js is one of the easiest platforms for prototyping and agile development. It's used by large companies looking to scale their products quickly. However, using a platform on its own isn't enough for most big projects today. Logging is also a key part of ensuring your web or mobile app runs smoothly for all users.

Application logging is the practice of recording information about your application's runtime. These files are usually saved to a logging platform which helps identify potential problems. While no app is perfect 100% of the time, logging helps developers cut down on errors and even cyber attacks. The nature of software is complex. We can't always predict how an application will react to data, errors, or system changes. Logging helps us better understand our own programs. So how do you handle logging in Node.js specifically? Following are some of the best practices for logging in Node.js to get the best results.

1. Understand the regulations

Let's discuss the current legal regulations about what you can and cannot log. You should never log sensitive information or personal data. That means excluding credentials like passwords, credit card numbers, or even email addresses. Recent changes to regulations, like Europe's GDPR, make this even more essential. You don't want to get tied up in the legal red tape of sensitive data. When in doubt, stick to the three things that are needed for a solid log message: timestamp, log level, and description. Beyond this, you don't need any extensive framework.

2. Take advantage of Winston

The best-known logging framework in the Node.js ecosystem is Winston. Winston is built around the idea of transports for your logs, and you can install it directly into your application. Follow this guide to install Winston on your own. Winston is a powerful tool that comes with different logging levels with values. You can fully customize this console as well, with colors, messages, and output details. The most recent version available is 3.0.0, but always make sure you have the latest edition to keep your app running smoothly.

3. Add Morgan

In addition to Winston, Morgan is an HTTP request logger that collects server logs and standardizes them. Think of it as a logging simplification. While you're free to use Morgan on its own, most developers choose to use it with Winston, since they make a powerful team. Morgan also works well with Express.js.

4. Consider the Intel package

While Winston and Morgan are a great combination, they're not your only option. Intel is another package solution with similar features as well as unique options. While you'll see a lot of overlap in what they offer, Intel also includes a stack trace object. These features will come in handy when it's time to actually debug. Because it gives a stack trace as a JSON object, it's much easier to pass messages up the logger chain. Think of Intel like the breadcrumbs taking your developers to the error.

5. Use environment variables

You'll hear a lot of discussion about configuration management in the Node.js world. Decoupling your code from services and databases is no straightforward process. In Node.js, it's best to use environment variables. You can look up values from process.env within your code. To determine which environment your program is running in, look up the NODE_ENV variable. You can also use the nconf module found here.

6. Choose a style guide

No developer wants to spend time reading through lines of code only to have to change the spaces to tabs, reformat the braces, etc.
Style guides are a must, especially when logging in Node.js. If you're working with a team of developers, it's time to decide on a team style guide that everyone sticks to across the board. When the code is written in a consistent style, you don't have to worry about opinionated developers fighting for a say. It doesn't matter which style you stick with; just make sure you can actually stick to it. The Google style guide for JavaScript is a great place to start if you can't make a single decision.

7. Deal with errors

Finally, accept that errors will happen, and prepare for them. You don't want an error to bring down your entire software or program. Exception management is key. Use an async structure to cleanly handle any errors. Whether the app simply restarts or moves on to the next stage, make sure something happens. Users need their errors to be handled.

As you can see, there are a few best practices to keep in mind when logging in Node.js. Don't rely on your developers alone to debug the platform. Set a structure in place to handle these problems as they arise. Your users expect a quality experience every time. Make sure you can deliver with the tips above.

Author bio: Ashley Lipman is a content marketing specialist and an award-winning writer who discovered her passion for providing creative solutions for building brands online. Since her first high school award in Creative Writing, she continues to deliver awesome content through various niches.

Introducing Zero Server, a zero-configuration server for React, Node.js, HTML, and Markdown
5 reasons you should learn Node.js
Deploying Node.js apps on Google App Engine is now easy
Best game engines for Artificial Intelligence game development

Natasha Mathur
24 Aug 2018
8 min read
"A computer would deserve to be called intelligent if it could deceive a human into believing that it was human" — Alan Turing It is quite common to find games which are initially exciting but take a boring turn eventually, making you want to quit the game. Then, there are games which are too difficult to hold your interest and you end up quitting in the beginning phase itself.  These are also two of the most common problems that game developers face when building games. This is where AI comes to your rescue, to spice things up. Why use Artificial Intelligence in games? The major reason for using AI in games is to provide a challenging opponent to make the game more fun to play. But, AI in the gaming industry is not a recent news. The gaming world has been leveraging the wonders of AI for a long time now. One of the first examples of AI is the computerized game, Nim, was created back in 1951. Other games such as Façade, Black & White, The Sims, Versu, and F.E.A.R. are all great AI games, that hit the market long time back. Even modern-day games like Need for Speed, Civilization, or Counter-Strike use AI. AI controls a lot of elements in games and is usually behind characters such as enemy creeps, neutral merchants, or even animals. AI in games is used to enable the non-human characters (NPCs) with responsive, adaptive, and intelligent behaviors similar to human-like intelligence. AI helps make NPCs seem intelligent as they are able to actively change their level of skills based on the person playing the game. This makes the game seem more personalized to the gamer. Playing video games is fun, and developing these games is equally fun. There are different game engines on the market to help with the development of games. A game engine is a software that provides game creators with the necessary set of features to build games quickly and efficiently. Let’s have a look at the top game engines for Artificial Intelligence game development. Unity3D Developer:  Unity Technologies Release Date: June 8, 2005 Unity is a cross-platform game engine which provides users with the ability to create games in both 2D and 3D. It is extremely popular and loved by game designers from large and small studios alike. Apart from 3D, and 2D games, it also helps with simulations for desktops, laptops, home consoles, smart TVs, and mobile devices. Key AI features: Unity offers a machine learning agents toolkit to the game developers, which help them include AI agents within games. As per the Unity team, “machine Learning Agents Toolkit (ML-Agents) is an open-source Unity plugin that enables games and simulations to serve as environments for training intelligent agents”. Unity AI - Unity 3D Artificial Intelligence  The ML-Agents SDK transforms games and simulations created using the Unity Editor into environments for training intelligent agents. These ML agents are trained using deep Reinforcement Learning, imitation learning, neuroevolution, or other machine learning methods via Python APIs. There’s also a TensorFlow based algorithm provided by Unity to allow game developers to easily train intelligent agents for 2D, 3D, and VR/AR games. These trained agents are then used for controlling the NPC behavior within games. The ML-Agents toolkit is beneficial for both game developers and AI researchers. Apart from this, Unity3D is easy to use and learn, compatible with every game platform and provides great community support. 
Learning resources:
- Unity AI Programming Essentials
- Unity 2017 Game AI Programming - Third Edition
- Unity 5.x Game AI Programming Cookbook

Unreal Engine 4

Developer: Epic Games
Release date: May 1998

Unreal Engine is widely used among developers all around the world. It is a collection of integrated tools for game developers, helping them build games, simulations, and visualizations. It is also among the top game engines used to develop high-end AAA titles. Gears of War, Batman: Arkham Asylum, and Mass Effect are some of the popular games developed using Unreal Engine.

Key AI features: Unreal Engine uses a set of tools which help add AI capabilities to a game, including the Behavior Tree, Navigation Component, Blackboard Asset, Enumeration, Target Point, AI Controller, and Navigation Volumes:
- The Behavior Tree creates different states and the logic behind the AI.
- The Navigation Component handles movement for the AI.
- The Blackboard Asset stores information and acts as local variables for the AI.
- Enumeration creates states and allows alternating between them.
- The Target Point creates a basic path node form.
- The AI Controller and Character tools handle communication between the world and the controlled pawn for the AI.
- Finally, the Navigation Volumes feature creates a navigation mesh in the environment to allow easy pathfinding for the AI.

There are also features such as Blueprint Visual Scripting, which can be converted into performant C++ code, AIComponents, and the Environment Query System (EQS), which gives agents the ability to perceive their environment.

Apart from its AI capabilities, Unreal Engine offers the largest community support, with a lifetime's worth of video tutorials and assets. It is also compatible with a variety of operating platforms, such as iOS, Android, Linux, Mac, Windows, and most game consoles. However, certain built-in tools in Unreal Engine can be hard for beginners to learn.

Learning resources:
- Unreal Engine 4 AI Programming Essentials

CryEngine 3

Developer: Crytek
Release date: May 2, 2002

CryEngine is a powerful game development platform that comes packed with a set of tools and features to create world-class gaming experiences. It is the game engine behind games such as Sniper: Ghost Warrior 2 and SNOW.

Key AI features: CryEngine comes with an AI system designed for the easy creation of custom AI actors, flexible enough to handle a large set of complex, varied worlds. The core of CryEngine's AI system is based on lots of scripting. There are different AI elements within this system that add AI capabilities to the NPCs within the game:
- AI Actions allow developers to script AI behaviors without creating new code.
- The AI Actors Logger can log AI events and signals to files.
- AI Control Objects use AI objects to control AI entities/actors.
- AI Debug Draw is the primary tool offered by CryEngine for information on the current state of the AI system and AI actors.
- The AI Debugger registers the inputs that AI agents receive and the decisions they make in real time during a game session.
- The AI Sequence system works in parallel to the FG and AI systems to simplify and group AI control.

CryEngine offers the easiest AI coding of any tech currently on the market. However, since CryEngine is relatively new compared to other game engines, it does not have a very flourishing community yet, and despite the easy AI coding, its overall learning curve is high.
Panda3D

Developer: Disney Interactive (until 2010), Walt Disney Imagineering, Carnegie Mellon University
Release date: 2002

Panda3D is a game engine: a framework for 3D rendering and game development for Python and C++ programs. It includes graphics, audio, I/O, collision detection, and other abilities for the creation of 3D games.

Key AI features: Panda3D comes packed with an AI library named PandAI v1.0. PandAI is an AI library which provides 'artificially intelligent' behavior for NPCs (non-player characters) in games. The PandAI library offers functionality for steering behaviors (seek, flee, pursue, evade, wander, flock, obstacle avoidance, path following) and pathfinding (which helps NPCs intelligently avoid obstacles via the shortest path).

The library is composed of several different entities. There is a main AIWorld class that updates any AICharacters added to it. Each AICharacter has its own AIBehaviors object for tracking all position and rotation updates, and each AIBehaviors object implements the steering and pathfinding behaviors. These features within Panda3D give you the ability to call the respective functions.

Panda3D is a relatively simple game engine which lets you add AI capabilities to your games. Its community is not as robust as those of the other engines, but it has a low learning curve.
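As a sense of how little code PandAI needs, here is a hedged sketch of the 'seek' steering behavior, modeled on the style of the Panda3D manual's examples; the model path and the mass/force tuning numbers are placeholders.

```python
from direct.showbase.ShowBase import ShowBase
from panda3d.ai import AIWorld, AICharacter

class SeekDemo(ShowBase):
    def __init__(self):
        ShowBase.__init__(self)
        self.ai_world = AIWorld(self.render)

        # Placeholder actor; mass, movement force, and max force are tuning values
        seeker = self.loader.loadModel("models/panda")
        seeker.reparentTo(self.render)
        ai_char = AICharacter("seeker", seeker, 100, 0.05, 5)
        self.ai_world.addAiChar(ai_char)

        # Steer the character toward a target node in the scene graph
        target = self.render.attachNewNode("target")
        target.setPos(10, 10, 0)
        ai_char.getAiBehaviors().seek(target)

        self.taskMgr.add(self.update_ai, "update-ai")

    def update_ai(self, task):
        self.ai_world.update()  # advance all registered AICharacters each frame
        return task.cont

SeekDemo().run()
```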
AI is a fantastic tool which makes the entities in games seem more organic, alive, and real. The main goal here is not to copy the entire human thought process, but simply to sell the illusion of life. These game engines provide developers with the entire framework needed to add AI capabilities to their games, and the game development process is more fun when there is no need to create every system - physics, graphics, and AI - from scratch. If you're wondering which of the four engines mentioned in this article is the best for AI, there is no single answer: selecting the best AI game engine depends on the requirements of your project.

Game Engine Wars: Unity vs Unreal Engine
Unity switches to WebAssembly as the output format for the Unity WebGL build target
Developing Games Using AI
5 ways artificial intelligence is upgrading software engineering

Melisha Dsouza
02 Sep 2018
8 min read
47% of digitally mature organizations, or those that have advanced digital practices, said they have a defined AI strategy (source: Adobe). It is estimated that AI-enabled tools alone will generate $2.9 trillion in business value by 2021, and 80% of enterprises are smartly investing in AI. The stats speak for themselves: AI clearly follows the motto "go big or go home". This explosive growth of AI in different sectors of technology is also beginning to show its colors in software development.

Shawn Drost, co-founder and lead instructor of the coding boot camp Hack Reactor, says that AI still has a long way to go and is only impacting the workflow of a small portion of software engineers on a minority of projects right now. AI promises to change how organizations conduct business and to make applications smarter. It is only logical, then, that software development - the way we build apps - will be impacted by AI as well. Forrester Research recently surveyed 25 application development and delivery (AD&D) teams, and respondents said AI will improve planning, development, and especially testing. We can expect better software created under traditional environments.

5 areas of software engineering AI will transform

The five major spheres of software development - software design, software testing, GUI testing, strategic decision-making, and automated code generation - are all areas where AI can help. A majority of the interest in applying AI to software development is already seen in automated testing and bug detection tools. Next in line are software design precepts and decision-making strategies, and finally automated software deployment pipelines. Let's take an in-depth look into the areas of high and medium interest of software engineering impacted by AI, according to the Forrester Research report.

[Chart source: Forbes.com]

#1 Software design

In software engineering, planning a project and designing it from scratch require designers to apply their specialized learning and experience to come up with alternative solutions before settling on a definite one. A designer begins with a vision of the solution and then moves back and forth, investigating design changes, until reaching the desired solution. Settling on the correct design choices at each stage is a tedious and error-prone activity for designers. Along these lines, several AI developments have demonstrated the advantages of enhancing traditional methods with intelligent agents: the agent behaves like a personal assistant to the user, able to offer timely guidance on how to carry out design tasks.

For instance, take the example of AIDA - the Artificial Intelligence Design Assistant - deployed by Bookmark, a website-building platform. Using AI, AIDA understands a user's needs and desires and uses this knowledge to create an appropriate website for the user. It makes selections from millions of combinations to create a website style, focus, imagery, and more, customized for the user. In about two minutes, AIDA designs the first version of the website, and from that point it becomes a drag-and-drop operation. You can get a detailed overview of this tool on Design Shack.

#2 Software testing

Applications interact with each other through countless APIs. They leverage legacy systems and grow in complexity every day. This increase in complexity brings its fair share of challenges, which machine-based intelligence can help overcome.
AI tools can be used to generate test data, check data validity, improve and analyze test coverage, and manage tests. Artificial intelligence, trained right, can help ensure the testing performed is error-free, and testers freed from repetitive manual tests have more time to create new automated software tests with sophisticated features. Also, if software tests are repeated every time source code is modified, those repeated runs can be not only time-consuming but extremely costly. AI comes to the rescue once again by automating the testing for you. With AI-automated testing, one can increase the overall scope of tests, leading to an overall improvement in software quality.

Take, for instance, the Functionize tool. It enables users to test fast and release faster with AI-enabled cloud testing. Users just have to type a test plan in English, and it will automatically be converted into a functional test case. The tool allows one to elastically scale functional, load, and performance tests across every browser and device in the cloud. It also includes self-healing tests that update autonomously in real time. SapFix is another AI hybrid tool, deployed by Facebook, which can automatically generate fixes for specific bugs identified by Sapienz. It then proposes these fixes to engineers for approval and deployment to production.

#3 GUI testing

Graphical user interfaces (GUIs) have become central to interacting with today's software. They are increasingly being used in critical systems, and testing them is necessary to avert failures. With very few tools and techniques available to aid in the testing process, testing GUIs is difficult. Currently used GUI testing methods are ad hoc. They require the test designer to perform humongous tasks like manually developing test cases, identifying the conditions to check during test execution, determining when to check these conditions, and finally evaluating whether the GUI software is adequately tested. Phew! Now that is a lot of work. And don't forget that if the GUI is modified after being tested, the test designer must change the test suite and re-test. As a result, GUI testing today is resource-intensive, and it is difficult to determine whether the testing is adequate.

Applitools is a GUI testing tool empowered by AI. The Applitools Eyes SDK automatically tests whether visual code is functioning properly or not. Applitools enables users to test their visual code just as thoroughly as their functional UI code, to ensure that the visual look of the application is as expected. Users can test how their application looks across multiple screen layouts to ensure that they all fit the design, and can keep track of both the behaviour and the look of a web page. In short, users can test everything they develop, from the functional behavior of their application to its visual look.

#4 Using artificial intelligence in strategic decision-making

Normally, developers have to go through a long process to decide what features to include in a product. However, a machine learning solution trained on business factors and past development projects can analyze the performance of existing applications and help both engineering teams and business stakeholders, like project managers, find solutions that maximize impact and cut risk. Normally, the transformation of business requirements into technology specifications requires a significant timeline for planning.
Machine learning can help software development companies speed up this process, deliver the product in less time, and increase revenue within a short span. The AI Canvas is a well-known tool for strategic decision-making. The canvas helps identify the key questions and feasibility challenges associated with building and deploying machine learning models in the enterprise. The AI Canvas is a simple tool that helps enterprises organize what they need to know into seven categories: prediction, judgement, action, outcome, input, training, and feedback. Clarifying these seven factors for each critical decision throughout the organization will help in identifying opportunities for AI to either reduce costs or enhance performance.

#5 Automatic code generation / intelligent programming assistants

Coding a huge project from scratch is often labour-intensive and time-consuming. An intelligent AI programming assistant can reduce the workload to a great extent. To combat time and money constraints, researchers have tried to build systems that can write code before, but the problem is that these methods aren't that good with ambiguity. Hence, a lot of detail is needed about what the target program aims to do, and writing down these details can be as much work as just writing the code. With AI, the story can be flipped.

Bayou, an AI-based application, is an intelligent programming assistant. It began as an initiative aimed at extracting knowledge from online source code repositories like GitHub. Users can try it out at askbayou.com. Bayou follows a method called neural sketch learning: it trains an artificial neural network to recognize high-level patterns in hundreds of thousands of Java programs by creating a "sketch" for each program it reads and then associating this sketch with the "intent" that lies behind the program. This DARPA initiative aims at making programming easier and less error-prone. Sounds intriguing? Now that you know how this tool works, why not try it for yourself via i-programmer.info.

Summing it all up

Software engineering has seen massive transformation over the past few years. AI and software intelligence tools aim to make software development easier and more reliable. According to a Forrester Research report on AI's impact on software development, automated testing and bug detection tools use AI the most to improve software development. It will be interesting to see the future developments in software engineering empowered by AI. I'm expecting faster, more efficient, more effective, and less costly software development cycles, while engineers and other development personnel focus on bettering their skills to make advanced use of AI in their processes.

Implementing Software Engineering Best Practices and Techniques with Apache Maven
Intelligent Edge Analytics: 7 ways machine learning is driving edge computing adoption in 2018
15 million jobs in Britain at stake with AI robots set to replace humans at workforce


Top 5 programming languages for crunching Big Data effectively

Amey Varangaonkar
04 Apr 2018
8 min read
One of the most important decisions that Big Data professionals have to make, especially the ones who are new to the scene or are just starting out, is choosing the best programming language for Big Data manipulation and analysis. Understanding the Big Data problem and framing the architecture to solve it is not quite enough these days - the execution needs to be perfect as well, and choosing the right language goes a long way.

The best languages for big data

In this article, we look at five of the most popularly used - not to mention highly effective - programming languages for developing Big Data solutions.

Scala

A beautiful crossover of the object-oriented and functional programming paradigms, Scala is fast and robust, and a popular choice of language for many Big Data professionals. The fact that two of the most popular Big Data processing frameworks - Apache Spark and Apache Kafka - have been built on top of Scala tells you everything you need to know about its power. Scala runs on the JVM, which means code written in Scala can be easily used within a Java-based Big Data ecosystem. One significant factor that differentiates Scala from Java, though, is that Scala is a lot less verbose in comparison: hundreds of lines of confusing-looking Java code can often be written in fewer than 15 lines of Scala. One negative aspect of Scala, though, is its steep learning curve compared to languages like Go and Python, and this may put off beginners looking to use it.

Why use Scala for big data?
- Fast and robust
- Suitable for working with Big Data tools like Apache Spark for distributed Big Data processing
- JVM compliant, can be used in a Java-based ecosystem

Python

Python was declared one of the fastest growing programming languages in 2018, as per the recently held Stack Overflow Developer Survey. Its general-purpose nature means it can be used across a broad spectrum of use-cases, and Big Data programming is one major area of application. Many libraries for data analysis and manipulation which are increasingly being used in Big Data frameworks to clean and manipulate large chunks of data - such as pandas, NumPy, and SciPy - are Python-based. Not just that, most popular machine learning and deep learning frameworks, such as scikit-learn and TensorFlow, are also written in Python and are finding increasing application within the Big Data ecosystem.

One drawback of using Python - and a reason why it is not yet a first-class citizen when it comes to Big Data programming - is that it's slow. Although very easy to use, Big Data professionals have found systems built with languages such as Java or Scala faster and more robust than systems built with Python. However, Python makes up for this limitation with other qualities. As Python is primarily a scripting language, interactive coding and development of analytical solutions for Big Data becomes very easy. Python can integrate effortlessly with existing Big Data frameworks such as Apache Hadoop and Apache Spark, allowing you to perform predictive analytics at scale without any problem.

Why use Python for big data?
- General-purpose
- Rich libraries for data analysis and machine learning
- Easy to use
- Supports iterative development
- Rich integration with Big Data tools
- Interactive computing through Jupyter notebooks
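To show how little ceremony that Spark integration involves, here is a minimal PySpark sketch - assuming a local Spark installation and a hypothetical events.csv file - that reads a CSV into a distributed DataFrame and aggregates it across the cluster:

```python
from pyspark.sql import SparkSession

# Spin up (or connect to) a Spark session
spark = SparkSession.builder.appName("EventCounts").getOrCreate()

# Read a CSV file into a distributed DataFrame (file name is hypothetical)
events = spark.read.csv("events.csv", header=True, inferSchema=True)

# Aggregate across the cluster: events per country, most frequent first
(events.groupBy("country")
       .count()
       .orderBy("count", ascending=False)
       .show(10))

spark.stop()
```

The same few lines scale from a laptop to a cluster without modification, which is a large part of Python's appeal for interactive Big Data work.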
R

It won't come as a surprise to many that those who love statistics love R. The 'language of statistics', as it is popularly called, R is used to build data models for effective and accurate data analysis. Powered by a large repository of packages (CRAN, the Comprehensive R Archive Network), with R you have just about every tool needed to accomplish any task in Big Data processing - right from analysis to data visualization. R can be integrated seamlessly with Apache Hadoop and Apache Spark, among other popular frameworks, for Big Data processing and analytics. One issue with using R as a programming language for Big Data is that it is not very general-purpose: code written in R is not production-deployable and generally has to be translated into another language such as Python or Java. That said, if your goal is only to build statistical models for Big Data analytics, R is an option you should definitely consider.

Why use R for big data?
- Built for data science
- Support for Hadoop and Spark
- Strong statistical modeling and visualization capabilities
- Support for Jupyter notebooks

Java

And then, of course, there's good old Java. Some of the traditional Big Data frameworks, such as Apache Hadoop and the tools within its ecosystem, are Java-based and still in use today in many enterprises. Not to mention the fact that Java is the most stable and production-ready language among all the languages we have discussed so far. Using Java to develop your Big Data applications gives you access to a large ecosystem of tools and libraries for interoperability, monitoring, and much more, most of which have already been tried and tested.

One major drawback of Java is its verbosity. A task that can be written in barely 15-20 lines of code in Python or Scala may take hundreds of lines in Java, which can turn off many budding programmers. However, the introduction of lambda functions in Java 8 does make life quite a bit easier. Java also does not support iterative development, unlike newer languages like Python, and this is an area of focus for future Java releases. Despite these flaws, Java remains a strong contender when it comes to the preferred language for Big Data programming, because of its history and the continued reliance on traditional Big Data tools and frameworks.

Why use Java for big data?
- Traditional Big Data tools and frameworks are written in Java
- Stable and production-ready
- Large ecosystem of tried and tested tools and libraries

Go

Last but not least, there's Go - one of the fastest rising programming languages in recent times. Designed by a group of Google engineers who were frustrated with C++, we think Go deserves a place in this list simply because it powers so many tools used in Big Data infrastructure, including Kubernetes and Docker. Go is fast, easy to learn, and fairly easy to develop - and deploy - applications with. More importantly, as businesses look at building data analysis systems that operate at scale, Go-based systems are being used to integrate machine learning and parallel processing of data. It is also possible to interface other languages with Go-based systems with relative ease.

Why use Go for big data?
- Fast, easy to use
- Many tools used in Big Data infrastructure are Go-based
- Efficient distributed computing

There are a few other languages you might want to consider - Julia, SAS, and MATLAB being some major ones which are useful in their own right.
However, when compared to the languages we talked about above, we thought they fell a bit short in some aspects - be it speed, efficiency, ease of use, documentation, or community support, among other things.

Let's take a quick look at a comparison of all the languages discussed above. Note that we have used the ✓ symbol for the best possible language(s) in each category, to help you make an informed decision. This is just our view, and that's not to say that the other languages are any worse!

|                                  | Scala | Python | R | Java | Go |
| Speed                            |   ✓   |        |   |  ✓   | ✓  |
| Ease of use                      |       |   ✓    | ✓ |      | ✓  |
| Quick learning curve             |       |   ✓    |   |      | ✓  |
| Data analysis capability         |   ✓   |   ✓    | ✓ |      |    |
| General-purpose                  |   ✓   |   ✓    |   |  ✓   | ✓  |
| Big Data support                 |   ✓   |   ✓    | ✓ |  ✓   | ✓  |
| Interfacing with other languages |   ✓   |   ✓    |   |      | ✓  |
| Production-ready                 |   ✓   |        |   |  ✓   | ✓  |

So... which language should you choose?

To answer the question in short: it all depends on the use-case you want to develop. If your focus is hardcore data analysis involving a lot of statistical computing, R would be your go-to language. On the other hand, if you want to develop streaming applications for your Big Data, Scala can be a preferable choice. If you wish to use machine learning to leverage your Big Data and build predictive models, Python will come to your rescue. Lastly, if you plan to build Big Data solutions using just the traditionally available tools, Java is the language for you.

You also have the option of combining the power of two languages to get a more efficient and powerful solution. For example, you can train your machine learning model in Python and deploy it on Spark in a distributed mode. Ultimately, it all depends on how efficiently your solution can function, and more importantly, how fast and accurate it is. Which language do you prefer for crunching your Big Data? Do let us know!

5 types of deep transfer learning

Bhagyashree R
25 Nov 2018
5 min read
Transfer learning is a method of reusing a model or knowledge for another related task, and is sometimes also considered an extension of existing ML algorithms. Extensive research and work is being done on transfer learning and on understanding how knowledge can be transferred among tasks. The Neural Information Processing Systems (NIPS) 1995 workshop, Learning to Learn: Knowledge Consolidation and Transfer in Inductive Systems, is believed to have provided the initial motivation for research in this field. The literature on transfer learning has gone through many iterations, and the terms associated with it have been used loosely and often interchangeably. Hence, it is sometimes confusing to differentiate between transfer learning, domain adaptation, and multitask learning. Rest assured, these are all related and try to solve similar problems. In this article, we will look into the five types of deep transfer learning to get more clarity on how they differ from each other.

This article is an excerpt from a book written by Dipanjan Sarkar, Raghav Bali, and Tamoghna Ghosh titled Hands-On Transfer Learning with Python. The book covers deep learning and transfer learning in detail, focusing on real-world examples and research problems using TensorFlow, Keras, and the Python ecosystem, with hands-on examples.

#1 Domain adaptation

Domain adaptation is usually referred to in scenarios where the marginal probability distributions of the source and target domains differ, that is, P(Xs) ≠ P(Xt). There is an inherent shift or drift in the data distribution of the source and target domains that requires tweaks to transfer the learning. For instance, a corpus of movie reviews labeled as positive or negative would differ from a corpus of product-review sentiments: a classifier trained on movie-review sentiment would see a different distribution if used to classify product reviews. Domain adaptation techniques are utilized in transfer learning in these scenarios.

#2 Domain confusion

Different layers in a deep learning network capture different sets of features. We can utilize this fact to learn domain-invariant features and improve their transferability across domains. Instead of allowing the model to learn any representation, we nudge the representations of both domains to be as similar as possible. This can be achieved by applying certain preprocessing steps directly to the representations themselves; some of these have been discussed by Baochen Sun, Jiashi Feng, and Kate Saenko in their paper Return of Frustratingly Easy Domain Adaptation. This nudge toward representational similarity has also been presented by Ganin et al. in their paper Domain-Adversarial Training of Neural Networks. The basic idea behind this technique is to add another objective to the source model that encourages similarity by confusing the domain itself - hence, domain confusion.
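To make the idea concrete, here is a minimal TensorFlow/Keras sketch of the gradient-reversal trick at the heart of Ganin et al.'s approach. This is an illustrative simplification, not the paper's full training setup: the input shape, layer sizes, and loss choices are placeholder assumptions.

```python
import tensorflow as tf
from tensorflow import keras

@tf.custom_gradient
def reverse_gradient(x):
    # Identity on the forward pass; flips the sign of gradients on the
    # backward pass, so the feature extractor learns to *confuse* the
    # domain classifier rather than help it.
    def grad(dy):
        return -dy
    return tf.identity(x), grad

class GradientReversal(keras.layers.Layer):
    def call(self, inputs):
        return reverse_gradient(inputs)

# Shared feature extractor (placeholder sizes)
inputs = keras.Input(shape=(784,))
features = keras.layers.Dense(256, activation="relu")(inputs)

# Task head: trained normally on labeled source data
task_output = keras.layers.Dense(10, activation="softmax", name="task")(features)

# Domain head: sees gradient-reversed features, so while it learns to
# tell source from target, the shared features are pushed to become
# domain-invariant
domain_output = keras.layers.Dense(1, activation="sigmoid", name="domain")(
    GradientReversal()(features)
)

model = keras.Model(inputs, [task_output, domain_output])
model.compile(
    optimizer="adam",
    loss={"task": "sparse_categorical_crossentropy",
          "domain": "binary_crossentropy"},
)
```

The two heads pull the shared representation in opposite directions: good task accuracy on the source domain, but indistinguishable source and target feature distributions.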
#3 Multitask learning

Multitask learning is a slightly different flavor of the transfer learning world. In multitask learning, several tasks are learned simultaneously, without distinction between source and target. Here, the learner receives information about multiple tasks at once, whereas in transfer learning the learner initially has no idea about the target task.

[Figure: Multitask learning - the learner receives information from all tasks simultaneously]

#4 One-shot learning

Deep learning systems are data-hungry by nature: they need many training examples to learn the weights. This is one of the limiting aspects of deep neural networks, though such is not the case with human learning. For instance, once a child is shown what an apple looks like, they can easily identify a different variety of apple from one or a few examples; this is not the case with ML and deep learning algorithms. One-shot learning is a variant of transfer learning where we try to infer the required output based on just one or a few training examples. It is essentially helpful in real-world scenarios where it is not possible to have labeled data for every possible class (in a classification task), and in scenarios where new classes are added often. The landmark paper by Fei-Fei and co-authors, One Shot Learning of Object Categories, is believed to have coined the term one-shot learning and initiated research in this subfield. The paper presented a variation on a Bayesian framework for representation learning for object categorization; this approach has since been improved upon and applied using deep learning systems.

#5 Zero-shot learning

Zero-shot learning is another extreme variant of transfer learning, which relies on no labeled examples to learn a task. This might sound unbelievable, especially when learning from examples is what most supervised learning is about. Zero-data learning, or zero-shot learning, methods make clever adjustments during the training stage itself to exploit additional information in order to understand unseen data. In their book Deep Learning, Goodfellow and co-authors present zero-shot learning as a scenario where three variables are learned: the traditional input variable x, the traditional output variable y, and an additional random variable T that describes the task. The model is thus trained to learn the conditional probability distribution P(y | x, T). Zero-shot learning comes in handy in scenarios such as machine translation, where we may not even have labels in the target language.

In this article we learned about the five types of deep transfer learning: domain adaptation, domain confusion, multitask learning, one-shot learning, and zero-shot learning. If you found this post useful, do check out the book Hands-On Transfer Learning with Python, which covers deep learning and transfer learning in detail, focusing on real-world examples and research problems using TensorFlow, Keras, and the Python ecosystem.

CMU students propose a competitive reinforcement learning approach based on A3C using visual transfer between Atari games
What is Meta Learning? Is the machine learning process similar to how humans learn?


DevOps engineering and full-stack development – 2 sides of the same agile coin

Richard Gall
28 Jul 2015
5 min read
Two of the most talked-about and on-trend roles in tech dominated our Skill Up survey - DevOps engineers and full-stack developers. Even before we started exploring our data, we knew that both would feature heavily. Given the amount of time spent online arguing about DevOps and the merits and drawbacks of full-stack development, it's interesting to see exactly what it means to be a DevOps engineer or a full-stack developer. From salary to tool use, both our Web Development and SysAdmin and Security Salary and Skills Reports offer an insight into the professional lives of the people performing these roles every day.

The similarities between DevOps engineering and full-stack development

The similarities between the two roles are striking. Both DevOps engineering and full-stack development are having a considerable impact on the way technology is used and understood within organizations and businesses - which makes them particularly valuable. In SMEs, for example, DevOps engineers command almost the same amount of money as in enterprise. Considering the current economic climate, that is a clear signal of the value of DevOps practices in environments where flexibility and the ability to adapt to changing demands and expectations are crucial to growth.

Full-stack developers also command the highest salaries in many industries. In consultancy, for example, full-stack developers earn significantly more than any other web development role. While this could suggest that organizations aren't yet willing to invest in (or simply don't need) in-house full-stack developers, it highlights that they are nevertheless willing to spend money on individuals with full-stack knowledge who are capable of delivering cutting-edge insight. However, just as we saw Cloud consultancies dominate the tech consultancy market a few years ago, over time it's likely that full-stack development will become more and more established as a standard.

DevOps engineers and full-stack developers share the same philosophical germ. They are symptoms of a growing business demand for greater agility and flexibility, and they hint at a trend towards greater generalization in the skillsets of technical professionals.

"part of the thrill of #devops to me is how there's no true agreement about what it is. it's like watching LOST all over again" - jon devops hendren (@devops), May 18, 2015

Full-stack developers are using DevOps tools

I've always seen the two roles as manifestations of similar ideas in different technical areas. However, when you look at the data we've collected in our survey, alongside some wider research, the relationship between the DevOps engineer and the full-stack developer may well be more than purely conceptual. 'Full-stack' and 'DevOps' are both terms that blur the line between developer and engineer, and our data points to an intriguing form of cross-pollination: web developers reported using technologies more commonly associated with deployment and automation. Docker and Vagrant were the most notable, highlighting the impact of containerization and virtualization on web development, but we also found a number of references to the Microsoft automation tool PowerShell - a distinctly DevOps-esque tool if ever there was one. Perhaps there's a danger of overstating my point - surely we shouldn't be surprised if web developers are using these tools. It's not that strange, right?
Maybe - but the fact that tools such as these are being used by web developers in their day-to-day work suggests that they are no longer simply expected to develop: they also need to deploy and configure their projects. Indeed, it's worth noting that across all our web development respondents, a large number plan on learning Docker over the next 12 months.

DevOps engineers use a huge range of tools

DevOps engineers were even more eclectic in their tool usage than full-stack developers. Python is the language of choice and Puppet the go-to configuration management tool, but web tools such as JavaScript and PHP are also being used. References to Flask, for example, the Python microframework, emphasise the way in which DevOps engineers keep an eye on web development while they're automating your infrastructure.

Taken alone, these responses might not fully evidence the relationship between DevOps engineers and full-stack developers. However, there are jobs out there asking for a combination of both skillsets. One, posted by a recruiter working for a nameless 'creative media house' in London, was looking for someone to become 'a key member of multi-party cloud research projects, helping to bring a microservices-based video automation system to life, integrate development and developed systems into onside and global infrastructure'. The tools being asked for were very varied indeed: from a high-level language such as JavaScript, to scripting languages such as Bash, Python, and Perl, to continuous integration tools, configuration management tools, and containerization technologies. Whoever eventually gets the job certainly deserves to be called a polyglot.

Blurring the line between full-stack and DevOps

A further indication of the blurred line between engineers and developers can be found in this article from computing.co.uk. It's an interesting tale of how working practices develop according to necessity, and how methodologies and ideas interact with the practical details of a given situation. It tells the story of how the Washington Post went about building its submission platform, and how the way in which the project was resourced and managed changed according to certain pressures, both internal and external. The title might actually be misleading - if you read it, it's not so much that DevOps necessitates full-stack development, more that each thing grows out of the next. It might even be said that the reverse is true: that full-stack development necessitates DevOps thinking.

The relationship between DevOps and full-stack development gives a real indication of the state of the tech world in 2015. Within a tech landscape of increasing complexity and cross-pollination, there are going to be opportunities for developers and engineers to significantly increase their value as technical professionals. It's simply a question of learning more, and of being open to new challenges and ideas about how to work effectively. It probably won't be easy, but it might just be a fun journey.