
How-To Tutorials

7019 Articles

François Chollet, creator of Keras, on TensorFlow 2.0 and Keras integration, tricky design decisions in deep learning, and more

Sugandha Lahoti
10 Dec 2019
6 min read
TensorFlow 2.0 was made available in October. One of the major highlights of this release was the integration of Keras into TensorFlow. Keras is an open-source deep learning library designed to enable fast, user-friendly experimentation with deep neural networks. It serves as an interface to several deep learning libraries, the most popular of which is TensorFlow, and it was merged into the main TensorFlow codebase in TensorFlow 2.0.

In September, Lex Fridman, a research scientist at MIT popularly known for his podcasts, spoke to François Chollet, the author of Keras, about Keras, deep learning, and the progress of AI. In this post, we highlight François' views on the Keras and TensorFlow 2.0 integration, the early days of Keras, and the importance of design decisions for building deep learning models. We recommend the full podcast, available on Fridman's YouTube channel.

Want to build neural networks? If you want to build multiple neural network architectures such as CNNs, RNNs, and LSTMs in Keras, we recommend you read Neural Networks with Keras Cookbook by V Kishore Ayyadevara. This book features over 70 recipes, such as object detection and classification, building self-driving car applications, understanding data encoding for image, text, and recommender systems, and more.

Early days of Keras and how it was integrated into TensorFlow

"I started working on Keras in 2015," says Chollet. At that time Caffe, a C++ library popular for computer vision projects, was the dominant deep learning framework. Chollet was interested in recurrent neural networks (RNNs), a niche topic at the time. Back then there was no good, reusable open-source implementation of RNNs and LSTMs, so he decided to build his own, and that is how Keras started. "It was going to be mostly around RNNs and LSTMs and the models would be defined by Python code, which was going against the mainstream," he adds.

Later, he joined Google's research team working on image classification. There he was exposed to an early internal version of TensorFlow, which was in effect an improved version of Theano. When TensorFlow was released in 2015, he refactored Keras to run on it, abstracting all the backend functionality into one module so that the same codebase could run on top of multiple backends. A year later, the TensorFlow team requested that he integrate the Keras API into TensorFlow more tightly. They built a temporary TensorFlow-only version of Keras that lived in tf.contrib for a while, and finally moved it into TensorFlow core in 2017.

TensorFlow 2.0 gives both usability and flexibility to Keras

Keras has always been a very easy-to-use, high-level interface for deep learning, but it lacked flexibility: the Keras framework was often not the optimal way to do things compared to writing everything from scratch. TensorFlow 2.0 offers both usability and flexibility. You have the usability of the high-level interface along with the flexibility of the lower-level interface, a spectrum of workflows where you can trade usability against flexibility depending on your needs. It is very flexible, easy to debug, and powerful, but also integrates seamlessly with higher-level features up to classic Keras workflows.
“You have the same framework offering the same set of APIs that enable a spectrum of workflows that are more or less high level and are suitable for, you know, profiles ranging from researchers to data scientists and everything in between,” says Chollet.

Design decisions are especially important while integrating Keras with TensorFlow

“Making design decisions is as important as writing code,” claims Chollet. A lot of thought and care goes into these decisions, taking into account the diverse user base of TensorFlow: small-scale production users, large-scale production users, startups, and researchers. Chollet says, “A lot of the time I spend at Google is actually discussing design. This includes writing design docs, participating in design review meetings, etc.” Making a design decision is about satisfying a set of constraints, but also trying to do so in the simplest way possible, because that is what can be maintained and expanded in the future. You want to design APIs that are modular and hierarchical, with an API surface that is as small as possible, and you want this modular, hierarchical architecture to reflect the way domain experts think about the problem.

On the future of Keras and TensorFlow

What's going to happen in TensorFlow 3.0? Chollet says he is really excited about developing even higher-level APIs with Keras, and about hyperparameter tuning and automated machine learning. He adds, “The future is not just, you know, defining a model; it's more like an automatic model.”

Limits of deep learning as function approximators that try to generalize from data

Chollet emphasizes that “neural networks don't generalize well, humans do.” Deep learning models are huge parametric, differentiable models that map an input space to an output space, trained with gradient descent. They learn a continuous geometric morphing from an input vector space to an output space. Because this is done point by point, a deep neural network can only make sense of points in space that are very close to things it has already seen in its training data; at best it can interpolate across points. That means that in order to train your network you need a dense, almost point-by-point sampling of the input space, which can be very expensive if you are dealing with complex real-world problems like autonomous driving or robotics.

Contrast this with very simple rule-based algorithms: a symbolic rule can apply to a very large set of inputs because it is abstract rather than obtained by a point-by-point mapping. Deep learning is really point-by-point geometric morphing, while abstract rules can generalize much better; the future likely belongs to approaches that can combine the two.

Chollet also talks about self-improving Artificial General Intelligence, concerns about short-term and long-term threats in AI, program synthesis, good tests for intelligence, and more. The full podcast is available on Lex's YouTube channel. If you want to implement neural network architectures in Keras for varied real-world applications, you may go through our book Neural Networks with Keras Cookbook.

TensorFlow.js contributor Kai Sasaki on how TensorFlow.js eases web-based machine learning application development
10 key announcements from Microsoft Ignite 2019 you should know about
What does a data science team look like?
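To make the “spectrum of workflows” Chollet describes concrete, here is a minimal sketch (assuming TensorFlow 2.x; the layer sizes are arbitrary placeholders) of the same classifier written twice: once with the high-usability Sequential API, and once as a subclassed model, the more flexible style researchers tend to prefer.

```python
import tensorflow as tf

# High-usability end of the spectrum: the declarative Sequential API.
sequential_model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
sequential_model.compile(optimizer="adam",
                         loss="sparse_categorical_crossentropy")

# High-flexibility end: subclassing, with arbitrary Python in call().
class Classifier(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.hidden = tf.keras.layers.Dense(64, activation="relu")
        self.out = tf.keras.layers.Dense(10, activation="softmax")

    def call(self, inputs):
        return self.out(self.hidden(inputs))

subclassed_model = Classifier()
print(subclassed_model(tf.random.normal((1, 784))).shape)  # (1, 10)
```

Both models can be trained with the same compile()/fit() machinery or with a hand-written gradient-descent loop, which is exactly the usability/flexibility trade-off discussed above.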


How to perform exception handling in Python with ‘try, catch and finally’

Guest Contributor
10 Dec 2019
9 min read
An integral part of using Python is the art of handling exceptions. There are primarily two types of exceptions: built-in exceptions and user-defined exceptions. When an error occurs, the error-handling approach is to save the state of execution at the moment of the error and interrupt the normal program flow to execute a special function or block of code known as an exception handler. There are many kinds of errors, such as 'division by zero' or 'file open error', where a handler needs to fix the issue so that the program can continue based on the previously saved state.

Exception handling in Python is much like it is in Java: suspect code is embedded in a try block, and where Java uses catch clauses to catch exceptions, Python uses clauses that begin with except. Custom exceptions are also possible in Python by using the raise statement, which forces a specified exception to occur.

Reason to use exceptions

Errors are always to be expected while writing a program in Python, which calls for a backup mechanism. Such a mechanism handles any encountered errors; without one, the program may crash completely. The reason to equip a Python program with an exception mechanism is to define a backup plan in case a possible error situation erupts during execution.

Catch exceptions in Python

The try statement is used for handling exceptions in Python. A try clause wraps a particular, critical operation that may raise an exception, and the code that handles the exception is written in the except clause. What to do once the exception is caught is up to the programmer. The program described here loops until the user enters an integer value having a valid reciprocal (the original code and output screenshots did not survive; a reconstruction appears at the end of this section). The part of the code that can trigger an exception is contained inside the try block. If no exception is raised, the normal flow of execution continues and the except block is skipped; if an exception is raised, it is caught by the except block. The exception can be named by using the exc_info() function inside the sys module: any unexpected value like 'a' or '1.3' triggers a ValueError, an input of '0' leads to a ZeroDivisionError, and in either case the user is asked to make another attempt.

Exception handling in Python: try, except and finally

Code that may raise exceptions is placed inside the try statement block, and the code dedicated to handling those exceptions is placed within the except block:

try:
   # operational/suspicious code
except SomeException:
   # code to handle the exception

How do they work in Python: the try block statements are run first, checking whether any exception occurs within the code. If an exception is raised, the except block (containing the exception-handling statements) is executed after the try block. When the exception matches the predefined name given in the except clause ('SomeException' above), that block handles it and enables the program to continue.
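Here is the reciprocal example promised under "Catch exceptions in Python" above. Since the article's original screenshots did not survive, this is a minimal reconstruction matching the description (variable names are guesses):

```python
import sys

while True:
    try:
        entry = input("Enter an integer with a valid reciprocal: ")
        reciprocal = 1 / int(entry)
    except:
        # sys.exc_info()[0] names the exception class that was raised:
        # ValueError for inputs like 'a' or '1.3', ZeroDivisionError for '0'.
        print("Oops!", sys.exc_info()[0], "occurred. Please try again.")
    else:
        print("The reciprocal of", entry, "is", reciprocal)
        break
```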
If no handler in the except block corresponds to the exception that was raised, program execution halts with an error describing it.

Defining except without the exception

It is possible to use the except statement without naming any exception, regardless of which programming language is used. A bare try-except clause like this is capable of handling all possible types of exceptions, but it also keeps you ignorant of whether and which exception was raised in the first place, so it should be used with care. The syntax looks like this:

try:
   # do your operations here
except:
   # if there is an exception, execute this block
else:
   # if there is no exception, execute this block

Here is an example of catching an exception around a file operation. This is useful when the intention is to read a file that may not exist:

try:
   fp = open('example.txt', 'r')
except:
   print('File is not found')

This example deals with opening 'example.txt'. When the called-upon file is not found or does not exist, the code executes the except block, printing the error message 'File is not found'.

Defining an except clause for multiple exceptions

It is possible to deal with multiple exceptions in a single try statement by specifying different exception handlers. It is also recommended, as a matter of good programming practice, to name the particular exceptions in the code. A convenient way to do this is to define a tuple of the expected exceptions within a single except clause: if the interpreter finds a matching exception, the code written under that except clause is executed.

try:
   # do something
except (Exception1, Exception2, ..., ExceptionN):
   # handle any of the listed exceptions
   pass
except:
   # handle all other exceptions

An else block can be combined with this form as well:

try:
   # do your operations here
except (Exception1, Exception2, ..., ExceptionN):
   # if there is an exception from the given list, execute this block
else:
   # if there is no exception, execute this block

Exception handling in Python using the try-finally clause

Apart from combining try and except blocks, it is also a good idea to put together try and finally blocks. Here, the finally block carries all the statements that must be executed regardless of whether an exception is raised in the try block. One benefit of this is that it helps in releasing external resources and clearing up caches. Here is the pseudo-code for a try...finally clause:

try:
   # perform operations
finally:
   # these statements are always executed
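Pulling these pieces together, here is a short runnable sketch (file names and messages are illustrative) that combines a tuple of specific exceptions, an else block, and a finally block:

```python
def show_first_line(path):
    try:
        f = open(path, encoding='utf-8')
    except (FileNotFoundError, PermissionError) as err:
        # A tuple handles several related exceptions in one clause.
        print('Could not open', path, '-', err)
    else:
        # Runs only if open() raised no exception.
        try:
            print(f.readline().rstrip())
        finally:
            f.close()  # always runs, even if readline() raises

show_first_line('example.txt')
show_first_line('missing.txt')
```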
Defining exceptions in a try...finally block

The example given below closes the file once all the operations are completed:

try:
   fp = open("example.txt", 'r')
   # file operations
finally:
   fp.close()

The finally clause of the try statement is optional, but under any given circumstances its code is executed, which makes it the usual place to release additional external resources. It is not new for developers to be connected to a remote data centre over a network, or to be working with a file alongside a graphical user interface; such situations require the used resources to be cleaned up. Even when the resources yield successful results, such post-execution steps are always considered good practice. Actions like shutting down the GUI, closing a file, or disconnecting from a network are written in the finally block to assure their execution. The file operations example below illustrates this very well:

try:
   f = open("test.txt", encoding='utf-8')
   # perform file operations
finally:
   f.close()

Or, in simpler terms:

try:
   # do your operations here;
   # due to an exception, some of this may be skipped
finally:
   # this is always executed

Constructing such a block is a better way to ensure the file is closed even if an exception has taken place. Note that the else clause cannot be used with a finally clause unless an except clause is also present.

Understanding user-defined exceptions

Python users can create their own exceptions by deriving classes from the built-in standard exceptions. There are instances where displaying specific information to users is crucial, especially upon catching the exception; in such cases it is best to create a class subclassed from RuntimeError. The try block then raises the user-defined exception, which is caught in the except block, where an instance of the class Networkerror is bound to the variable e. Below is the syntax:

class Networkerror(RuntimeError):
   def __init__(self, arg):
      self.args = (arg,)

Once the class is defined, raising the exception is possible as follows:

try:
   raise Networkerror("Bad hostname")
except Networkerror as e:
   print(e.args)

Key points to remember

Note that an exception is an error that occurs while executing the program; such events occur relatively infrequently. As in the examples above, the most common exceptions are 'division by zero', 'attempt to access a non-existent file', and 'adding two non-compatible types'. Put a try statement around code where you are not sure whether an exception will occur, and specify an else block alongside the try-except statement to run when no exception is raised in the try block.

Author bio

Shahid Mansuri co-founded Peerbits, one of the leading software development companies in the USA, in 2011; the company provides Python development services. Under his leadership, Peerbits used Python on a project to embed reports and research on a platform that gave every user access to both a freely available dashboard and an exclusively available one.
His visionary leadership and flamboyant management style have yielded fruitful results for the company. He believes in sharing his strong knowledge base, with a keen focus on entrepreneurship and business.

Introducing Spleeter, a TensorFlow-based Python library that extracts voice and sound from any music track
Fake Python libraries removed from PyPI when caught stealing SSH and GPG keys, reports ZDNet
There's more to learning programming than just writing code


OpenJDK Project Valhalla’s head shares how they plan to enhance the Java language and JVM with value types, and more

Bhagyashree R
10 Dec 2019
4 min read
Announced in 2014, Project Valhalla is an experimental OpenJDK project to bring major new language features to Java 10 and beyond. It primarily focuses on enabling developers to create and utilize value types, or non-reference values. Last week, the project's head Brian Goetz shared the goals, motivation, current status, and other details of the project in a set of documents called "State of Valhalla".

Goetz shared that in the span of five years, the team has come up with five distinct prototypes of the project. Sharing the current state of the project, he wrote, "We believe we are now at the point where we have a clear and coherent path to enhance the Java language and virtual machine with value types, have them interoperate cleanly with existing generics, and have a compatible path for migrating our existing value-based classes to inline classes and our existing generic classes to specialized generics."

The motivation behind Project Valhalla

One of the main motivations behind Project Valhalla was adapting the Java language and runtime to modern hardware. It has been almost 25 years since Java was introduced, and a lot has changed since then. At that time, the cost of a memory fetch and an arithmetic operation was roughly the same, but this is not the case now: memory fetch operations have become 200 to 1,000 times more expensive than arithmetic operations.

Java is often considered a pointer-heavy language, as most Java data structures in an application are objects, or reference types. This is why Project Valhalla aims to introduce value types, to get rid of that overhead both in memory and in computation. Goetz wrote, "We aim to give developers the control to match data layouts with the performance model of today's hardware, providing Java developers with an easier path to flat (cache-efficient) and dense (memory-efficient) data layouts without compromising abstraction or type safety."

The language model for incorporating inline types

Goetz further moved on to talk about how the team is accommodating inline classes in the language's type system. He wrote, "The motto for inline classes is: codes like a class, works like an int; the latter part of this motto means that inline types should align with the behaviors of primitive types outlined so far." This means that inline classes will enable developers to write types that behave more like Java's built-in primitive types.

Inline classes are similar to current classes in the sense that they can have fields, methods, constructors, and so on. However, the difference Project Valhalla brings is that instances of inline classes, or inline objects, do not have identity, the property that distinguishes them from other objects. This is why operations that are identity-sensitive, like synchronization, are not possible on inline objects. There are a bunch of other differences between inline and identity classes. Goetz wrote, "Object identity serves, among other things, to enable mutability and layout polymorphism; by giving up identity, inline classes must give up these things. Accordingly, inline classes are implicitly final, cannot extend any other class besides Object...and their fields are implicitly final."

In Project Valhalla, types are divided into inline types and reference types, where inline types include primitives, and reference types are those that are not inline types, such as declared identity classes, declared interfaces, array types, etc.
He further listed a few migration scenarios, including value-based classes, primitives, and specialized generics. Check out Goetz's posts to learn more about Project Valhalla in detail.

OpenJDK Project Valhalla is ready for developers working on building data structures or compiler runtime libraries
OpenJDK Project Valhalla's LW2 early access builds are now available for you to test
OpenJDK Project Valhalla is now in Phase III


W3C (World Wide Web Consortium) declares WebAssembly 1.0 as an official web standard

Sugandha Lahoti
09 Dec 2019
3 min read
Last Thursday, the World Wide Web Consortium declared WebAssembly 1.0 an official W3C Recommendation. With this announcement, WebAssembly becomes the fourth language to run natively in browsers, following HTML, CSS, and JavaScript.

"The arrival of WebAssembly expands the range of applications that can be achieved by simply using Open Web Platform technologies. In a world where machine learning and Artificial Intelligence become more and more common, it is important to enable high-performance applications on the Web, without compromising the safety of the users," declared Philippe Le Hégaret, W3C Project Lead, in the official press release.

WebAssembly has been the talk of the town for providing a safe, portable, low-level code format designed for efficient execution and compact representation. According to the W3C, WebAssembly enables the Web platform to perform more efficient execution of computationally intensive algorithms, which in turn makes it practical to deliver whole new classes of user experience on the Web and elsewhere. Because it is a platform-independent execution environment, it can also be used on any other computer platform.

W3C has published three WebAssembly specifications as W3C Recommendations:

- The WebAssembly Core Specification defines a low-level virtual machine that closely mimics the functionality of the many microprocessors upon which it is run.
- The WebAssembly Web API defines a Promise-based interface for requesting and executing a .wasm resource.
- The WebAssembly JavaScript Interface provides a JavaScript API for invoking WebAssembly functions and passing parameters to them.

W3C is also working on a range of features for future versions of the standard. These include:

- Threading: threads provide the benefits of shared-memory multi-threading and atomic memory accesses.
- Fixed-width SIMD: vector operations that execute loops in parallel.
- Reference types: allow WebAssembly code to directly reference host objects.
- Tail calls: enable calling functions without using extra stack space.
- ECMAScript module integration: interact with JavaScript by loading WebAssembly executables as ES6 modules.

There are many other longer-term projects that W3C is working on, many of them aimed at improving the usability and availability of WebAssembly, for example garbage collection, debugging interfaces, and the WebAssembly System Interface (WASI). In other news, Mozilla recently partnered with Fastly, Intel, and Red Hat to form the Bytecode Alliance, to build a secure-by-default future for WebAssembly and to take it beyond the browser.

Introducing Woz, a Progressive WebAssembly Application (PWA + WebAssembly) generator written entirely in Rust
You can now use WebAssembly from .NET with Wasmtime!
4 predictions by Richard Feldman on the future of the web: TypeScript, WebAssembly, and more


Microsoft technology evangelist Matthew Weston on how Microsoft PowerApps is democratizing app development [Interview]

Bhagyashree R
04 Dec 2019
10 min read
In recent years, we have seen a wave of app-building tools that enable users to be creators. One such powerful tool is Microsoft PowerApps, a full-featured low-code/no-code platform. The platform aims to empower "all developers" to quickly create business web and mobile apps that fit best with their use cases; Microsoft defines "all developers" as citizen developers, IT developers, and pro developers.

To learn what exactly PowerApps is and what its use cases are, we sat down with Matthew Weston, the author of Learn Microsoft PowerApps. Weston was kind enough to share a few tips for creating user-friendly and attractive UI/UX. He also shared his opinion on the latest updates in the Power Platform, including AI Builder, Portals, and more.

Further learning: Weston's book, Learn Microsoft PowerApps, will guide you in creating powerful and productive apps that will help you transform old and inefficient processes and workflows in your organization. In the book, you'll explore a variety of built-in templates and understand the key differences between types of PowerApps, such as canvas and model-driven apps. You'll learn how to generate and integrate apps directly with SharePoint, gain an understanding of PowerApps' key components such as connectors and formulas, and much more.

Microsoft PowerApps: what it is and why it is important

Following InfoPath, a tool for creating user-designed form templates without coding, came Microsoft PowerApps. Introduced in 2016, Microsoft PowerApps aims to democratize app development by allowing users to build cross-device custom business apps without writing code, while providing an extensible platform to professional developers. Weston explains, "For me, PowerApps is a solution to a business problem. It is designed to be picked up by business users and developers alike to produce apps that can add a lot of business value without incurring large development costs associated with completely custom developed solutions."

When we asked how his journey with PowerApps started, he said, "I personally ended up developing PowerApps as, for me, it was an alternative to using InfoPath when I was developing for SharePoint. From there I started to explore more and more of its capabilities and began to identify more and more in terms of use cases for the application. It meant that I didn't have to worry about controls, authentication, or integration with other areas of Office 365."

Microsoft PowerApps is a building block of the Power Platform, a collection of products for business intelligence, app development, and app connectivity. Along with PowerApps, the platform includes Power BI and Power Automate (earlier known as Flow). These sit on top of the Common Data Service (CDS), which stores all of your structured business data. On top of the CDS, you have over 280 data connectors for connecting to popular services and on-premise data sources such as SharePoint, SQL Server, Office 365, and Salesforce, among others. All these tools are designed to work together seamlessly: you can analyze your data with Power BI, then act on the data through web and mobile experiences with PowerApps, or automate tasks in the background using Power Automate.

Explaining its importance, Weston said, "Like all things within Office 365, using a single application allows you to find a good solution. Combining multiple applications allows you to find a great solution, and PowerApps fits into that thought process.
Within the Power Platform, you can really use PowerApps and Power Automate together in a way that provides a high-quality user interface with a powerful automation platform doing the heavy processing."

"Businesses should invest in PowerApps in order to maximize the subscription costs which are already being paid as a result of holding Office 365 licenses. It can be used to build forms and apps which will automatically integrate with the rest of Office 365, along with having the mobile app to promote flexible working," he adds.

What skills do you need to build with PowerApps?

PowerApps is said to be at the forefront of the "low code, more power" revolution. This essentially means that anyone can build a business-centric application; it doesn't matter whether you are in finance, HR, or IT. You can use your existing knowledge of PowerPoint or Excel concepts and formulas to build business apps. Weston shared his perspective: "I believe that PowerApps empowers every user to be able to create an app, even if it is only quite basic. Most users possess the ability to write formulas within Microsoft Excel, and those same skills can be transferred across to PowerApps to write formulas and create logic."

He adds, "Whilst the above is true, I do find that you need to have a logical mind in order to really create rich apps using the Power Platform, and this often comes more naturally to developers. Saying that, I have met people who are NOT developers, and have actually taken to the platform in a much better way."

Since PowerApps is a low-code platform, you might be wondering what value it holds for developers. We asked Weston what he thinks about developers investing time in learning PowerApps when there are great cross-platform development frameworks like React Native or Ionic. Weston said, "For developers, and I am from a development background, PowerApps provides a quick way to be able to create apps for your users without having to write code. You can use formulae and controls which have already been created for you, reducing the time that you need to invest in development, and instead you can spend the time creating solutions to business problems."

Along with enabling developers to rapidly publish apps at scale, PowerApps does a lot of heavy lifting around app architecture and gives you real-time feedback as you make changes to your app. This year, Microsoft also released the PowerApps component framework, which enables developers to code custom controls and use them in PowerApps. There is also tooling to use alongside the component framework for a better end-to-end development experience: the new PowerApps CLI and Visual Studio plugins.

On the new PowerApps capabilities: AI Builder, Portals, and more

At this year's Microsoft Ignite, the Power Platform team announced almost 400 improvements and new features to the Power Platform. The new AI Builder allows you to build and train your own AI model or choose from pre-built models. Another feature is PowerApps Portals, with which organizations can create portals and share them with both internal and external users. There are also UI flows, currently a preview feature, which provide Robotic Process Automation (RPA) capabilities to Power Automate.

Weston said, "I love to use the Power Platform for integration tasks as much as creating front end interfaces, especially around Power Automate.
I have created automations which have taken two systems, which won't natively talk to each other, and process data back and forth without writing a single piece of code."

Weston is already excited to try UI flows: "I'm really looking forward to getting my hands on UI flows and seeing what I can create with those, especially for interacting with solutions which don't have the usual web service interfaces available to them." However, he does have some reservations regarding RPA: "My only reservation about this is that Robotic Process Automation, which this is effectively doing, is usually a deep specialism, so I'm keen to understand how this fits into the whole citizen developer concept," he shared.

Some tips for building secure, user-friendly, and optimized PowerApps

When it comes to application development, security and responsiveness are always at the top of the priority list. Weston suggests, "Security is always my number one concern, as that drives my choice for the data source as well as how I'm going to structure the data once my selection has been made. It is always harder to consider security at the end, when you try to shoehorn a security model onto an app and data structure which isn't necessarily going to accept it."

"The key thing is to never take your data for granted: secure everything at the data layer first and foremost, and then carry those considerations through into the front end. Office 365 employs the concept of security trimming, whereby if you don't have access to something then you don't see it. This will automatically apply to the data, but the same should also be done to the screens. If a user shouldn't be seeing the admin screen, don't even show them a link to get there in the first place," he adds.

On responsiveness, he suggests, "...if an app is slow to respond then the immediate perception about the app is negative. There are various things that can be done if working directly with the data source, such as pulling data into a collection so that data interactions take place locally and then sync in the background."

After security, another important aspect of building an app is a user-friendly and attractive UI and UX. Sharing a few tips for enhancing an app's UI and UX, Weston said, "When creating your user interface, try to find the balance between images and textual information. Quite often I see PowerApps created which are just line after line of text, and that immediately switches me off." He adds, "It could be the best app in the world that does everything I'd ever want, but if it doesn't grip me visually then I probably won't be using it. Likewise, I've seen apps go the other way and have far too many images and not enough text. It's a fine balance, and one that's quite hard."

Sharing a few ways to optimize app performance, Weston said, "Some basic things which I normally do is to load data on the OnVisible event for my first screen rather than loading everything on the App OnStart. This means that the app can load and can then be pulling in data once something is on the screen. This is generally the way that most websites work now, they just get something on the screen to keep the user happy and then pull data in."

"Also, give consideration to the number of calls back and forth to the data source as this will have an impact on the app performance. Only read and write when I really need to, and if I can, I pull as much of the data into collections as possible so that I have that temporary cache to work with," he added.
About the author

Matthew Weston is an Office 365 and SharePoint consultant based in the Midlands in the United Kingdom. Matthew is a passionate evangelist of Microsoft technology, blogging and presenting about Office 365 and SharePoint at every opportunity. He has been creating and developing with PowerApps since it became generally available at the end of 2016. Usually, Matthew can be seen presenting within the community about PowerApps and Flow at SharePoint Saturdays, PowerApps and Flow user groups, and Office 365 and SharePoint user groups. Matthew is a Microsoft Certified Trainer (MCT) and a Microsoft Certified Solutions Expert (MCSE) in Productivity.

Check out Weston's latest book, Learn Microsoft PowerApps, on PacktPub. The book is a step-by-step guide that will help you create lightweight business mobile apps with PowerApps. Along with exploring the different built-in templates and types of PowerApps, you will generate and integrate apps directly with SharePoint. The book will further help you understand how PowerApps can use several Microsoft Power Automate and Azure functionalities to improve your applications. Follow Matthew Weston on Twitter: @MattWeston365.

Denys Vuika on building secure and performant Electron apps, and more
Founder & CEO of Odoo, Fabien Pinckaers discusses the new Odoo 13 framework
Why become an advanced Salesforce administrator: Enrico Murru, Salesforce MVP, Solution and Technical Architect [Interview]


AWS re:Invent 2019 Day 2 highlights: AWS Wavelength, Provisioned Concurrency for Lambda functions, and more!

Savia Lobo
04 Dec 2019
6 min read
Day 2 of the ongoing AWS re:Invent 2019 conference in Las Vegas included a lot of new announcements, such as AWS Wavelength, Provisioned Concurrency for Lambda functions, Amazon SageMaker Autopilot, and much more. The Day 1 highlights included a lot of exciting releases too, such as the preview of AWS' new quantum service, Braket, and Amazon SageMaker Operators for Kubernetes, among others.

Day Two announcements at AWS re:Invent 2019

AWS Wavelength to deliver ultra-low latency applications for 5G devices

With AWS Wavelength, developers can build applications that deliver single-digit-millisecond latencies to mobile devices and end users. AWS developers can deploy their applications to Wavelength Zones, AWS infrastructure deployments that embed AWS compute and storage services within telecommunications providers' datacenters at the edge of 5G networks, and seamlessly access the breadth of AWS services in the region. This enables developers to deliver applications that require single-digit-millisecond latencies, such as game and live video streaming, machine learning inference at the edge, and augmented and virtual reality (AR/VR).

AWS Wavelength brings AWS services to the edge of the 5G network, minimizing the latency to connect to an application from a mobile device. Application traffic can reach application servers running in Wavelength Zones without leaving the mobile provider's network. This avoids the extra network hops to the Internet that can result in latencies of more than 100 milliseconds, which prevent customers from taking full advantage of the bandwidth and latency advancements of 5G. To know more about AWS Wavelength, read the official post.

Provisioned Concurrency for Lambda functions

To provide customers with improved control over the performance of their mission-critical serverless applications, AWS introduced Provisioned Concurrency, a Lambda feature that works with any trigger; for example, you can use it with WebSockets APIs, GraphQL resolvers, or IoT Rules. This feature gives you more control when building serverless applications that require low latency, such as web and mobile apps, games, or any service that is part of a complex transaction. It keeps functions initialized and hyper-ready to respond in double-digit milliseconds, which is helpful for implementing interactive services such as web and mobile backends, latency-sensitive microservices, or synchronous APIs. On enabling Provisioned Concurrency for a function, the Lambda service initializes the requested number of execution environments so they are ready to respond to invocations (a short configuration sketch appears at the end of this roundup). To know more about Provisioned Concurrency in detail, read the official document.

Amazon Managed Cassandra Service open preview launched

Amazon Managed Apache Cassandra Service (MCS) is a scalable, highly available, managed Apache Cassandra-compatible database service. Since Amazon MCS is serverless, you pay for only the resources you use, and the service automatically scales tables up and down in response to application traffic. You can build applications that serve thousands of requests per second with virtually unlimited throughput and storage. With Amazon MCS, it becomes easy to run Cassandra workloads on AWS using the same Cassandra application code and developer tools that you use today: Amazon MCS implements the Apache Cassandra version 3.11 CQL API, allowing you to use the code and drivers that you already have in your applications.
Updating your application is as easy as changing the endpoint to the one in the Amazon MCS service table. To know more about Amazon MCS in detail, read the official AWS blog post.

Introducing Amazon SageMaker Autopilot to auto-create high-quality machine learning models with full control and visibility

The AWS team launched Amazon SageMaker Autopilot to automatically create classification and regression machine learning models with full control and visibility. SageMaker Autopilot first checks the dataset and then runs a number of candidates to figure out the optimal combination of data preprocessing steps, machine learning algorithms, and hyperparameters, all with a single API call or a few clicks in Amazon SageMaker Studio. It then uses this combination to train an inference pipeline, which can be easily deployed either on a real-time endpoint or for batch processing, all on fully managed infrastructure. SageMaker Autopilot also generates Python code showing exactly how data was preprocessed: not only can you understand what SageMaker Autopilot does, you can also reuse that code for further manual tuning if you're so inclined.

SageMaker Autopilot supports: input data in tabular format, with automatic data cleaning and preprocessing; automatic algorithm selection for linear regression, binary classification, and multi-class classification; automatic hyperparameter optimization; distributed training; and automatic instance and cluster size selection. To know more about Amazon SageMaker Autopilot, read the official document.

Announcing ML-powered Amazon Kendra

Amazon Kendra is a highly accurate, ML-powered enterprise search service. It provides powerful natural language search capabilities for your websites and applications, so that end users can easily find the required information within the vast amount of content spread across the organization. Key benefits of Kendra include:

- Users can get immediate answers to questions asked in natural language, eliminating sifting through long lists of links and hoping one has the information you need.
- Kendra lets you easily add content from file systems, SharePoint, intranet sites, file-sharing services, and more, into a centralized location, so you can quickly search all of your information to find the best answer.
- Search results get better over time, as Kendra's machine learning algorithms learn which results users find most valuable.

To know more about Amazon Kendra in detail, read the official document.

Introducing the preview of Amazon CodeGuru

Amazon CodeGuru is a machine learning service for automated code reviews and application performance recommendations. It helps developers find the most expensive lines of code, the ones that affect application performance and cause difficulty while troubleshooting. CodeGuru is powered by machine learning, best practices, and hard-learned lessons across millions of code reviews and thousands of applications profiled on open source projects and internally at Amazon. It helps developers find and fix code issues such as resource leaks, potential concurrency race conditions, and wasted CPU cycles. To know more about Amazon CodeGuru in detail, read the official blog post.

A few other highlights of Day Two at AWS re:Invent 2019 include the general availability of Amazon EKS on AWS Fargate, AWS Fargate Spot, and ECS Cluster Auto Scaling, and the availability on Amazon SageMaker of the Deep Graph Library, an open source library built for easy implementation of graph neural networks.
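As promised above, here is a minimal boto3 sketch of enabling Provisioned Concurrency on a Lambda function. The function name and alias are hypothetical placeholders, and the snippet assumes AWS credentials are already configured:

```python
import boto3

lambda_client = boto3.client("lambda")

# Keep 50 execution environments initialized for the "live" alias.
# Provisioned Concurrency applies to a published version or alias,
# not to $LATEST.
lambda_client.put_provisioned_concurrency_config(
    FunctionName="checkout-api",          # hypothetical function name
    Qualifier="live",                     # hypothetical alias
    ProvisionedConcurrentExecutions=50,
)

# The configuration is asynchronous: poll the status, which moves
# from IN_PROGRESS to READY once the environments are initialized.
response = lambda_client.get_provisioned_concurrency_config(
    FunctionName="checkout-api",
    Qualifier="live",
)
print(response["Status"])
```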
AWS re:Invent will continue throughout this week, until the 6th of December, and you can access the livestream. Keep checking this space for further updates and releases.

Amazon re:Invent 2019 Day One: AWS launches Braket, its new quantum service, and releases SageMaker Operators for Kubernetes
Amazon's hardware event 2019 highlights: a high-end Echo Studio, the new Echo Show 8, and more
10 key announcements from Microsoft Ignite 2019 you should know about

How PyTorch is bridging the gap between research and production at Facebook: PyTorch team at F8 conference

Vincy Davis
04 Dec 2019
7 min read
PyTorch, the machine learning library originally developed as a research framework by a Facebook intern in 2017, has now grown into a popular deep learning workflow. One of the most loved products at Facebook, PyTorch is free, open source, and used for applications like computer vision and natural language processing (NLP). At the F8 conference held this year, the PyTorch team, consisting of Joe Spisak, the product manager for PyTorch at Facebook AI, and Dmytro Dzhulgakov, the tech lead at Facebook AI, gave a talk on how Facebook is developing and scaling AI experiences with PyTorch.

Spisak describes PyTorch as an eager, graph-based execution that is defined by run: when a user executes the Python code, it generates a graph on the fly. It is dynamic in nature, while also allowing compilation to a static graph. The dynamic neural networks are accessible, letting the user change parameters very quickly, a feature that comes in handy for applications like control flow in NLP. Another important feature of PyTorch, according to Spisak, is the ability to accurately train distributed models that possess close to a billion parameters, including the cutting-edge ones. It also has a simple and intuitive API, one of the qualities that has endeared it to many developers, claims Spisak.

Become a pro at deep learning with PyTorch! If you want to become an expert in building and training neural network models with high speed and flexibility in text, vision, and advanced analytics using PyTorch 1.x, read our book Deep Learning with PyTorch 1.x - Second Edition, written by Sri. Yogesh K., Laura Mitchell, et al. It will give you an insight into solving real-world problems using CNNs, RNNs, and LSTMs, along with discovering state-of-the-art modern deep learning architectures such as ResNet, DenseNet, and Inception.

How PyTorch is bridging the gap between research and production at Facebook

Dzhulgakov points out that general advances in AI are driven by innovative research in academia and industry, and explains why it is necessary to bridge the big lag between research and production. He says, "If you have a new idea and you want to take it all the way through to deployment, you usually need to go through multiple steps - figure out what the approach is and then find the training data, maybe prepare and massage it a little bit. Actually build and train your model, and after that there is this painful step of transferring your model to a production environment, which often historically involved reimplementation of a lot of code so you can actually take and deploy it and scale up."

According to Dzhulgakov, PyTorch tries to minimize this gap by encouraging advances and experimentation in the field, so that research can be brought into production in a few days instead of months.

Challenges in bringing research to production

The PyTorch team lists the following classes of challenges associated with bringing research to production.

Hardware efficiency: In a tight latency-constrained environment, users are required to fit all the hardware into the performance budget; on the other hand, an underused hardware environment leads to increased cost.

Scalability: In Facebook's recent work, Dzhulgakov says, they have trained on billions of public images, showing significant accuracy gains compared to regular datasets like ImageNet.
Similarly, when models are taken to inference, billions of inferences per second are run, with multiple diverse models sharing the same hardware.

Cross-platform: Neural networks are mostly not isolated, as they need to be deployed inside their target application. They have a lot of interdependence with the surrounding code and application, which imposes different constraints, such as not being able to run Python code, or having to work with very constrained compute capabilities when running on a mobile device.

Reliability: A lot of PyTorch jobs run for multiple weeks on hundreds of GPUs, so it is important to design reliable software that can tolerate hardware failures and still deliver results.

How PyTorch is tackling these challenges

To tackle the challenges listed above, Dzhulgakov says, Facebook develops systems that can take up a training job and perform optimizations focused on the performance-critical pieces. The system also applies "recipes for reliability" so that developer-written modeling code is automatically transformed. The Jit package comes into the picture here, acting as a key component built to capture the structure of a Python program with minimal changes; its main goal is to make this process almost seamless. He asserts that PyTorch has been successful because it feels like regular programming in Python, and most of its users start developing in traditional PyTorch mode (eager mode), just writing and prototyping in the program. "For the subset of promising models which you need to bring to production or scale up, you can apply techniques provided by Jit to existing model code and annotate it in order to run as so-called script code."

The Jit is a subset of Python with a stricter, restricted set of semantics, which allows transparent transformations to be applied to the user's eager-mode code. The annotations consist of adding a few lines of Python code on top of a function, and can be applied incrementally, function by function or module by module, ensuring that the model keeps working along the way. Such tools permit the user to share the same codebase between research and production environments.

Dzhulgakov concludes that the common factor between research and production is that both teams of developers work on the same codebase built on top of PyTorch. Teams that share a common domain, such as text classification, object detection, or reinforcement learning, share their code: developers prototype models, train new algorithms, and address new tasks, quickly transitioning this functionality to the opposite environment.

Watch the full talk to see Dzhulgakov's examples of PyTorch bridging the gap between research and production at Facebook. If you want to become an expert at implementing deep learning applications in PyTorch, check out our latest book Deep Learning with PyTorch 1.x - Second Edition, written by Sri. Yogesh K., Laura Mitchell, et al. The book will show you how to apply neural networks to domains such as computer vision and NLP, and will guide you to build, train, and scale a model with PyTorch, covering complex neural networks such as GANs and autoencoders for producing text and images.
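As a small illustration of the incremental annotation workflow described above, here is a sketch (assuming PyTorch 1.x; the model is a toy placeholder) of converting an eager-mode module to TorchScript with torch.jit.script and saving it for a Python-free runtime:

```python
import torch

class Classifier(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(16, 2)

    def forward(self, x):
        # Data-dependent control flow like this is preserved by the
        # TorchScript compiler, unlike a pure graph trace.
        if x.sum() > 0:
            x = x * 2
        return self.fc(x)

# Eager mode: prototype and train as usual.
model = Classifier()

# Annotate/convert for production; the same codebase serves both.
scripted = torch.jit.script(model)
scripted.save("classifier.pt")              # deployable artifact
restored = torch.jit.load("classifier.pt")  # no Python model code needed
print(restored(torch.randn(3, 16)).shape)   # torch.Size([3, 2])
```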
NVIDIA releases Kaolin, a PyTorch library to accelerate research in 3D computer vision and AI
Introducing ESPRESSO, an open-source, PyTorch-based, end-to-end neural automatic speech recognition (ASR) toolkit for distributed training across GPUs
Facebook releases PyTorch 1.3 with named tensors, PyTorch Mobile, 8-bit model quantization, and more
Transformers 2.0: NLP library with deep interoperability between TensorFlow 2.0 and PyTorch, and 32+ pretrained models in 100+ languages
PyTorch announces the availability of PyTorch Hub for improving machine learning research reproducibility


SOLIDWORKS specialist Tayseer Almattar takes us into the world of 3D modeling using SOLIDWORKS 2020 [Interview]

Sugandha Lahoti
04 Dec 2019
10 min read
A large number of design-based organizations now use SOLIDWORKS to design and model their products or systems prior to manufacturing. Built on the Parasolid kernel, SOLIDWORKS uses a parametric, feature-based approach to create models and assemblies. Its major environments include the part modeling environment, the assembly environment, and the drawing environment.

To learn more about SOLIDWORKS, we interviewed designer and SOLIDWORKS specialist Tayseer Almattar, who talked about his experience in building parametric models and assemblies in SOLIDWORKS. He also pointed out the important features of the latest SOLIDWORKS release and how it compares to other CAD software on the market. Tayseer also shared his experiences working on the book Learn SOLIDWORKS 2020, where he teaches the concepts of design and the usage of SOLIDWORKS tools and commands. Being a climate change enthusiast, Tayseer aspires to help build a world in which civilizations and natural environments can co-thrive using the power of design, and he has also shared his thoughts on the need to deeply integrate environmental sustainability with product design.

On SOLIDWORKS 2020 and how it compares to other competing CAD software

SOLIDWORKS is touted as one of the most popular CAD software packages. What are some of the best features of the SOLIDWORKS 2020 release?

Before answering this question, we have to keep in mind that SOLIDWORKS is a very mature piece of software that has existed since 1995, so the adjustments are often not very significant from the perspective of most users. The last major shift was in 2016, when the coloring of the interface was changed. Looking more specifically at SOLIDWORKS 2020, some of the major enhancements include applying the Torsion Continuity relation in sketching, extending hole callouts to section views within drawings, and major improvements in making flexible components within assemblies. SOLIDWORKS Corp. did publish a 200-page document highlighting all the updates in the 2020 version, which I cannot explain in detail in this interview, but anyone can access the document here.

How does SOLIDWORKS compare to other competing CAD software like Fusion 360, Autodesk Revit, and Autodesk Inventor? Are these direct rivals, or are there varying use cases for these tools? If so, how does one choose the right tool for a 3D modeling project?

It is very difficult to give an absolute recommendation for which software one should choose. For many users, the choice often comes down to what different organizations use in a specific location or what software is being used at a certain university. This is not to say that they don't have any differences between them; let's take a closer look at some of those differences.

Fusion 360 is a relatively new, cloud-based software that many find more intuitive to use, especially when modeling more mesh-based models like a car surface. However, for simulation and larger parts and assemblies, it falls behind other software like SOLIDWORKS and Inventor. It is also cheaper: the paid version of Fusion 360 costs $495/year (or $60/month if you only commit to the monthly version). SOLIDWORKS and Autodesk Inventor can be looked at as direct rivals, and for most people the choice between them comes down to an organization's requirements. Looking at them independently, SOLIDWORKS has the advantage of a larger community and a wider variety of available learning resources.
All the software mentioned was made with mechanical design in mind. This makes it a good fit for industries such as aerospace, automotive, construction, and consumer products. Other 3D modeling software can have a different purpose. For example, Autodesk Revit was made with architects in mind, with better features geared towards designing buildings and making blueprints. To conclude, if a user is looking for 3D modeling software with mechanical design capabilities for industries like automotive and consumer products, SOLIDWORKS is one of the best options available due to its ease of use and the availability of large and varied learning resources.

Tayseer's inspiration for his book and his journey in 3D modeling

What was the inspiration behind your book, Learn SOLIDWORKS 2020? How does your book prepare its readers to learn 3D modeling? What are the key takeaways for readers?

Aside from using SOLIDWORKS myself, I have also been very active in SOLIDWORKS training since 2015, with more than 15,000 students in my bestselling online training offerings. This pushed the idea of producing a new mode of SOLIDWORKS training, which brought about this book. In making the book, I put in all that I learned from using and teaching SOLIDWORKS, along with my experience in curriculum design.

In the book, I have adopted a learning-by-doing approach, which I believe works best for learning SOLIDWORKS. Every tool we cover in the book is explained through a focused application: we explain what the tool is, then directly use it while explaining how it works. In addition, we have included exercises at the end of each chapter to reinforce the skills learned in that chapter. Also, where applicable, we have provided SOLIDWORKS models for the reader to download to better follow along with the exercises. In terms of coverage, the book covers the majority of tools and skills expected from a Certified SOLIDWORKS Associate (CSWA) and a Certified SOLIDWORKS Professional (CSWP). This means that the book can also function as preparation material for those two certifications, which are the most common ones for SOLIDWORKS users.

You have a solid track record of working with SOLIDWORKS. Tell us about your journey into the world of 3D modeling. How long have you been using SOLIDWORKS? How do you typically use it on a day-to-day basis?

I started using SOLIDWORKS in 2008 and have continued using it since then. The software itself is very versatile and can be used for a vast variety of applications. My personal experience has mostly been around innovation, as it supplements my various entrepreneurial activities. Thus, I mostly use the software in the early stages of product design, including modeling for early prototyping and testing. These include everyday consumer devices like phones, special phone cases, medical devices, and furniture. Also, to a lesser extent, I have used the software to model industrial machinery as well as parts for research simulations.

Integrating environmental sustainability with product design

You are also a climate change activist and a believer in clean software. Tell us more about your SustainabilityEdu initiative and your broader involvement in the climate change movement. Why do you think there is a need to deeply integrate environmental sustainability with product design? What is the role of tech and tech workers in addressing climate change?

Let us start with the big picture. Currently, the sustainability challenge is becoming known to the vast majority of people. 
A big part of this relates to environmental sustainability: we are coming to realize that our natural resources are limited. This has pushed many governments to issue laws and regulations to help curb the environmental damage we are causing, whether by supporting public recycling or by setting pollution limits for establishments. However, most of these solutions are temporary and not fully sufficient, so we should be contributing in our own ways. Given my strength in education, I launched the SustainabilityEdu initiative, which aims to educate the public about environmental sustainability issues and, hopefully, to further extend that education to specifically target designers and engineers.

When a product reaches its end-of-life stage, for example a smartphone, what options does the user have to discard the product in a completely environmentally friendly manner? As of now, there is no way to do that. This is because many device parts are not up-cyclable or require significant effort to dismantle. Thus, the product ends up as toxic e-waste. Not only this, during a smartphone's production considerable waste is generated in the extraction of raw materials, manufacturing, transportation, etc., which the consumer does not know about. For a typical smartphone, the majority of carbon dioxide emissions occur during the manufacturing stages. To break this cycle, designers and engineers have to innovate on foundational levels at the product's birth stages, keeping in mind how to deal with the same product at its end-of-life stage. This makes it essential to integrate environmental sustainability with product design and with the work of tech workers in general.

What is your advice to new people working with SOLIDWORKS, and what are some trends to look out for in the coming years?

For beginners, I would say that the best way to learn SOLIDWORKS is by using it and having fun with it. They can model random objects around them or think up a new object and model it. Also, when doing that, you should expect to face difficulties; rather than giving up, push yourself to navigate different learning resources and figure out how to accomplish what you want. Structured learning resources like a book or a course can save a lot of time in the learning process; however, you should also explore and experiment on your own. Another important note to keep in mind is that there is not just one way to build a specific 3D model. Do not panic if you see yourself building a model in a sequence that is different from other people's. Rather, question yourself and evaluate what benefit one approach has over the other. Over time, you will have a list of your own best practices suitable for a large range of applications. Most importantly, just like learning any skill, you should give yourself time.

As for trends to look out for, those would, more or less, be my own wishlist, without knowing for sure what will pick up. First, I want to see more notable development in design communication, especially in alignment with the notable developments taking place in immersive experiences like VR and AR. Another aspect I am expecting more of relates to cloud computing and faster communication. I think those will lower the hardware bar for individuals to utilize the features of hardware-demanding software like SOLIDWORKS. This, coupled with faster communication (e.g. 5G networks), can also improve the ways in which designers collaborate in real time. 
If you want to gain a deeper understanding of SOLIDWORKS and the most commonly used commands for part modeling, assembly, and drawings, read Tayseer's book Learn SOLIDWORKS 2020.

Author Bio

Tayseer holds a Bachelor of Science (B.S.) degree in Mechanical Engineering and a Master of Design (MDes) degree in International Design and Business Management. He has many years of experience in corporate training, instructional design, and training quality assurance. He has also been an avid user of SOLIDWORKS for over 10 years and has published multiple online SOLIDWORKS training courses with about 15,000 enrolled students from over 100 countries. Tayseer aspires to help build a world in which modern civilizations and the natural environment can co-thrive utilizing the power of design. You can find him on LinkedIn.

Facebook Reality Labs launch SUMO Challenge to improve 3D scene understanding and modeling algorithms
Japanese Anime studio Khara is switching its primary 3D CG tools to Blender
Unity Learn Premium, a learning platform for professionals to master real-time 3D development
Read more
  • 0
  • 0
  • 3410

article-image-what-is-unitys-new-data-oriented-technology-stack-dots
Guest Contributor
04 Dec 2019
7 min read
Save for later

What is Unity’s new Data-Oriented Technology Stack (DOTS)

Guest Contributor
04 Dec 2019
7 min read
If we look at the evolution of computing and gaming over the last decade, we can see how different things are compared to ten years ago. One of the most significant changes was moving from a world where 90% of the code ran on a single thread on a single core, to a world where we all carry hundreds of GPU cores in our pockets and must design efficient code that can run in parallel. Given this change, we can imagine why Unity feels the urge to adapt to this new paradigm. Unity's original design was born in a different era, and now it is time for it to adjust to the future. The Data-Oriented Technology Stack (DOTS) is the collective name for Unity's attempt at reshaping its internal architecture in a way that is faster, lighter, and, more importantly, optimized for the current massively multi-threaded world. In this article, we will take a look at the three main components of DOTS and how they can help you develop next-generation games.

Want to learn more optimization techniques in Unity? The Unity engine comes with a great set of features to help you build high-performance games. If you want to know the techniques for writing better game scripts and learn how to optimize a game using Unity technologies such as ECS and the Burst compiler, read the book Unity Game Optimization - Third Edition written by Chris Dickinson and Dr. Davide Aversa. This book will help you get up to speed with a series of performance-enhancing coding techniques and methods that will help you improve the performance of your Unity applications.

The Data-Oriented Technology Stack

Three components compose the Data-Oriented Technology Stack:

The Entity Component System (ECS)
The C# Job System
The Burst compiler

Let's look at each one of them.

The Entity Component System (ECS)

If you know Unity, you know that two basic structures represent every part of a game: the GameObject and the MonoBehavior. Every GameObject contains one or more MonoBehaviors, which in turn describe the data (what the object knows) and the behavior (what the object does) of each element in a scene. GameObject and MonoBehavior worked well during Unity's initial years; however, with the rise of multithreaded programming, many issues with the GameObject architecture have become more evident.

First of all, a GameObject is a fat, heavy data structure. In theory, it should only be a container of MonoBehavior instances. In practice, it has a significant number of problems. To name a few:

Every GameObject has a name and an ID.
Every GameObject has a C# wrapping object pointing to the native C++ code.
Creating and deleting a GameObject requires locking and editing a global list (that is, these operations cannot run in parallel).

Moreover, both GameObject and MonoBehavior are dynamic objects, and they are stored all over memory. It would be much better if we could keep all the MonoBehaviors of a GameObject close to each other, so that finding and running them would be more efficient.

To solve these issues, Unity introduced the Entity Component System (ECS), a new paradigm alternative to the traditional GameObject/MonoBehavior one. As the name suggests, there are three elements in ECS:

Components: They are conceptually similar to a MonoBehavior, but they contain only data. For instance, a Position component will contain only a 3D vector representing the entity's position in space; a LinearVelocity component would contain only the velocity of the object, and so on. They are just plain data. 
Entities: They are just a "collection" of components. For example, if I have a particle in space, I can represent it with just a list of components, e.g., the Position and LinearVelocity components.

System: A system is where the behavior lives. Each system takes a list of components and executes a function over all the entities whose components match the system's archetype.

[box type="shadow" align="" class="" width=""]To be technically correct, an entity is not a collection data structure. Instead, it is a pointer to a location in memory where the entity's components are stored. The actual storage, though, is handled by Unity.[/box]

With this system, we can store components in contiguous arrays, and an entity is just a pointer to the archetype instance. A single function for each system can define the behavior of thousands of similar entities. This is more efficient than running an Update on every MonoBehavior in every GameObject. For this reason, with ECS we can use huge numbers of entities, without the slowdown or system overhead that made this impossible with GameObject instances: for instance, having an entity for each particle of a particle system. A small conceptual sketch of the pattern follows below. For more technical info on ECS there is a very detailed blog post on Unity's official website.
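To make the pattern concrete, here is a minimal conceptual sketch written in TypeScript for brevity. This is emphatically not Unity's C# API; it only illustrates the data-oriented idea: components are plain data stored in parallel arrays, an entity is just an index into those arrays, and a system is a single function run over every entity.

// NOT Unity's API, just the underlying idea. In a real engine the
// component storage would be packed numeric buffers, not JS objects.
interface Position { x: number; y: number; z: number; }
interface LinearVelocity { x: number; y: number; z: number; }

// Component storage: parallel arrays, one slot per entity.
const positions: Position[] = [];
const velocities: LinearVelocity[] = [];

// Creating an entity just appends to the component arrays and
// returns its index.
function createEntity(p: Position, v: LinearVelocity): number {
  positions.push(p);
  velocities.push(v);
  return positions.length - 1;
}

// A "movement system": one function that updates every entity that
// has both a Position and a LinearVelocity component.
function movementSystem(dt: number): void {
  for (let e = 0; e < positions.length; e++) {
    positions[e].x += velocities[e].x * dt;
    positions[e].y += velocities[e].y * dt;
    positions[e].z += velocities[e].z * dt;
  }
}

// Usage: thousands of "particles" updated by a single function.
for (let i = 0; i < 10_000; i++) {
  createEntity({ x: 0, y: 0, z: 0 }, { x: 1, y: 0, z: 0 });
}
movementSystem(1 / 60);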
The C# Job System

If ECS is how we describe the scene, we need a way to run the systems efficiently. As we said in the introduction, the modern approach to efficiency is to exploit every core in our system, and this means running code in parallel on massively multithreaded systems. Sadly, multi-threading is hard. Extremely hard. As any experienced developer can tell you, moving from single-threaded to multi-threaded programming introduces a large class of new issues and bugs, such as race conditions. Moreover, for true multi-threading, we would need to get as close as possible to the metal, avoiding all the dynamic allocations and deallocations of C# and the Garbage Collector, and write part of our game in C++. Luckily for us, Unity introduced a component in the Data-Oriented Technology Stack with the specific purpose of simplifying multithreaded programming in Unity using only C#: the Job System. You can imagine a Job as a piece of code that you want to run in parallel over as many cores as possible. The Unity C# Job System helps you design this code in a way that avoids all the common multi-threading pitfalls, using only C#. You can finally unleash all the power of your machine without writing a single line of C++ code.

The Burst Compiler

What if I told you that it is possible to obtain higher performance by writing C# code instead of C++? You would think I am crazy. However, I am not, and this is the goal of the last component of the Data-Oriented Technology Stack (DOTS): the Burst compiler. The Burst compiler is a specialized code-generator that compiles a subset of C# (often called High-Performance C#, or HPC#) into machine code that is, most of the time, smaller and faster than the code generated from equivalent C++. The Burst compiler is still in preview, but you can already try it by using Unity's Package Manager. Of course, you get the most from it when it is combined with the other two DOTS components. For more technical info on the Burst compiler, you can refer to Unity's blog post.

Learn More About Unity Optimization

In this article, we only scratched the surface of the Data-Oriented Technology Stack (DOTS). If you want to learn more about how to use the DOTS technologies and other optimization techniques for Unity, you can read more in my book Unity Game Optimization - Third Edition. This Unity book is your guide to optimizing various aspects of your game development, from game characters and scripts right through to animations. You will also explore techniques for solving performance issues with your VR projects and learn best practices for project organization to save time through an improved workflow.

Author Bio

Dr. Davide Aversa holds a PhD in artificial intelligence and an MSc in artificial intelligence and robotics from the University of Rome La Sapienza in Italy. He has a strong interest in artificial intelligence for the development of interactive virtual agents and procedural content generation. He has served as a Program Committee member for video game-related conferences such as the IEEE Conference on Computational Intelligence and Games, and he regularly participates in game-jam contests. He also writes a blog on game design and game development. You can find him on Twitter, GitHub, and LinkedIn.

Unity 2019.2 releases with updated ProBuilder, Shader Graph, 2D Animation, Burst Compiler and more
Japanese Anime studio Khara is switching its primary 3D CG tools to Blender
Following Epic Games, Ubisoft joins Blender Development fund; adopts Blender as its main DCC tool
Read more
  • 0
  • 0
  • 14469

article-image-amazon-reinvent-2019-day-one-aws-launches-braket-its-new-quantum-service-and-releases-sagemaker-operators-for-kubernetes
Sugandha Lahoti
03 Dec 2019
6 min read
Save for later

Amazon re:Invent 2019 Day One: AWS launches Braket, its new quantum service and releases SageMaker Operators for Kubernetes

Sugandha Lahoti
03 Dec 2019
6 min read
On day one of the ongoing Amazon re:Invent 2019, there was a flurry of announcements for AWS. Most importantly, AWS announced the preview launch of Braket, its own quantum computing service, following the likes of IBM, Microsoft, and Google. Amazon also released Amazon SageMaker Operators for Kubernetes to help data scientists using Kubernetes to train, tune, and deploy machine learning models in Amazon SageMaker. re:Invent is Amazon's flagship conference, hosted by Amazon Web Services for the global cloud computing community. This year, re:Invent is taking place in Las Vegas, December 2-6, 2019.

re:Invent 2019 Day One announcements

Braket: AWS' new quantum service, in preview now

Amazon Braket (named after the common notation for quantum states) is a fully managed service that helps you get started with quantum computing. Braket consists of a full development environment that helps data scientists to:

design quantum algorithms from scratch or choose from a set of pre-built algorithms
test these algorithms on simulated quantum computers
run them on your choice of different quantum hardware technologies, including gate-based and quantum annealing superconductors and ion trap hardware (from providers such as D-Wave, IonQ, and Rigetti)

Once your tests are complete, you will be automatically notified and your results will be stored in Amazon S3. Amazon Braket publishes event logs and performance metrics, such as completion status and execution time, to Amazon CloudWatch. To make it easier to develop hybrid algorithms that combine classical and quantum tasks, Amazon Braket helps manage classical compute resources and establish low-latency connections to the quantum hardware. At re:Invent 2019, AWS also launched the Amazon Quantum Solutions Lab, a collaborative research program that connects you with quantum computing experts from Amazon and its technology and consulting partners. They can help you identify potential uses of quantum computing, build internal expertise, and collaborate on programs to design and test quantum algorithms. Braket is available in preview now.

Amazon SageMaker Operators for Kubernetes

Developers and data scientists can now use Kubernetes to train, tune, and deploy machine learning models in Amazon SageMaker, with the new Amazon SageMaker Operators for Kubernetes. Customers can install these Amazon SageMaker Operators on their Kubernetes cluster to create Amazon SageMaker jobs natively, using the Kubernetes API and command-line Kubernetes tools such as 'kubectl'. The operators can be used to train machine learning models, optimize hyperparameters for a given model, run batch transform jobs over existing models, and set up inference endpoints. With these operators, users can manage their jobs in Amazon SageMaker from their Kubernetes cluster in Amazon Elastic Kubernetes Service (EKS). Amazon SageMaker Operators for Kubernetes are available in select AWS regions.

AWS DeepComposer, a creative way to learn machine learning

Amazon has launched AWS DeepComposer, the world's first machine learning-enabled musical keyboard, at re:Invent 2019. AWS DeepComposer is an educational tool that gives developers of all skill levels a creative way to experience machine learning through music.

https://youtu.be/XH2EbK9dQlg

You can input a melody by connecting the AWS DeepComposer keyboard to your computer, or play the virtual keyboard in the AWS DeepComposer console. You can generate an original music composition using the pre-trained genre models in the console. 
You can then publish your tracks to SoundCloud. AWS DeepComposer is designed specifically to educate developers by means of tutorials, sample code, and training data, which can be used to get started with building generative AI models, all without having to write a single line of code. With AWS DeepComposer, you can train and optimize GAN models to create original music. GAN models pit two different neural networks against each other to produce new and original digital works based on sample inputs. AWS DeepComposer is available in preview now.

Amazon Transcribe now extended to healthcare patients

Amazon's automatic speech recognition service, Amazon Transcribe, is now available for medical speech, as announced at re:Invent 2019. Amazon Transcribe Medical allows physicians to easily and quickly dictate their clinical notes and see their speech converted to accurate text in real time, without any human intervention. Clinicians can use natural speech and do not have to explicitly call out punctuation like "comma" or "full stop". This text can then be automatically fed to downstream applications such as EHR systems, or to AWS language services such as Amazon Comprehend Medical for entity extraction. To make it work, you capture audio using your device's microphone and send PCM (pulse-code modulation) audio to a streaming API based on the popular WebSocket protocol. This API responds with a series of JSON blobs containing the transcribed text, as well as word-level timestamps, punctuation, etc. Optionally, you can save this data to an Amazon Simple Storage Service (S3) bucket. Amazon Transcribe Medical is available in the US East (N. Virginia) and US West (Oregon) regions.
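As a rough illustration of that streaming flow, here is a hedged TypeScript sketch. The endpoint URL, message framing, and response field names below are placeholders, not the real API: the actual service expects a presigned (Signature V4) URL and wraps audio and results in AWS event-stream binary framing, both omitted here for brevity.

// Simplified sketch of the WebSocket streaming pattern described above.
// Everything marked "hypothetical" is an assumption, not the real API.
const socket = new WebSocket("wss://transcribe.example.com/medical-stream"); // hypothetical URL
socket.binaryType = "arraybuffer";

socket.onopen = () => {
  // Stream PCM audio chunks captured from the microphone. A real client
  // would wrap each chunk in an event-stream message.
  for (const chunk of capturePcmChunks()) {
    socket.send(chunk);
  }
};

socket.onmessage = (event: MessageEvent) => {
  // The service replies with JSON blobs carrying the transcribed text
  // plus word-level timestamps and punctuation.
  const result = JSON.parse(event.data as string); // field names are hypothetical
  console.log(result.transcript);
};

// Stub: a real implementation would capture 16-bit PCM via the Web Audio API.
function capturePcmChunks(): ArrayBuffer[] {
  return [];
}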
Updates to Microsoft Windows Server

AWS has released a bring-your-own-license (BYOL) experience as an easier way for customers to bring, and manage, their existing licenses for Microsoft Windows Server and SQL Server on AWS. The new BYOL experience enables customers who want to use their existing Windows Server or SQL Server licenses to seamlessly create virtual machines in EC2, while AWS takes care of managing the licenses to help ensure compliance with the licensing rules specified by the customer. Amazon is also providing the End-of-Support Migration Program (EMP) for Windows Server. On January 14, 2020, support for Windows Server 2008 and 2008 R2 will end. Having an application that can run only on an unsupported version of Windows Server is problematic, as you will no longer get free security patch updates, leaving you vulnerable to security and compliance risks. This new program combines technology with expert guidance to migrate your legacy applications running on outdated versions of Windows Server to newer, supported versions on AWS.

Other updates announced at Amazon re:Invent 2019

Amazon EventBridge Schema Registry is now in preview. The schema registry stores the structure (schema) of Amazon EventBridge events and maps them to Java, Python, and TypeScript bindings so that you can use the events as typed objects.
The existing AWS IoT SiteWise preview adds new features such as creating a virtual representation of your facility, monitoring production performance metrics, and using AWS IoT SiteWise Monitor to visualize the data in real time. AWS IoT SiteWise Monitor is a new SaaS application that lets you monitor and interact with the data collected and organized by AWS IoT SiteWise.
The upcoming AWS DeepRacer Evo car will include a stereo camera and a Light Detection and Ranging (LIDAR) sensor. The DeepRacer League in 2020 will have 8 additional races in 5 countries.
EC2 Image Builder, a service that makes it easier and faster to build and maintain secure OS images for Windows Server and Amazon Linux 2 using automated build pipelines, is now in preview.

Amazon re:Invent will continue throughout this week (the last day is December 6). You can access the livestream here. Keep checking this space for news on other updates and launches.

Amazon EKS Windows Container Support is now generally available
Amazon's hardware event 2019 highlights: a high-end Echo Studio, the new Echo Show 8, and more
10 key announcements from Microsoft Ignite 2019 you should know about
Read more
  • 0
  • 0
  • 3061
article-image-tensorflow-js-contributor-kai-sasaki-on-how-tensorflow-js-eases-web-based-machine-learning-application-development
Sugandha Lahoti
28 Nov 2019
6 min read
Save for later

TensorFlow.js contributor Kai Sasaki on how TensorFlow.js eases web-based machine learning application development

Sugandha Lahoti
28 Nov 2019
6 min read
Running machine learning applications in the web browser is one of the hottest trends in software development right now. Many notable machine learning projects are being built with TensorFlow.js, one of the most popular frameworks for building performant machine learning applications that run smoothly in a web browser. Recently, we spoke with Kai Sasaki, one of the initial contributors to TensorFlow.js. He talked about current and future versions of TF.js, how it compares to other browser-based ML tools, and his contributions to the community. He also shared his views on why he thinks JavaScript is good for machine learning.

If you are a web developer with working knowledge of JavaScript who wants to learn how to integrate machine learning techniques with web-based applications, we recommend you read the book Hands-on Machine Learning with TensorFlow.js. This hands-on course covers important aspects of machine learning with TensorFlow.js using practical examples. Throughout the course, you'll learn how different algorithms work and follow step-by-step instructions to implement them through various examples.

On how TensorFlow.js has improved web-based machine learning

How do you think machine learning for the web has evolved in the last 2-3 years? What are some current applications of web-based machine learning and TensorFlow.js? What can we expect in future releases?

Machine learning on the web platform is a field attracting more and more developers and machine learning practitioners. There are two reasons. First, the web platform is universally available; the web browser mostly provides us a way to access the underlying resources transparently. The second reason is security. Training a model on the client side means you can keep sensitive data inside the client environment, as the entire training process is completed on the client side itself. The data is not sent to the cloud, making it more secure and less susceptible to vulnerabilities or hacking. In future releases as well, TensorFlow.js is expected to provide more secure and accessible functionality. You can find various kinds of TensorFlow.js based applications here.

How does TensorFlow.js compare with other web and browser-based machine learning tools? Does it make web-based machine learning application development easier?

The most significant advantage of TensorFlow.js is its full compatibility with the TensorFlow ecosystem. Not only can a TensorFlow model be seamlessly used in TensorFlow.js; tools for visualization and model deployment in the TensorFlow ecosystem can also be used with TensorFlow.js.

TensorFlow 2 was released in October. What are some new changes made specific to TensorFlow.js as a part of TF 2.0 that machine learning developers will find useful? What are your first impressions of this new release?

Although there is nothing special related to TensorFlow 2.0, full support for new backends, such as WASM and WebGPU, is being actively developed. These hardware acceleration mechanisms provided by the web platform can enhance performance for any TensorFlow.js application. They surely make the potential of TensorFlow.js stronger and the possible use cases broader.
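As a rough sketch of that model-reuse workflow, loading a converted TensorFlow model and running inference in the browser with TensorFlow.js looks something like the following. The model URL and input shape here are hypothetical placeholders; a real model would come from the TensorFlow.js converter.

import * as tf from "@tensorflow/tfjs";

// Sketch: load a converted model by URL and run inference in the browser.
async function classify(): Promise<void> {
  const model = await tf.loadLayersModel("https://example.com/model.json"); // hypothetical URL
  const input = tf.tensor2d([[0.1, 0.2, 0.3, 0.4]]); // hypothetical input shape
  const output = model.predict(input) as tf.Tensor;
  output.print(); // prints the model's raw predictions
}

classify();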
On Kai's experience working on his book, Hands-on Machine Learning with TensorFlow.js

Tell us the motivation behind writing your book Hands-on Machine Learning with TensorFlow.js. What are some of your favorite chapters/projects from the book?

TensorFlow.js does not have much history, as only three years have passed since its initial publication. Due to the lack of resources for learning TensorFlow.js, I was motivated to write a book illustrating how to use TensorFlow.js practically. I think chapters 4-9 of my book Hands-On Machine Learning with TensorFlow.js give readers good material for practicing how to write ML applications with TensorFlow.js.

Why JavaScript for machine learning

Why do you think JavaScript is good for machine learning? What are some of the good machine learning packages available in JavaScript? How does it compare to other languages like Python, R, and MATLAB, especially in terms of performance?

JavaScript is the primary programming language of the web platform, so it can work as a bridge between the web and machine learning applications. We have several other libraries working similarly. For example, machinelearn.js is a general machine learning framework running in JavaScript. Although JavaScript is not a highly performant language, its universal availability on the web platform is attractive to developers, as they can build machine learning applications that are "write once, run anywhere". We can compare the performance practically by running state-of-the-art machine learning models such as MobileNet or ResNet.

On his contributions towards TF.js

You are a contributor to TensorFlow.js and were awarded by the Google Open Source Peer Bonus Program. What were your main contributions? How was your experience working on TF.js?

One of the significant contributions I made was the fast Fourier transform operations. I created the initial implementations of fft, ifft, rfft and irfft. I also added stft (short-term Fourier transform). These operators are mainly used for performing signal analysis in audio applications. I have done several bug fixes and test enhancements in TensorFlow.js too.
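For reference, these spectral ops are exposed under tf.spectral in TensorFlow.js; a minimal usage sketch (the impulse input is just an illustration):

import * as tf from "@tensorflow/tfjs";

// The FFT of a unit impulse [1, 0, 0, 0] is the all-ones spectrum.
const signal = tf.complex(
  tf.tensor1d([1, 0, 0, 0]), // real part
  tf.tensor1d([0, 0, 0, 0])  // imaginary part
);
tf.spectral.fft(signal).print(); // all four bins equal 1 + 0i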
What are the biggest challenges today in the field of machine learning and AI in web development? What do you see as some of the greatest technology disruptors in the next 5 years?

While many developers write Python in the machine learning field, not many web developers have familiarity with and knowledge of machine learning, in spite of the substantial advantage of integrating machine learning with the web platform. I believe machine learning technologies will be democratized among web developers, so that a vast amount of creativity can flourish in the next five years. By cooperating with these enthusiastic developers in the community, I believe machine learning on the client side or on edge devices will be one of the major contributions to the machine learning field.

About the author

Kai Sasaki works as a software engineer at Treasure Data, building large-scale distributed systems. He is one of the initial contributors to TensorFlow.js and contributes to developing operators for newer machine learning models. He received the Google Open Source Peer Bonus in 2018. You can find him on Twitter, LinkedIn, and GitHub.

About the book

Hands-On Machine Learning with TensorFlow.js is a comprehensive guide that will help you easily get started with machine learning algorithms and techniques using TensorFlow.js. Throughout the course, you'll learn how different algorithms work and follow step-by-step instructions to implement them through various examples. By the end of this book, you will be able to create and optimize your own web-based machine learning applications using practical examples.

Baidu adds Paddle Lite 2.0, new development kits, EasyDL Pro, and other upgrades to its PaddlePaddle platform
Introducing Spleeter, a TensorFlow based Python library that extracts voice and sound from any music track
TensorFlow 2.0 released with tighter Keras integration, eager execution enabled by default, and more!
Read more
  • 0
  • 0
  • 3940

article-image-julia-computing-research-team-runs-machine-learning-model-on-encrypted-data-without-decrypting-it
Fatema Patrawala
28 Nov 2019
5 min read
Save for later

Julia Computing research team runs machine learning model on encrypted data without decrypting it

Fatema Patrawala
28 Nov 2019
5 min read
Last week, the team at Julia Computing published research based on cutting-edge cryptographic techniques. The research involved cryptographic techniques for practically performing computation on data without ever decrypting it. For example, the user would send encrypted data (e.g. images) to the cloud API, which would run the machine learning model and then return the encrypted answer. Nowhere is the user data decrypted, and in particular the cloud provider has access neither to the original image nor to the prediction it computed. The team made this possible by building a machine learning service for handwriting recognition of encrypted images (from the MNIST dataset).

The ability to compute on encrypted data is generally referred to as "secure computation" and is a fairly large area of research, with many different cryptographic approaches and techniques for a plethora of different application scenarios. For their research, the Julia team focused on using a technique known as "homomorphic encryption".

What is homomorphic encryption

Homomorphic encryption is a form of encryption that allows computation on ciphertexts, generating an encrypted result which, when decrypted, matches the result of the operations as if they had been performed on the plaintext. This technique can be used for privacy-preserving outsourced storage and computation: it allows data to be encrypted and outsourced to commercial cloud environments for processing, all while encrypted. In highly regulated industries, such as health care, homomorphic encryption can be used to enable new services by removing privacy barriers inhibiting data sharing. In this research, the Julia Computing team used a homomorphic encryption system which involves the following operations:

pub_key, eval_key, priv_key = keygen()
encrypted = encrypt(pub_key, plaintext)
decrypted = decrypt(priv_key, encrypted)
encrypted′ = eval(eval_key, f, encrypted)

The first three are fairly straightforward and are familiar to anyone who has used asymmetric cryptography before. The last one is the important one: it evaluates some function f on an encrypted value and returns another encrypted value corresponding to the result of evaluating f on the underlying plaintext. It is this property that gives homomorphic computation its name.

Further, the Julia Computing team talks about CKKS (Cheon-Kim-Kim-Song), a homomorphic encryption scheme that allows homomorphic evaluation of the following primitive operations:

Element-wise addition of length-n vectors of complex numbers
Element-wise multiplication of length-n complex vectors
Rotation (in the circshift sense) of elements in the vector
Complex conjugation of vector elements

They also mentioned that computations using CKKS are noisy, and hence they tested performing these operations in Julia.
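To see why these few primitives suffice for linear algebra, here is a small illustrative sketch, written in TypeScript for brevity and operating on plain unencrypted arrays, of the classic "diagonal method" for matrix-vector multiplication. It uses only element-wise multiplication, element-wise addition, and rotation, exactly the operations CKKS provides; the function names are our own, and the Julia team's encrypted implementation is considerably more involved.

// Toy illustration: a matrix-vector product built only from
// element-wise multiply, element-wise add, and rotation.
function rotate(v: number[], k: number): number[] {
  // circshift: element i of the result is v[(i + k) mod n]
  return v.map((_, i) => v[(i + k) % v.length]);
}

function matVecByRotations(m: number[][], v: number[]): number[] {
  const n = v.length;
  let acc = new Array(n).fill(0);
  for (let k = 0; k < n; k++) {
    // k-th generalized diagonal of the matrix
    const diag = m.map((row, i) => row[(i + k) % n]);
    const rotated = rotate(v, k);
    // element-wise multiply-and-add: both homomorphic-friendly ops
    acc = acc.map((a, i) => a + diag[i] * rotated[i]);
  }
  return acc; // equals the ordinary matrix-vector product m * v
}

// Quick check: matVecByRotations([[1, 2], [3, 4]], [5, 6]) -> [17, 39]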
Which convolutional neural network did the Julia Computing team use

As a starting point, the Julia Computing team used the convolutional neural network example given in the Flux model zoo. They kept the training loop, prepared the data, and tweaked the ML model slightly. It is essentially the same model as the one used in the paper "Secure Outsourced Matrix Computation and Application to Neural Networks", which uses the same (CKKS) cryptographic scheme. That paper also encrypts the model, which the Julia team neglected for simplicity, and the Julia team included bias vectors after every layer (which Flux does by default). This resulted in a higher test-set accuracy for the model used by the Julia team (98.6% vs 98.1%).

An unusual feature of this model is the x.^2 activation functions. More common choices here would have been tanh or relu or something more advanced. While those functions (relu in particular) are cheap to evaluate on plaintext values, they would be quite expensive to evaluate on encrypted values; the team would have ended up evaluating a polynomial approximation had they adopted those common choices. Fortunately, x.^2 worked fine for their purpose.

How was the homomorphic operation carried out

The team performed homomorphic operations on the convolutions and the matrix multiply, assuming a batch size of 64. They precomputed the extraction of each 7x7 convolution window from the original images, which gave them 64 7x7 matrices per input image. Then they collected the same position in each window into one vector, getting a 64-element vector for each image (i.e. a total of 49 64x64 matrices), and encrypted these matrices. In this way the convolution became a scalar multiplication of the whole matrix with the appropriate mask element, and by summing all 49 elements later, the team got the result of the convolution.

The team then moved on to the matrix multiply, rotating elements in the vector to effect a re-ordering of the multiplication indices. They considered a row-major ordering of matrix elements in the vector: shifting the vector by a multiple of the row size has the effect of rotating the columns, which is a sufficient primitive for implementing matrix multiply. The team was able to put everything together, and it worked. You can take a look at the official blog post for the step-by-step implementation process with code. Further, they executed the whole encryption process in Julia, as it allows powerful abstractions: they could encapsulate the whole convolution extraction process as a custom array type.

The Julia Computing team states, "Achieving the dream of automatically executing arbitrary computations securely is a tall order for any system, but Julia's metaprogramming capabilities and friendly syntax make it well suited as a development platform."

Julia co-creator, Jeff Bezanson, on what's wrong with Julialang and how to tackle issues like modularity and extension
Julia v1.3 released with new multithreading features, and much more!
The Julia team shares its finalized release process with the community
Julia announces the preview of multi-threaded task parallelism in alpha release v1.3.0
How to make machine learning based recommendations using Julia [Tutorial]
Read more
  • 0
  • 0
  • 3704

article-image-kubecon-cloudnativecon-north-america-2019-highlights-helm-3-0-release-codeready-workspaces-2-0-and-more
Savia Lobo
26 Nov 2019
6 min read
Save for later

KubeCon + CloudNativeCon North America 2019 Highlights: Helm 3.0 release, CodeReady Workspaces 2.0, and more!

Savia Lobo
26 Nov 2019
6 min read
Update: On November 26, the OpenFaaS community released a post with a few of its highlights from KubeCon, San Diego. The post also includes highlights from OpenFaaS Cloud, Flux from Weaveworks, Okteto, Dive from Buoyant, and k3s going GA.

KubeCon + CloudNativeCon 2019, held in San Diego, North America from November 18-21, brought together over 12,000 attendees to advance their education in, and the state of, containers, Kubernetes and cloud native computing. The conference was home to many major announcements, including the release of Helm 3.0, Red Hat's CodeReady Workspaces 2.0, the general availability of Managed Istio on IBM Cloud Kubernetes Service, and many more.

Major highlights at KubeCon + CloudNativeCon 2019

General availability of Managed Istio on IBM Cloud Kubernetes Service

IBM Cloud announced that Managed Istio on its Cloud Kubernetes Service is generally available. This service provides a seamless installation of Istio, automatic updates, lifecycle management of Istio control plane components, and integration with platform logging and monitoring tools. With Managed Istio, a user's service mesh is tuned for optimal performance in IBM Cloud Kubernetes Service. Istio is a service mesh that provides its features without developers having to make any modifications to their applications. The Istio installation is tuned to perform optimally on IBM Cloud Kubernetes Service and is pre-configured to work out of the box with IBM Log Analysis with LogDNA and IBM Cloud Monitoring with Sysdig.

Red Hat announces CodeReady Workspaces 2.0

CodeReady Workspaces 2.0 helps developers build applications and services in an environment similar to production, i.e. with all apps running on Red Hat OpenShift. A few new services and tools in CodeReady Workspaces 2.0 include:

Air-gapped installs: These enable CodeReady Workspaces to be downloaded, scanned and moved into more secure environments when access to the public internet is limited or unavailable. It doesn't "call back" to public internet services.
An updated user interface: This brings an improved desktop-like experience to developers.
Support for VSCode extensions: This gives developers access to thousands of IDE extensions.
Devfile: A sharable workspace configuration that specifies everything a developer needs to work, including repositories, runtimes, build tools and IDE plugins, and is stored and versioned with the code in Git.
Production-consistent containers for developers: This clones the sources where needed and adds development tools (such as debuggers, language servers, unit test tools and build tools) as sidecar containers, so that the running application container mirrors production.

Brad Micklea, vice president of Developer Tools, Developer Programs, and Advocacy at Red Hat, said, "Red Hat is working to make developing in cloud native environments easier, offering the features developers need without requiring deep container knowledge. Red Hat CodeReady Workspaces 2 is well-suited for security-sensitive environments and those organizations that work with consultants and offshore development teams." To know more about CodeReady Workspaces 2.0, read the press release on the Red Hat official blog.

Helm 3.0 released

Built upon the success of Helm 2, the internal implementation of Helm 3 has changed considerably. The most apparent change in Helm 3.0 is the removal of Tiller. A rich set of new features has been added as a result of the community's input and requirements. 
A few features include:

Improved upgrade strategy: Helm 3 uses 3-way strategic merge patches
Secrets as the default storage driver
Go import path changes
Validating chart values with JSONSchema

Some features have been deprecated or refactored in ways that make them incompatible with Helm 2. Some new experimental features have also been introduced, including OCI support. Also, the Helm Go SDK has been refactored for general use; the goal is to share and re-use code open sourced with the broader Go community. To know more about Helm 3.0 in detail, read the official blog post.

AWS, Intuit and Weaveworks collaborate on Argo Flux

Recently, Weaveworks announced a partnership with Intuit to create Argo Flux, a major open-source project to drive GitOps application delivery for Kubernetes via an industry-wide community. Argo Flux combines the Argo CD project led by Intuit with the Flux CD project driven by Weaveworks, two well-known open source tools with strong community support. At KubeCon, AWS announced that it is integrating the GitOps tooling based on Argo Flux into Elastic Kubernetes Service and Flagger for AWS App Mesh. The collaboration resulted in a new project called GitOps Engine to simplify application deployment in Kubernetes. The GitOps Engine will be responsible for the following functionality:

Access to Git repositories
Kubernetes resource cache
Manifest generation
Resource reconciliation
Sync planning

To know more about this collaboration in detail, read the GitOps Engine page on GitHub.

Grafana Labs announces general availability of Loki 1.0

Grafana Labs, an open source analytics and monitoring solution provider, announced that Loki version 1.0 is generally available for production use. Loki is an open source logging platform that provides developers with an easy-to-use, highly efficient and cost-effective approach to log aggregation. With Loki 1.0, users can instantaneously switch between metrics and logs, preserving context and reducing MTTR. By storing compressed, unstructured logs and only indexing metadata, Loki is cost-effective and simple to operate by design. It includes a set of components that can be composed into a fully-featured logging stack. Grafana Cloud offers a high-performance, hosted Loki service that allows users to store all logs together in a single place with usage-based pricing. Read about Loki 1.0 on GitHub to know more in detail.

Rancher extends Kubernetes to the edge with the general availability of k3s

Rancher, creator of the vendor-agnostic and cloud-agnostic Kubernetes management platform, announced the general availability of k3s, a lightweight, certified Kubernetes distribution purpose-built for small-footprint workloads. Rancher partnered with ARM to build a highly optimized version of Kubernetes for the edge. k3s is packaged as a single binary of less than 40MB with a small footprint, which reduces the dependencies and steps needed to install and run Kubernetes in resource-constrained environments such as IoT and edge devices. To know more about this announcement in detail, read the official press release.

There were many additional announcements: Portworx launched PX-Autopilot, Huawei presented its latest advances on KubeEdge, Diamanti announced its Spektra hybrid cloud solution, and many more! To know more about all the keynotes and tutorials at KubeCon North America 2019, visit its GitHub page. 
Chaos engineering comes to Kubernetes thanks to Gremlin “Don’t break your users and create a community culture”, says Linus Torvalds, Creator of Linux, at KubeCon + CloudNativeCon + Open Source Summit China 2019 KubeCon + CloudNativeCon EU 2019 highlights: Microsoft’s Service Mesh Interface, Enhancements to GKE, Virtual Kubelet 1.0, and much more!
Read more
  • 0
  • 0
  • 4476
article-image-10-key-announcements-from-microsoft-ignite-2019-you-should-know-about
Sugandha Lahoti
26 Nov 2019
7 min read
Save for later

10 key announcements from Microsoft Ignite 2019 you should know about

Sugandha Lahoti
26 Nov 2019
7 min read
This year's Microsoft Ignite was jam-packed with new releases and upgrades across Microsoft's line of products and services. The company elaborated on its growing focus on addressing customers' needs to help them do business in smarter, more productive and more efficient ways. Many of the products were AI-based, and Microsoft reaffirmed its commitment to security and privacy. Microsoft Ignite 2019 took place on November 4-8, 2019 in Orlando, Florida and was attended by 26,000 IT implementers and decision-makers, developers, data professionals and people from various industries. There were a total of 175 separate announcements made! We have tried to cover the top 10 here.

Microsoft's Visual Studio IDE is now available on the web

The web-based version of Microsoft's Visual Studio IDE is now available to all developers. Called Visual Studio Online, this IDE allows developers to spin up a fully configured development environment for their repositories and use the web-based editor to work on their code. Visual Studio Online is deeply integrated with GitHub (also owned by Microsoft), although developers can also attach their own physical and virtual machines to their Visual Studio-based environments. Visual Studio Online's cloud-hosted environments, as well as extended support for Visual Studio Code and the web UI, are now available in preview. Support for Visual Studio 2019 is in private preview, which you can also sign up for through the Visual Studio Online web portal.

Project Cortex will classify all content in a single network

Project Cortex is a new service in Microsoft 365, useful for maintaining the everyday flow of work in enterprises. Project Cortex collates enterprise-generated documents and data, which are often spread across numerous repositories. It uses AI and machine learning to automatically classify all your content into topics to form a knowledge network. Cortex improves individual productivity and organizational intelligence and can be used across Microsoft 365, such as in the Office apps, Outlook, and Microsoft Teams. Project Cortex is now in private preview and will be generally available in the first half of 2020.

Single-view device management with Microsoft Endpoint Manager

Microsoft has combined its Configuration Manager with Intune, its cloud-based endpoint management system, to form what it calls Microsoft Endpoint Manager. ConfigMgr allows enterprises to manage the PCs, laptops, phones, and tablets they issue to their employees; Intune is used for cloud-based management of phones. Endpoint Manager will provide unique co-management options for organizations to provision, deploy, manage and secure endpoints and applications across their organization. Touted as the most important release of the event by Satya Nadella, this solution will give enterprises a single view of their deployments. ConfigMgr users will now also get a license to Intune to allow them to move to cloud-based management.

No-code bot builder Microsoft Power Virtual Agents is available in public preview

Built on the Azure Bot Framework, Microsoft Power Virtual Agents is a low-code and no-code bot-building solution, now available in public preview. Power Virtual Agents enables programmers with little to no developer experience to create and deploy intelligent virtual agents. The solution also includes Azure Machine Learning to help users create and improve conversational agents for personalized customer service. Power Virtual Agents will be generally available on December 1. 
Microsoft's Chromium-based version of Edge is now more privacy-focused

At Ignite, Microsoft announced the release candidate for its Chromium-based version of the Edge browser, with general availability on January 15. InPrivate search will be available for Microsoft Edge and Microsoft Bing to keep online searches and identities private, giving users more control over their data. When searching InPrivate, search history and personally identifiable data will not be saved, nor be associated back to you: users' identities and search histories remain completely private. There will also be a new security baseline for the all-new Microsoft Edge. Security baselines are pre-configured groups of security settings and default values that are recommended by the relevant security teams. The next version of Microsoft Edge will feature a new icon symbolizing the major changes in Microsoft Edge, built on the Chromium open source project. It will appear in an Easter egg hunt designed to reward the Insider community.

ML.NET 1.4 reaches general availability

ML.NET 1.4, Microsoft's open-source machine learning framework, is now generally available. The latest release adds image classification training with the ML.NET API, as well as a relational database loader API for reading the data used for training models with ML.NET. ML.NET also includes Model Builder (an easy-to-use UI tool in Visual Studio) and a command-line interface to make it super easy to build custom machine learning models using AutoML. This release also adds a new preview of the Visual Studio Model Builder extension that supports image classification training from a graphical user interface. A preview of Jupyter support for writing C# and F# code for ML.NET scenarios is also available.

Azure Arc extends Azure services across multiple infrastructures

One of the most important announcements of Microsoft Ignite 2019 was Azure Arc. This new service enables Azure services anywhere and extends Azure management to any infrastructure, including those of competitors like AWS and Google Cloud. With Azure Arc, customers can use Azure's cloud management experience for their own servers (Linux and Windows Server) and Kubernetes clusters by extending Azure management across environments. Enterprises can also manage and govern resources at scale with powerful scripting, tools, the Azure Portal and API, and Azure Lighthouse.

Announcing Azure Synapse Analytics

Azure Synapse Analytics builds upon Microsoft's previous offering, Azure SQL Data Warehouse. This analytics service combines traditional data warehousing with big data analytics, bringing serverless on-demand or provisioned resources at scale. Using Azure Synapse Analytics, customers can ingest, prepare, manage, and serve data for immediate BI and machine learning applications within the same service.

Safely share your big data with Azure Data Share, now generally available

As the name suggests, Azure Data Share allows you to safely share your big data with other organizations. Organizations can share data stored in their data lakes with third-party organizations outside their Azure tenancy. Data providers wanting to share data with their customers and partners can easily create a new share, populate it with data residing in a variety of stores, and add recipients. It employs Azure security measures such as access controls, authentication, and encryption to protect your data. Azure Data Share supports sharing from SQL Data Warehouse and SQL DB, in addition to Blob storage and ADLS (for snapshot-based sharing). 
It also supports in-place sharing for Azure Data Explorer (in preview).

Azure Quantum to be made available in private preview

Microsoft has been working on quantum computing for some time now. At Ignite, Microsoft announced that it will be launching Azure Quantum in private preview in the coming months. Azure Quantum is a full-stack, open cloud ecosystem that will bring quantum computing to developers and organizations. Azure Quantum will assemble quantum solutions, software, and hardware from across the industry in a single, familiar experience in Azure. Through Azure Quantum, you can learn quantum computing through a series of tools and learning tutorials, like the quantum katas, and developers can write programs with Q# and the Quantum Development Kit (QDK).

Microsoft Ignite 2019 organizers have released an 88-page document detailing all 175 announcements, which you can access here. You can also view the conference keynote delivered by Satya Nadella on YouTube, as well as Microsoft Ignite's official blog.

Facebook mandates Visual Studio Code as default development environment and partners with Microsoft for remote development extensions
Exploring .Net Core 3.0 components with Mark J. Price, a Microsoft specialist
Yubico reveals Biometric YubiKey at Microsoft Ignite
Microsoft announces .NET Jupyter Notebooks
Read more
  • 0
  • 0
  • 5090

article-image-4-predictions-by-richard-feldman-on-the-future-of-the-web-typescript-webassembly-and-more
Bhagyashree R
26 Nov 2019
8 min read
Save for later

4 predictions by Richard Feldman on the future of the web: TypeScript, WebAssembly, and more

Bhagyashree R
26 Nov 2019
8 min read
At ReactiveConf 2019, Richard Feldman, author of Elm in Action and creator of elm-css, made four predictions about what the future of web development will look like by the end of 2020 and 2025. ReactiveConf 2019 was a three-day functional programming event that ran from October 30 to November 1 in Prague. The event hosted a number of great talks sharing the latest global trends in web and mobile development; among the topics covered this year were PWAs, optimizations, security, visualizations, accessibility, and diversity.

Predicting the future of the web is about safer bets than about trends

Feldman started out by asking a question that developers often come across: which technology stack should you choose for your next project? Previously, the common advice was to go for technologies that are "boring" or mature instead of the latest and shiniest ones. Going by this advice, Feldman and his team chose the technology that had the biggest library ecosystem, was the most mature part of the LAMP stack, and was adopted by many successful companies: Perl. Since then, however, Perl has gradually lost its popularity. The lesson that Feldman learned here was that "any technology that we choose, no matter how popular, how mainstream, how much traction it got today, you are still making a bet." He says that predicting what the future of current technologies will look like, and following that prediction, is safer than blindly accepting what everyone else is doing. After setting up the premise, Feldman moved on to sharing his predictions:

Prediction 1: "TypeScript takes over the JS world"

Back in 2012, Anders Hejlsberg, the original designer of C#, Delphi, and Turbo Pascal, came up with another programming language called TypeScript. This language was introduced as a "superset" of JavaScript that would help developers build JavaScript apps that scale. Some of the positives that this language brought to JavaScript development were excellent tooling enabled by static typing, self-documenting code, continuous feedback from autocomplete, and more. Since its introduction, TypeScript has seen huge adoption. Almost all the big frontend frameworks such as React, Angular and Vue have extensive TypeScript support. More and more JavaScript developers and framework authors are taking advantage of the excellent tooling and other benefits it provides. Its latest release, TypeScript 3.7, includes much-awaited features like assertion functions, recursive type aliases, top-level await, nullish coalescing, and optional chaining (a short example of the last two appears below).

[box type="shadow" align="" class="" width=""] Further Learning If you are interested in building with TypeScript and its latest features, check out our book, Learn TypeScript 3 by Building Web Applications by Sebastien Dubois and Alexis Georges. This book covers the very basics to the more advanced concepts while explaining many design patterns, techniques, frameworks, libraries, and tools along the way. You will learn a ton about modern web frameworks like Angular, Vue.js and React, and build cool web applications using those. It also covers modern front-end development tooling such as Node.js, npm, yarn, Webpack, Parcel, Jest, and many others. [/box]

Despite its popularity, not everyone is using TypeScript. Along with requiring more verbose code, it is "unsound" by design and can give a false sense of security in some instances, Feldman shared. So, there are people who like TypeScript and there are people who don't. 
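To make two of the TypeScript 3.7 features mentioned above concrete, optional chaining (?.) and nullish coalescing (??), here is a small sketch with made-up types:

// Optional chaining and nullish coalescing (the User type and data
// here are illustrative, not from Feldman's talk).
interface User {
  profile?: { displayName?: string };
}

const user: User = {};

// Optional chaining: yields undefined if profile is missing, instead
// of throwing "cannot read property of undefined".
const name = user.profile?.displayName;

// Nullish coalescing: falls back only when the value is null/undefined
// (unlike ||, which would also discard "" and 0).
console.log(name ?? "anonymous"); // "anonymous"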
The most telling factor for predicting its future is how it is working out for the teams actually using it. Feldman said, "I hear a lot of teams saying we are trying TypeScript, we have used TypeScript, or we are using TypeScript. I hear almost no teams saying we tried TypeScript and then went back to JavaScript."

[box type="shadow" align="" class="" width=""] Feldman predicted that by the end of 2020, TypeScript will be the most common choice for new commercial JavaScript projects. By the end of 2025, he predicted, there will be more people writing TypeScript on a daily basis than people writing vanilla JavaScript. [/box]

Prediction 2: “WebAssembly is going to expand the web app pie”

First announced in 2015, WebAssembly is a low-level, assembly-like language for the browser with a compact binary format that runs at near-native speed. It is also a compilation target for high-level languages including C/C++ and Rust. This "closer to the metal" property enables a number of computationally intensive use cases on the web, including games, media editing, speech synthesis, and client-side computer vision, among others.

Start your WebAssembly journey with our book Hands-On Game Development with WebAssembly by Rick Battagline. This book introduces web and game devs to the world of WebAssembly by walking through the development of a retro arcade game.

WebAssembly is designed to work alongside JavaScript, which means you can call WebAssembly modules from JavaScript code (a small sketch of this interop appears at the end of this section). Though it can be used to improve the performance of JavaScript apps and libraries, Feldman doubts that this will be the major way developers use it in the future, because the existing performance of JavaScript is generally accepted, and promising some percentage of improvement in speed is not going to be a game-changer for WebAssembly.

Instead, Feldman believes that WebAssembly will enable browsers to compete with app stores and installers. Getting users to install an app can be a significant obstacle to adoption. WebAssembly can help distribute native code without code signing, app stores, or development kits, while the web as a delivery platform adds deep linking and other sharing capabilities. He illustrated this with Figma, a collaborative interface design tool built in C++, which users can access just by going to a URL. However, distributing applications built in Rust, C++, or Go on the web does not mean the end of HTML, CSS, and JavaScript: WebAssembly will simply expand what he calls the “web app pie.”

[box type="shadow" align="" class="" width=""]Feldman predicted that by the end of 2020, WebAssembly will not make much difference to the makeup of the web. By the end of 2025, however, we will start to see a niche of heavyweight web apps that are essentially native apps distributed through the browser.[/box]
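As a minimal sketch of that JavaScript/WebAssembly interop, the TypeScript below loads a hypothetical "add.wasm" module and calls an `add` function it exports; the file name and the export are invented for illustration:

```typescript
// Loading and calling a WebAssembly module from the browser.
// "add.wasm" and its exported add(a, b) function are hypothetical.

async function loadAdder(): Promise<(a: number, b: number) => number> {
  // instantiateStreaming compiles and instantiates the module
  // directly from the streaming network response.
  const { instance } = await WebAssembly.instantiateStreaming(fetch("add.wasm"));
  return instance.exports.add as (a: number, b: number) => number;
}

loadAdder().then((add) => {
  console.log(add(2, 3)); // 5, computed inside the WebAssembly module
});
```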
Prediction 3: “npm lasts, surviving further problems”

In recent years, developers have witnessed and survived quite a few npm disasters. In 2016, a developer unpublished more than 250 npm-managed modules, which affected Node, Babel, and thousands of other projects. Then in 2018 came the event-stream incident, in which an ill-intentioned user took ownership of the widely-used package through social engineering and infected it with malicious code. Another problem with npm is that it allows thousands upon thousands of packages to execute arbitrary code through the “postinstall” hook in package.json (a hypothetical example of such a hook appears at the end of this section).

Feldman recommends disabling “postinstall” and “preinstall” scripts with the following command:

npm config set ignore-scripts true

We are also seeing some alternatives to npm. Feldman mentioned Entropic, a federated package registry with a new CLI, introduced by the former CTO of npm, C J Silverio. Feldman believes that despite these alternatives, and despite financial, security, or other problems, developers will continue to use npm because of its strong network effects.

[box type="shadow" align="" class="" width=""]Drawing from these events, Feldman predicted that by the end of 2020, we can expect one more security incident. By the end of 2025, he predicts we might see at least one malicious npm package infecting many developers’ machines.[/box]
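As a hypothetical illustration of the attack surface Feldman described, the package.json fragment below runs a script automatically whenever the package is installed; the package name and script are invented, and the ignore-scripts setting shown above prevents such hooks from running:

```json
{
  "name": "some-dependency",
  "version": "1.0.0",
  "scripts": {
    "postinstall": "node ./collect-and-upload.js"
  }
}
```

Because this hook runs with the installing user's permissions, a single compromised package anywhere in a dependency tree can execute arbitrary code on a developer's machine.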
Prediction 4: "JS alternatives stay niche, but age well"

When it comes to JavaScript alternatives, there are two options: JS dialects and non-JS dialects. JS dialects include TypeScript, Dart, and CoffeeScript, among others, whereas non-JS dialects such as ClojureScript, ReasonML, and Elm provide a different experience than writing JavaScript.

Representing the Elm core team at the event, Feldman listed a few reasons why developers should try Elm: it renders faster and generates smaller builds than most top JS frameworks, it almost never crashes, it has its own package ecosystem, and it is often praised for its very detailed error messages. After sharing the benefits of Elm, Feldman concluded that JavaScript alternatives will stay niche, but age well. This essentially means that people who have chosen these alternatives and are happy with them will continue to use them regardless of the popularity of TypeScript.

[box type="shadow" align="" class="" width=""]By the end of 2020, compile-to-JS languages will continue to grow, but not as fast as TypeScript. By the end of 2025, non-JS dialects will have aged well, although TypeScript will still be more popular.[/box]

Want to add TypeScript to your skillset? Check out our book, Learn TypeScript 3 by Building Web Applications by Sebastien Dubois and Alexis Georges. It is a comprehensive guide that teaches you how to wisely use the latest features in TypeScript 3. You will learn how to build web applications with Angular, Vue.js, and React and use modern front-end development tooling such as Node.js, npm, yarn, Webpack, Parcel, Jest, and many others.

Microsoft releases TypeScript 3.7 with much-awaited features like Optional Chaining, Assertion functions and more

Microsoft introduces Static TypeScript, as an alternative to embedded interpreters, for programming MCU-based devices

An introduction to TypeScript types for ASP.NET core [Tutorial]