
Tech Guides


Founder & CEO of Odoo, Fabien Pinckaers discusses the new Odoo 13 framework

Vincy Davis
04 Nov 2019
6 min read
Odoo, formerly known as OpenERP (Enterprise Resource Planning), is a popular open source business application development platform. It comes with many features, like a powerful GUI, performance optimization, integrated in-app purchase features, and more. Companies use it to manage and organize workloads such as materials and warehouse management, human resources, finance, accounting, sales, and many other enterprise functions. With a fast-growing community, Odoo is used by companies of all sizes.

At the Odoo Experience 2019 event conducted earlier this month, the Odoo team announced the release of Odoo 13, the latest version of its all-in-one business software. This release contains an abundance of major and minor improvements, including new features like a sales coupons and promotions module, MRP subcontracting, a website form builder, a skill management module, and more. At the event, founder and CEO of Odoo, Fabien Pinckaers, explained the many concepts behind the new Odoo framework, which he says is one of the best improvements in Odoo 13.

New to Odoo?
If you are a beginner in Odoo, read our book Working with Odoo 12 - Fourth Edition, written by Greg Moss, to learn how to start a new company database in Odoo and to understand the basics of Odoo sales management. You can also master customer relationship management in Odoo for setting up a modern business environment. The book also takes you through the OpenChatter feature with notes and messages associated with Odoo documents, and shows how to use Odoo's API to integrate with other applications.

The Odoo 13 framework is also called an in-memory ORM, because far more of its work now happens in memory than before. In operational measurements it runs, on average, about 4.5 times faster than earlier versions of Odoo.

Key features of the Odoo 13 framework

Simplified cache process
Pinckaers says that in the new framework they have simplified the cache: stored fields now need only a single value, while a non-stored field's computed value depends on the keywords present in the context (e.g. translatable and context). He added that in version 12 most fields did not need a cache, so there was only one global cache, with an exception for fields that were context-dependent. It also had a new attribute for multi-line inventory where the projects depend on "way roads". The difficulty in that version was that when creating a field, users had to select the cache value, and if the context of the field changed, they had to specify the new cache value again. This step is made simpler in version 13, as the user now needs to specify the value of the cache only once. "It seems simple but actually in the business code we're passing it to all the fields at the same time," asserts Pinckaers. This simplified cache process also reduces the code's memory accesses.

In-memory updates
In earlier versions, while specifying various field values, users had to update each validation value every time, making it a time-consuming process. To overcome this problem, the Odoo team has moved all data transactions into memory in the new version. Consequently, in Odoo 13, when a field value is assigned, it is placed in the cache, and when a field value needs to be read, it is taken from the cache itself.
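To make the stored-field caching and in-memory assignment described above concrete, here is a minimal sketch of an Odoo 13 model with a stored computed field. The model and field names are invented for illustration and are not taken from Pinckaers' talk.

```python
from odoo import api, fields, models


class DemoOrder(models.Model):
    _name = 'demo.order'
    _description = 'Illustration of an Odoo 13 stored computed field'

    line_ids = fields.One2many('demo.order.line', 'order_id')
    # Stored computed field: Odoo 13 keeps a single cached value per record
    # and marks it for recomputation when a dependency changes.
    amount_total = fields.Float(compute='_compute_amount_total', store=True)

    @api.depends('line_ids.price_subtotal')
    def _compute_amount_total(self):
        for order in self:
            # This assignment only updates the in-memory cache; the database
            # write is deferred until the ORM flushes.
            order.amount_total = sum(order.line_ids.mapped('price_subtotal'))


class DemoOrderLine(models.Model):
    _name = 'demo.order.line'
    _description = 'Line records for the illustration above'

    order_id = fields.Many2one('demo.order')
    price_subtotal = fields.Float()
```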
To manage dependencies in Python, Pinckaers demonstrated how users should always:
Use the inverse field instead of an SQL query.
Avoid using SELECT, as the implementation of the compute will read the same object.
When calling create(), set one2many fields to [].

Delaying computed fields for faster transactions
In order to delay computing fields such as line.product_quantity and line.discount, in preceding Odoo versions a user had to compute the dependency value inside every "for line in order" loop. Once the transaction was completed, the values were then recomputed and written back. This process is made easy in Odoo 13, as the user can now mark all the lines for recomputation and use the self.flush() command to compute the values after the transaction is completed. This keeps all the computation in memory. According to Pinckaers, this will help users handling more than 100 customers, as it makes the process much faster and simpler.

Optimize the dependency tree to reduce Python and SQL computations
Pinckaers takes the "change order" example to demonstrate how version 13 of Odoo has a clean dependency tree. If the pricelist of an order is changed, the total cost of the order also changes indirectly, thanks to the indirect dependency between the pricelist ID and the order's total cost field in Odoo 13. In earlier versions, due to the recursive nature of the dependencies, each order line depended on the order ID field, which sometimes required reading more than 100 lines of the list just to get the order ID. In Odoo 13, this prolonged process is replaced by a more optimized dependency tree: the user can now get the order ID directly from the dependency tree, without the extra Python and SQL computations.

Improvements in browse() optimization
The major improvement in Odoo 13's browse() optimization is a mechanism that avoids multiple cache-format conversions. In previous versions, users had to read the SQL query results, convert them to the cache format, and then put them in the cache, so it took three steps just to read the data, making the process tedious. With the latest version, the prefetch mechanism directly stores data that is already in the right format in memory. "But if the format is different, then we have to apply a conversion method to everything. As Python is extremely slow," Pinckaers says, "applying a dictionary that we see from outside the cache" makes the process faster, because a C implementation can be used to convert the data directly into the cache format.

You can watch the full video to see Pinckaers' demonstration of code cleanup and Python optimization. If you want to use Odoo to build enterprise applications and set up the functional requirements for your business, read our book Working with Odoo 12 - Fourth Edition, written by Greg Moss, to learn how to use the MRP module to create, process, and schedule manufacturing and production orders. This book will also give you in-depth knowledge of the business intelligence required in Odoo and its architecture, and will show how to customize Odoo to meet the specific needs of your business.
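Below is a minimal sketch of the "compute in memory, flush once" pattern described above, assuming Odoo 13's recordset flush() API that the talk mentions. The method, model, and field names are hypothetical and build on the earlier sketch.

```python
from odoo import fields, models


class DemoOrderLine(models.Model):
    _inherit = 'demo.order.line'  # hypothetical model from the previous sketch

    discount = fields.Float()  # assumed field, for illustration only


class DemoOrder(models.Model):
    _inherit = 'demo.order'  # hypothetical model from the previous sketch

    def apply_discount(self, percent):
        for line in self.line_ids:
            # Plain assignment: the value lands in the in-memory cache, and any
            # computed fields depending on it are only marked for recomputation.
            line.discount = percent
        # Push the pending writes and recomputations to PostgreSQL in one batch
        # before reading the table with raw SQL.
        self.flush()
        self.env.cr.execute(
            "SELECT id, amount_total FROM demo_order WHERE id = %s", (self.id,)
        )
        return self.env.cr.fetchone()
```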
Creating views in Odoo 12 – List, Form, Search [Tutorial]
How to set up Odoo as a system service [Tutorial]
Handle Odoo application data with ORM API [Tutorial]
Implement an effective CRM system in Odoo 11 [Tutorial]
"Everybody can benefit from adopting Odoo, whether you're a small start-up or a giant tech company" – An interview with Odoo community hero, Yenthe Van Ginneken


What is the history behind C Programming and Unix?

Packt Editorial Staff
17 Oct 2019
9 min read
If you think C programming and Unix are unrelated, you are making a big mistake. Back in the 1970s and 1980s, if the Unix engineers at Bell Labs had decided to use another programming language instead of C to develop a new version of Unix, we would be talking about that language today. The relationship between the two is simple: Unix is the first operating system implemented in a high-level programming language, C, and C got its fame and power from Unix. Of course, the statement that C is a high-level programming language is not true in today's world.

This article is an excerpt from the book Extreme C by Kamran Amini. Kamran teaches you to use C's power and to apply object-oriented design principles to your procedural C code. You will gain new insight into algorithm design, functions, and structures. You'll also understand how C works with Unix, how to implement OO principles in C, and what multiprocessing is. In this article, we are going to look at the history of C programming and Unix.

Multics OS and Unix
Even before Unix, we had the Multics OS, a joint project launched in 1964 and led by MIT, General Electric, and Bell Labs. Multics OS was a huge success because it introduced the world to a real working and secure operating system. Multics was installed everywhere, from universities to government sites. Fast-forward to 2019, and every operating system today borrows some ideas from Multics indirectly, through Unix.

In 1969, for various reasons that we will talk about shortly, some people at Bell Labs, especially the pioneers of Unix such as Ken Thompson and Dennis Ritchie, gave up on Multics and, subsequently, Bell Labs quit the Multics project. But this was not the end for Bell Labs; they went on to design a simpler and more efficient operating system, which was called Unix.

It is worthwhile to compare the Multics and Unix operating systems. Here are the similarities and differences found while comparing them:
Both follow the onion architecture as their internal structure, with the same rings, especially the kernel and shell rings. Therefore, programmers could write their own programs on top of the shell ring. Also, Unix and Multics expose a list of utility programs, such as ls and pwd. In the following sections, we will explain the various rings found in the Unix architecture.
Multics needed expensive resources and machines to be able to work. It was not possible to install it on ordinary commodity machines, and that was one of the main drawbacks that let Unix thrive and finally made Multics obsolete after about 30 years.
Multics was complex by design. This was the reason behind the frustration of Bell Labs employees and, as we said earlier, the reason why they left the project. Unix, by contrast, tried to remain simple; in its first version, it was not even multitasking or multi-user!

You can read more about Unix and Multics online and follow the events that happened in that era. Both were successful projects, but Unix has been able to thrive and survive to this day. It is worth mentioning that Bell Labs has been working on a new distributed operating system called Plan 9, which is based on the Unix project.
Figure 1-1: Plan 9 from Bell Labs

Suffice it to say that Unix was a simplification of the ideas and innovations that Multics presented; it was not something new, and so I can stop talking about Unix and Multics history at this point. So far, there are no traces of C in the story, because it had not been invented yet. The first versions of Unix were written purely in assembly language; only in 1973 was Unix version 4 written using C. Now we are getting close to discussing C itself, but before that, we must talk about BCPL and B, because they were the gateway to C.

About BCPL and B
BCPL was created by Martin Richards as a programming language for writing compilers. The people at Bell Labs were introduced to the language while working on the Multics project. After quitting Multics, Bell Labs first started writing Unix in assembly language. That's because, back then, it was an anti-pattern to develop an operating system using a programming language other than assembly. For instance, it was considered strange that the Multics project was using PL/I to develop Multics but, by doing so, it showed that operating systems could be successfully written in a higher-level language than assembly. As a result, Multics became the main inspiration for using another language to develop Unix.

The attempt to write operating system modules in a programming language other than assembly stayed with Ken Thompson and Dennis Ritchie at Bell Labs. They tried to use BCPL, but it turned out that they needed to modify the language to be able to use it on minicomputers such as the DEC PDP-7. These changes led to the B programming language. While we won't go too deep into the properties of the B language here, you can read more about it and the way it was developed at the following links: The B Programming Language, and The Development of the C Language. Dennis Ritchie authored the latter article himself, and it is a good explanation of the development of the C programming language that also shares valuable information about B and its characteristics.

B had its own shortcomings as a system programming language. B was typeless, which meant it was only possible to work with a word (not a byte) in each operation. This made it hard to use the language on machines with a different word length. Therefore, over time, further modifications were made to the language until they led to the NB (New B) language, which later derived structures from the B language; these structures were typeless in B, but they became typed in C. Finally, in 1973, the fourth version of Unix could be developed using C, though it still contained a lot of assembly code. In the next section, we talk about the differences between B and C, and why C is a top-notch modern system programming language for writing an operating system.

The way to C programming and Unix
I do not think we can find anyone better than Dennis Ritchie himself to explain why C was invented after the difficulties encountered with B. In this section, we are going to list the causes that prompted Dennis Ritchie, Ken Thompson, and others to create a new programming language instead of using B for writing Unix.

Limitations of the B programming language:
B could only work with words in memory: every single operation had to be performed in terms of words. Back then, having a programming language that was able to work with bytes was a dream.
This was because of the available hardware at the time, which addressed memory in a word-based scheme.
B was typeless: more accurately, B was a single-type language. All variables were of the same type: word. So, if you had a string of 20 characters (21 bytes including the null character at the end), you had to divide it up into words and store it in more than one variable. For example, if a word was 4 bytes, you would need 6 variables to store the 21 bytes of the string.
Being typeless meant that byte-oriented algorithms, such as string manipulation algorithms, could not be written efficiently in B: because B used memory words rather than bytes, it could not be used efficiently to manage multi-byte data types such as integers and characters.
B didn't support floating-point operations: at the time, these operations were becoming increasingly available on new hardware, but there was no support for them in the B language.
With the availability of machines such as the PDP-11, which could address memory on a byte basis, B showed itself to be inefficient at addressing bytes of memory: this became even clearer with B pointers, which could only address words in memory, not bytes. In other words, for a program wanting to access a specific byte or byte range in memory, extra computations had to be done to calculate the corresponding word index.

The difficulties with B, particularly its slow development and execution on the machines available at the time, led Dennis Ritchie to develop a new language. This new language was called NB, or New B, at first, but it eventually turned into C. This newly developed language tried to cover the difficulties and flaws of B and became the de facto programming language for system development, instead of assembly. In less than 10 years, newer versions of Unix were completely written in C, and all newer operating systems based on Unix became tied to C and its crucial presence in the system.

As you can see, C was not born as an ordinary programming language; it was designed with a complete set of requirements in mind. You may consider languages such as Java, Python, and Ruby to be higher-level languages, but they cannot be considered direct competitors, as they are different and serve different purposes. For instance, you cannot write a device driver or a kernel module with Java or Python; they themselves have been built on top of a layer written in C. Unlike some programming languages, C is standardized by ISO, and if a certain feature is required in the future, the standard can be modified to support it.

To summarize
In this article, we began with the relationship between Unix and C. Even in non-Unix operating systems, you see traces of a design similar to Unix systems. We also looked at the history of C and explained how Unix appeared from the Multics OS and how C was derived from the B programming language. The book Extreme C, written by Kamran Amini, will help you make the most of C's low-level control, flexibility, and high performance.

Is Dark an AWS Lambda challenger?
Microsoft mulls replacing C and C++ code with Rust calling it a "modern safer system programming language" with great memory safety features
Is Scala 3.0 a new language altogether? Martin Odersky, its designer, says "yes and no"


Is serverless architecture a good choice for app development?

Mehul Rajput
11 Oct 2019
6 min read
App development has evolved rapidly in recent years. With new demands and expectations from businesses and users, trends like cloud have helped developers be more productive and build faster, more reliable, and more secure applications. But there's no end to evolution, and serverless is arguably the next step for application development. But is a serverless architecture the right choice?

What is a serverless architecture?
When you hear the word serverless, you might assume that it means no servers. In fact, it refers to eliminating the need to manage servers yourself; that responsibility shifts to your cloud provider. Simply put, the constituent parts of an application are divided between multiple servers, with no need for the application owner/manager to create or manage the infrastructure that supports it. Instead of running off a server, a serverless application runs off functions. These are essentially actions that are fired off to ensure things happen within the application. This is where the phrase "function-as-a-service", or FaaS (another way of describing serverless), comes from. A recent report projects that the FaaS market will grow at around 32.7% to reach 7.72 billion US dollars by 2021.

Is serverless architecture a good choice for app development?
Now that we've established what serverless actually means, we must get to business. Is serverless architecture the right choice for app development? Well, it can work either way; it has positives as well as negatives. Here are some of them.

Using serverless for app development: the positives
There are several reasons why a serverless architecture can be good for app development. Some of them are discussed below:
Decreasing costs
Easier to service
Scalability
Third-party services

Decreasing costs
The most effective benefit of a serverless architecture in an app development process is that it reduces the costs of the work. It's typically less expensive than a 'traditional' server architecture, because with hardware servers you have to pay for many different things that might not be required. With serverless, you won't have to pay for regular maintenance, premises, electricity, or maintenance staff. Hence, you can save a considerable amount of money and use it to improve app quality instead.

Easier to service
It stands to reason that when the owner or app manager does not have to manage the server themselves, and a machine can do this job, it won't be as challenging to keep the service accessible. First, it makes the job more comfortable because it does not require supervision. Second, you will not have to spend time on it; instead, you can use that time for productive work such as product development. Third, the service provided by this technology is reliable, so you can use it without much fear.

Scalability
Another interestingly useful advantage of serverless architecture in app development is scalability. So, what is scalability? It is the ability of a system to handle an extra amount of work by adding resources: the capability of an app or product to continue to work appropriately, without disturbance, when it changes in size or volume to meet users' needs. A serverless architecture acts as the resource that is added to the system to handle any work that has piled up.
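To make the "functions, not servers" idea above concrete, here is a minimal sketch of an AWS Lambda-style Python handler. The event shape and the "name" field are hypothetical, chosen only to show that the unit of deployment is a single function the platform invokes on demand.

```python
import json


def handler(event, context):
    # The platform fires this function in response to an event (an HTTP
    # request, a queue message, a file upload, ...); no server process is
    # created or managed by the application owner.
    name = event.get("name", "world")  # hypothetical event field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```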
Third-party services
Another useful feature of serverless architecture is that it lets you use third-party services. Your app can use any third-party service it requires beyond what you already have, which reduces the effort needed to build the app's backend architecture. Additionally, the third party might provide better services than you could build yourself. Hence, serverless architecture proves its worth by giving you easy access to third-party capabilities.

Serverless for app development: the negatives
Now that we know the advantages of a serverless architecture, it's important to note that it can also bring some limitations and disadvantages. These are:
Time restrictions
Vendor lock-in
Multi-tenancy
Debugging is not possible

Time restrictions
As mentioned before, serverless architecture works on FaaS rules and has a time limit for running a function; this limit is typically around 300 seconds. When the limit is reached, the function is stopped. Therefore, for more complex functions that require more time to execute, the FaaS approach may not be a good choice. This problem can often be tackled by splitting a task into several simpler functions, if the task allows it; otherwise, time restrictions like these can cause great difficulty.

Vendor lock-in
We have discussed that by using serverless architecture we can use third-party services. This can also go the wrong way and cause vendor lock-in. If, for any reason, you decide to shift to a new service provider, in most cases services will be fulfilled in a different way. That means the productivity gains you expected from serverless will be lost, as you will have to adjust and reconfigure the infrastructure to accept the new service.

Multi-tenancy
Multi-tenancy is an increasing problem in serverless architecture. The data of many tenants is kept quite close together, which can create confusion: some data might be exchanged, distributed, or even lost. In turn, this can cause security and reliability issues. A customer could, for example, suddenly produce an extraordinarily high load which would affect other customers' applications.

Debugging is not possible
Debugging isn't possible with serverless. Because the uploaded code runs on the provider's infrastructure, there is no debugging facility where it can be stepped through directly. If you want to know what a function does, you run it and wait for the result; the function can crash, and there is little you can do about it. However, there is a way to mitigate this problem as well: extensive logging. With every step being logged, the chances of errors that cause debugging issues decrease.

Conclusion
Serverless architecture certainly seems impressive in spite of having some limitations. There is no doubt that the viability and success of an architecture depends on the business requirements and, of course, on the technology used. In the same way, serverless can shine if used in the appropriate case. I hope this blog has helped you understand serverless architecture for mobile apps and see both its bright and dark sides.

Author Bio
Mehul Rajput is the CEO and co-founder of Mindinventory, which specializes in Android and iOS app development and provides web and mobile app solutions for businesses from startup to enterprise level.
He is an avid blogger and writes on mobile technologies, mobile apps, app marketing, app development, startups, and business.

What is serverless architecture and why should I be interested?
Introducing numpywren, a system for linear algebra built on a serverless architecture
Serverless Computing 101
Modern Cloud Native architectures: Microservices, Containers, and Serverless – Part 1
Modern Cloud Native architectures: Microservices, Containers, and Serverless – Part 2


How do you become a developer advocate?

Packt Editorial Staff
11 Oct 2019
8 min read
Developer advocates are people with a strong technical background whose job is to help developers be successful with a platform or technology. They act as a bridge between the engineering team and the developer community. A developer advocate not only fills the gap between developers and the platform but also looks after the growth of developers in terms of traction and progress on their projects. Developer advocacy is broadly referred to as "developer relations". Those who practice developer advocacy have fallen into the profession in one way or another. As the processes and theories in the world of programming have evolved over the years, so has the idea of developer advocacy. This is the result of developer advocates working in the wild on their own initiative.

This article is an excerpt from the book Developer, Advocate! by Geertjan Wielenga. The book serves as a rallying cry to inspire and motivate tech enthusiasts and burgeoning developer advocates to take their first steps within the tech community. The question then arises: how does one become a developer advocate? Here are some experiences shared by well-known developer advocates on how they started the journey that landed them in this role.

Is developer advocacy taught in universities?
Bruno Borges, Principal Product Manager at Microsoft, says that for most developer advocates or developer relations personnel, it was something that just happened. Developer advocacy is not a discipline that is taught in universities; there's no training specifically for this. Most often, somebody will come to realize that what they already do is developer relations. This is a discipline that is a conjunction of several other roles: software engineering, product management, and marketing. I started as a software engineer and then I became a product manager. As a product manager, I was engaged with marketing divisions and sales divisions directly on a weekly basis. Maybe in some companies, sales, marketing, and product management are pillars that are not needed. I think it might vary. But in my opinion, those pillars are essential for doing a proper developer relations job. Trying to aim for those pillars is a great foundation. Just as in computer science when we go to college for four years, sometimes we don't use some of that background, but it gives us a good foundation. From outsourcing companies that just built business software for companies, I then went to vendor companies. That's where I landed as a person helping users to take full advantage of the software that they needed to build their own solutions. That process is, ideally, what I see happening to others.

The journey of a regular tech enthusiast to a developer advocate
Ivar Grimstad, a developer advocate at the Eclipse Foundation, speaks about his journey from being a regular tech enthusiast attending conferences to speaking at conferences as an advocate for his company. Ivar Grimstad says, I have attended many different conferences in my professional life and I always really enjoyed going to them. After some years of regularly attending conferences, I came to the point of thinking, "That guy isn't saying anything that I couldn't say. Why am I not up there?" I just wanted to try speaking, so I started submitting abstracts. I already gave talks at meetups locally, but I began feeling comfortable enough to approach conferences. I continued submitting abstracts until I got accepted.
As it turned out, while I was becoming interested in speaking, my company was struggling to raise its profile. Nobody, even in Sweden, knew what we did. So, my company was super happy for any publicity it could get. I could provide it with that by just going out and talking about tech. It didn't have to be related to anything we did; I just had to be there with the company name on the slides. That was good enough in the eyes of my company. After a while, about 50% of my time became dedicated to activities such as speaking at conferences and contributing to open source projects.

Tables turned from being an engineer to becoming a developer advocate
Mark Heckler, a Spring developer and advocate at Pivotal, narrates how the tables turned for him, from university to Pivotal Principal Technologist & Developer Advocate. He says, initially, I was doing full-time engineering work and then presenting on the side. I was occasionally taking a few days here and there to travel to present at events and conferences. I think many people realized that I had this public-facing level of activities that I was doing. I was out there enough that they felt I was either doing this full-time or maybe should be. A good friend of mine reached out and said, "I know you're doing this anyway, so how would you like to make this your official role?" That sounded pretty great, so I interviewed, and I was offered a full-time gig doing, essentially, what I was already doing in my spare time.

A hobby turned out to be a profession
Matt Raible, a developer advocate at Okta, has worked as an independent consultant for 20 years and did advocacy as a side hobby. He talks about his experience as a consultant and walks through his progress and development. I started a blog in 2002 and wrote about Java a lot. This was before Stack Overflow, so I used Struts and Java EE. I posted my questions, which you would now post on Stack Overflow, on that blog with stack traces, and people would find them and help. It was a collaborative community. I've always done speaking at conferences on the side. I started working for Stormpath two years ago, as a contractor part-time, and I was working at Computer Associates at the same time. I was doing Java in the morning at Stormpath and I was doing JavaScript in the afternoon at Computer Associates. I really liked the people I was working with at Stormpath and they tried to hire me full-time. I told them to make me an offer that I couldn't refuse, and they said, "We don't know what that is!" I wanted to be able to blog and speak at conferences, so I spent a month coming up with my dream job. Stormpath wanted me to be its Java lead. The problem was that I like Java, but it's not my favorite thing. I tend to do more UI work. The opportunity went away for a month and then I said, "There's a way to make this work! Can I do Java and JavaScript?" Stormpath agreed that instead of being more of a technical leader and owning the Java SDK, I could be one of its advocates. There were a few other people on board in the advocacy team. Six months later, Stormpath got bought out by Okta. As an independent consultant, I was used to switching jobs every six months, but I didn't expect that to happen once I went full-time. That's how I ended up at Okta!
Developer advocacy can be done by weighing the highs and lows of the tech world
Scott Davis, a Principal Engineer at Thoughtworks, was a classroom instructor, teaching software classes to business professionals, before becoming a developer advocate. As per him, tech really is a world of strengths and weaknesses. Advocacy, I think, is where you honestly say, "If we balance out the pluses and the minuses, I'm going to send you down the path where there are more strengths than weaknesses. But I also want to make sure that you are aware of the sharp, pointy edges that might nick you along the way." I spent eight years in the classroom as a software instructor and that has really informed my entire career. It's one thing to sit down and kind of understand how something works when you're cowboy coding on your own. It's another thing altogether when you're standing up in front of an audience of tens, or hundreds, or thousands of people.

Discover how developer advocates are putting developer interests at the heart of the software industry in companies including Microsoft and Google with Developer, Advocate! by Geertjan Wielenga. This book is a collection of in-depth conversations with leading developer advocates that reveal the world of developer relations today.

6 reasons why employers should pay for their developers' training and learning resources
"Developers need to say no" – Elliot Alderson on the FaceApp controversy in a BONUS podcast episode [Podcast]
GitHub has blocked an Iranian software developer's account
How do AWS developers manage Web apps?
Are you looking at transitioning from being a developer to manager? Here are some leadership roles to consider


Should you use Bootstrap or Material Design for your next web or app development project?

Guest Contributor
08 Oct 2019
8 min read
Superior user experience is becoming increasingly important for businesses, as it helps them engage users and boost brand loyalty. Front-end website and app development frameworks, namely Bootstrap and Material Design, empower developers to create websites with a robust structure and advanced functionality, thereby delivering outstanding business solutions and an unbeatable user experience. Both Twitter's Bootstrap and Material Design are used by developers to create functional and high-quality websites and apps. If you are an aspiring front-end developer, here's a direct comparison between the two, so you can choose the one that's better suited for your upcoming project.

Bootstrap
Bootstrap is an open-source, intuitive, and powerful framework used for responsive, mobile-first solutions on the web. For several years, Bootstrap has helped developers create splendid mobile-ready front-end websites. In fact, Bootstrap is the most popular CSS framework, as it's easy to learn and offers a consistent design through reusable components. Let's dive deeper into the pros and cons of Bootstrap.

Pros

High speed of development
If you have limited time for website or app development, Bootstrap is an ideal choice. It offers ready-made blocks of code that can get you started in no time, so you don't have to start coding from scratch. Bootstrap also provides ready-made themes, templates, and other resources that can be downloaded and customized to suit your needs, allowing you to create a unique website as quickly as possible.

Bootstrap is mobile first
Since July 1, 2019, Google has used mobile-friendliness as a critical ranking factor for all websites. This is because users prefer sites that are compatible with the screen size of the device they are using; in other words, they prefer responsive sites. Bootstrap is an ideal choice for responsive sites, as it has an excellent fluid grid system and responsive utility classes that make the task at hand easy and quick.

Enjoys strong community support
Bootstrap has a huge number of resources available on its official website and enjoys immense support from the developer community, which helps developers fix issues promptly. At present, Bootstrap is being developed and maintained on GitHub by Mark Otto, currently Principal Design & Brand Architect at GitHub, with nearly 19 thousand commits and 1,087 contributors. The team regularly releases updates to fix any new issues and improve the effectiveness of the framework. For instance, the Bootstrap team is working toward dropping jQuery in favor of regular JavaScript, primarily because jQuery adds 30KB to the webpage size and is tricky to configure with bundlers like Webpack. Similarly, Flexbox is a new feature added to the Bootstrap 4 framework. In fact, Bootstrap version 4 is rich with features, such as a Flexbox-based grid, responsive sizing and floats, auto margins, vertical centering, and new spacing utilities. Further, you will find plenty of websites offering Bootstrap tutorials, a wide collection of themes, templates, plugins, and user interface kits that can be used to suit your taste and the nature of the project.

Cons

All Bootstrap sites look the same
The Twitter team introduced Bootstrap with the objective of helping developers use a standardized interface to create websites within a short time.
However, one of the major drawbacks of this framework is that all websites created with it are highly recognizable as Bootstrap sites. Open Airbnb, Twitter, Apple Music, or Lyft: they all look the same, with bold headlines, rounded sans-serif fonts, and lots of negative space.

Bootstrap sites can be heavy
Bootstrap is notorious for adding unnecessary bloat to websites, as the files generated are huge. This leads to longer loading times and battery-draining issues. Further, if you delete the unused files manually, it defeats the purpose of using the framework. So, if you use this popular front-end UI library in your project, make sure you pay extra attention to page weight and page speed.

May not be suitable for simple websites
Bootstrap may not be the right front-end framework for all types of websites, especially ones that don't need a full-fledged framework. This is because Bootstrap's theme packages are incredibly heavy, with battery-draining scripts. Also, Bootstrap has CSS weighing in at 126KB and 29KB of JavaScript, which can increase the site's loading time. In such cases, Bootstrap alternatives, namely Foundation, Skeleton, Pure, and Semantic UI, are adaptable and lightweight frameworks that can meet your development needs and improve your site's user-friendliness.

Material Design
Compared to Bootstrap, Material Design is harder to customize and learn. This design language was introduced by Google in 2014 with the objective of enhancing the design and user interface of Android apps. The language is quite popular among developers, as it offers a quick and effective way to develop for the web. It includes responsive transitions and animations, lighting and shadow effects, and grid-based layouts. When developing a website or app using Material Design, designers should play to its strengths but be wary of its cons. Let's see why.

Pros

Offers numerous components
Material Design offers numerous components that provide a base design, guidelines, and templates. Developers can build on these to create a suitable website or application for the business. The Material Design documentation offers the necessary information on how to use each component. Moreover, Material Design Lite is quite popular for its customization options; many designers create customized components to take their projects to the next level.

Is compatible across various browsers
Both Bootstrap and Material Design have sound browser compatibility, as they work across most browsers. Material Design supports Angular Material and the React Material user interface, and it also uses the SASS preprocessor.

Doesn't require JavaScript frameworks
Bootstrap depends on jQuery for its JavaScript components. Material Design, however, doesn't need any JavaScript frameworks or libraries to design websites or apps. In fact, the platform provides a Material Design framework that allows developers to create innovative components such as cards and badges.

Cons

The animations and vibrant colors can be distracting
Material Design makes extensive use of animated transitions and vibrant colors and images that help bring the interface to life. However, these animations can adversely affect the human brain's ability to gather information.

It is affiliated with Google
Since Material Design is a Google-promoted framework, Android is its most prominent adopter. Consequently, developers looking to create apps with a platform-independent UX may find it tough to work with Material Design.
However, when Google introduced the language, it had a broad vision for Material Design that encompassed many platforms, including iOS. The tech giant offers several Google Material Design components for iOS that can be used to render interesting effects using a flexible header, standard material colors, typography, and sliding tabs.

Carries performance overhead
Material Design makes extensive use of animations, which carry a lot of overhead. For instance, effects like drop shadows, color fills, and transform/translate transitions can be jerky and unpleasant for regular users.

Wrapping up: should you use Bootstrap or Material Design for your next web or app development project?
Bootstrap is great for responsive, simple, and professional websites. It enjoys immense support and documentation, making it easy for developers to work with. So, if you are working on a project that needs to be completed within a short time, opt for Bootstrap. The framework is mainly focused on creating responsive, functional, and high-quality websites and apps that enhance the user experience. Notice how these websites have used Bootstrap to build responsive and mobile-first sites. (Source: cssreel) (Source: Awwwards)

Material Design, on the other hand, is specific as a design language and great for building websites that focus on appearance, innovative designs, and beautiful animations. You can use Material Design for your portfolio sites, for instance. The framework is pretty detailed and straightforward to use, and helps you create websites with striking effects. Check out how these websites and apps use the customized themes, popups, and buttons of Material Design. (Source: Nimbus 9) (Source: Digital Trends)

What do you think? Which framework works better for you: Bootstrap or Material Design? Let us know in the comments section below.

Author Bio
Gaurav Belani is a Senior SEO and Content Marketing Analyst at The 20 Media, a content marketing agency that specializes in data-driven SEO. He has more than seven years of experience in digital marketing and loves to read and write about AI, machine learning, data science, and other emerging technologies. In his spare time, he enjoys watching movies and listening to music. Connect with him on Twitter and LinkedIn.

Material-UI v4 releases with CSS specificity, Classes boilerplate, migration to Typescript and more
Warp: Rust's new web framework
Learn how to Bootstrap a Spring application [Tutorial]
Bootstrap 5 to replace jQuery with vanilla JavaScript
How to use Bootstrap grid system for responsive website design?


Why is Pentaho 8.3 great for DataOps?

Guest Contributor
07 Oct 2019
6 min read
Announced in July, Pentaho 8.3 is the latest version of the data integration and analytics platform from Hitachi Vantara. Along with new and improved features, this version supports DataOps, a collaborative data management practice that helps customers unlock the full potential of their data.

"DataOps is about having the right data, in the right place, at the right time and the new features in Pentaho 8.3 ensure just that," said John Magee, vice president, Portfolio Marketing, Hitachi Vantara. "Not only do we want to ensure that data is stored at the lowest cost at the right service level, but that data is searchable, accessible and properly governed so actionable insights can be generated and the full economic value of the data is captured."

How Pentaho prevents the loss of data
According to Stewart Bond, research director, Data Integration and Integrity Software, and Chandana Gopal, research director, Business Analytics Solutions, from IDC, "A vast majority of data that is generated today is lost. In fact, only about 2.5% of all data is actually analyzed. The biggest challenge to unlocking the potential that is hidden within data is that it is complicated, siloed and distributed. To be effective, decision makers need to have access to the right data at the right time and with context."

The struggle is how to manage all the incoming data in a way that exposes everyone to what's coming down the pipeline. When data is siloed, there's no guarantee the right people are seeing it and analyzing it. Pentaho is a single platform that helps businesses keep up with data growth in a way that enables real-time data ingestion. With the available data services, you can:
Make data sets immediately available for reports and applications.
Reduce the time needed to create data models.
Improve collaboration between business and IT teams.
Analyze results with embedded machine learning and deep learning models, without knowing how to code them into data pipelines.
Prepare and blend traditional data with big data.
Making all the data more accessible across the board is a key feature of Pentaho that this latest release continues to strengthen.

What's new in Pentaho 8.3?
The latest version of Pentaho includes new features to support DataOps. DataOps shortens the overall cycle time of big data analytics, from the initial origin of the ideas to the creation of the visualization. Pentaho 8.3 is designed to promote easy management of, and collaboration around, data, making the data analytics process much more agile so that data teams can work in sync. Efficiency and effectiveness also increase with DataOps.

Businesses are looking for ways to transform their data digitally and get more value from their massive pools of information. And as data is almost everywhere, and more distributed than ever before, businesses are looking for ways to get key insights from it quickly and easily. This is exactly where Pentaho 8.3 comes into the picture: it accelerates business innovation and agility. Plenty of new and exciting time-saving enhancements have been made to make Pentaho a better and more advanced solution for enterprises, helping companies automate their data management techniques.
Key enhancements in Pentaho 8.3
Each enhancement included with Pentaho 8.3 in some way helps organizations modernize their data management practices and remove friction between data and insight, including:

Improved drag-and-drop pipeline capabilities
These help access and blend data that is hard to reach, providing deeper insights and greater analytic value from enterprise integration. Amazon Web Services (AWS) developers can also now ingest and process streaming data through a visual environment, rather than having to write code that must blend with other data.

Enhanced data visibility
Improved integration with Hitachi Content Platform (HCP), a distributed object storage system designed to support large repositories of content, makes it easier for users to read, write, and update HCP custom metadata. They can also more easily query objects with their system metadata, making data more searchable, governable, and usable for analytics. It's also now easier to trace real-time data from popular protocols like AMQP, JMS, Kafka, and MQTT. Users can also view lineage data from Pentaho within IBM's Information Governance Catalog (IGC), reducing the effort required to govern data.

Expanded multi-cloud support
AWS Redshift bulk load capabilities now automate the process of loading Redshift. This removes the repetitive SQL scripting needed to complete bulk loads and allows users to boost productivity and apply policies and schedules for data onboarding. Also included in this category are updates that address Snowflake connectivity. As one of the leading destinations for cloud warehousing, Snowflake's primary hiccup is when an analytics project wants to include data from other sources. Pentaho 8.3 allows blending, enrichment, and analysis of Snowflake data in conjunction with other sources, including other cloud sources such as the existing Pentaho-supported cloud platforms AWS and Google Cloud.

Pentaho and DataOps
Each of the new capabilities and enhancements in this release of Pentaho is important for current users, but the larger benefit to businesses is its association with DataOps. Emerging as a collaborative data management discipline focused on better communication, integration, and automation of how data flows across an organization, DataOps is a practice embraced more and more often, yet not without its own setbacks. Pentaho 8.3 helps businesses make DataOps a reality without facing the common challenges often associated with data management. According to John Magee, Vice President, Portfolio Marketing at Hitachi, "The new Pentaho 8.3 release provides key capabilities for customers looking to begin their DataOps journey."

Beyond feature enhancements
Looking past the improvements and new features of the latest Pentaho release, it's a good product because of the support it offers its community of users. From forums to webinars to 24/7 support, it not only caters to huge volumes of data on a practical level, but it also doesn't ignore the actual people using the product.

Author Bio
James Warner is a Business Intelligence Analyst with excellent knowledge of Hadoop/big data analysis at NexSoftSys.com.

New MapR Platform 6.0 powers DataOps
DevOps might be the key to your Big Data project success
Bridging the gap between data science and DevOps with DataOps

6 Tips to Prevent Social Engineering

Guest Contributor
03 Oct 2019
10 min read
Social engineering is a tactic where the attacker influences the victim in order to obtain valuable information. Office employees are targeted to reveal confidential data about a corporation, while non-specialists can come under the radar to disclose their credit card information. One might also be threatened that the attacker will hack his or her system if the requested material isn't provided. In this method, the perpetrator can take any form of disguise, but most of the time he or she poses as tech support or as a bank representative. However, this isn't always the case, although the objective is the same: they sniff out the information you conceal from everybody by gaining your trust.

Social engineering ends successfully when the wrongdoer gets to know the victim's weaknesses and then manipulates his trust. Often, the victim shares private information without paying much heed to the person contacting him. Later, the victim is blackmailed: hand over the sensitive data, or face threats of unlawful consequences.

Examples of social engineering attacks
As noted above, the attacker can take any form of disguise, but the most common approaches are described here. The wrongdoers update their methods daily to penetrate systems, so you should be extremely wary of your online security. Always stay alert when providing someone with your private credentials. The listed examples are variations on one another; there are many others as well, but the most common are described below. The purpose of all of them is to manipulate you. As the name states, social engineering is simply how an individual can be tricked into giving up everything to the person who gains his trust.

Phishing attack
Phishing is a malicious attempt to access a person's personal and sensitive information, such as financial credentials. The attacker behind a phishing attack pretends to be an authentic identity or source to fool an individual. This social engineering technique mainly involves email spoofing or instant messaging to the victim. However, it may steer people to insert their sensitive details into a fraudulent website which is designed to look exactly like a legitimate site.

Unwanted tech support
Tech support scams are becoming widespread and can have an industry-wide effect. This tactic involves fraudulent attempts to scare people into thinking that there is something wrong with their device. Attackers behind this scam try to gain money by tricking an individual into paying for an issue which never existed. Offenders usually send you emails or call you to solve issues regarding your system; mostly, they tell you that an update is needed. If you are not wary of this bogus support, you can land yourself in danger: the attacker might ask you to run a command on your system which will leave it unresponsive. This belongs to the branch of social engineering known as scareware. Scareware uses fear and curiosity against humans to either steal information or sell you useless pieces of software. Sometimes it can be harsher and can hold your data hostage unless you pay a hefty amount.

Clickbait technique
The term clickbait refers to the technique of trapping individuals via a fraudulent link with tempting headlines. Cybercriminals take advantage of the fact that most legitimate sites or content also use a similar technique to attract readers or viewers. In this method, the attacker sends you enticing ads related to games, movies, and so on. Clickbait is most often seen on peer-to-peer networking systems with enticing ads.
If you click on a certain clickbait link, an executable command or a suspicious virus can be installed on your system, leading to it being hacked.

Fake email from a trusted person
Another tactic the offender utilizes is sending you an email from your friend's or relative's email address, claiming he/she is in danger. That email ID will have been hacked, and because of this it's most likely you will fall for the attack. The email will state the information you should give so that you can release your contact from the threat.

Pretexting attack
Pretexting is also a common form of social engineering which is used for gaining sensitive and non-sensitive information. The attackers pretext themselves as an authentic entity so that they can access the user's information. Unlike phishing, pretexting creates a false sense of trust with the victim through made-up stories, whereas phishing scams rely on fear and urgency. In some cases, the attack can become intense, such as when the attacker manipulates the victim into carrying out a task which enables them to exploit the structural weaknesses of a firm or organization. An example of this is an attacker masquerading as an employee of your bank to cross-check your credentials. This is by far the most frequent tactic used by offenders.

Sending content to download
The attacker sends you files containing music, movies, games, or documents that appear to be just fine. A newbie on the internet will think how lucky he is to get the stuff he wanted without asking. Little does he know that the files he just downloaded are embedded with viruses.

Tips to Prevent Social Engineering
After understanding the most common examples of social engineering, let us have a look at how you can protect yourself from being manipulated.

1) Don't give up your private information
Would you ever surrender your secret information to a person you don't know? Obviously not. Therefore, do not spill your sensitive information on the web unnecessarily. If you do not recognize the sender of an email, discard it. If you are buying stuff online, only provide your credit card information over a secure HTTPS connection. When an unknown person calls or emails you, think before you submit your data. Attackers want you to speak first and realize later. Remain skeptical whenever a conversation starts digging into your sensitive information, and always think of the consequences before submitting your credentials to an unauthorized person.

2) Enable spam filter
Most email service providers come with spam filters. Any email that is deemed suspicious will automatically be moved to the spam folder. Credible email services detect suspicious links and files that might be harmful and warn you that you download them at your own risk; files with certain extensions are barred from downloading altogether. By enabling the spam feature, you save yourself the work of categorizing emails and are relieved of the tedious task of detecting mistrustful messages. The perpetrators of social engineering will have no door through which to reach you, and your sensitive data will be shielded from attackers.

3) Stay cautious of your password
A pro tip: never reuse the same password across the platforms you log in to. Leave no traces behind and delete all sessions after you are done surfing and browsing.
4) Keep software up to date

Always apply your system's software patches, keep drivers maintained, and keep a close eye on your network firewall. Stay alert when an unknown device connects to your Wi-Fi network, and keep your antivirus up to date. Download content from legitimate sources only and be mindful of the dangers. Hacks often take place when the software the victim is using is out of date: once vulnerabilities are exposed, offenders exploit them to gain access to the system. Regularly updating your software safeguards you from a whole class of dangers, leaving no known backdoors for hackers to abuse.

5) Pay attention to what you do online

Think of the time you ended up with self-replicating files on your PC after clicking a particular ad. Don't want that to happen again? Train yourself not to click on clickbait and scam advertisements, and remember that most lotteries you find online are fake: never provide your financial details there. Carefully inspect the URL of every website you land on. Most scammers copy a website's front page and change the link only slightly, so subtly that the average eye cannot detect the difference, and the user opens the fake site and enters his credentials. Stay alert, and when in doubt compare the address with the one you know to be genuine.
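That comparison can also be made systematic. The sketch below is a hedged illustration (the trusted domain list, the threshold, and the isSuspiciousUrl helper are invented for this example): it parses the hostname out of a URL and flags addresses that are close to, but not exactly, a domain you trust.

```typescript
// Minimal sketch: flag URLs whose hostname is a near-miss of a trusted domain.
// The trusted list and the edit-distance threshold are illustrative; tune them.
const TRUSTED_DOMAINS = ["paypal.com", "amazon.com", "mybank.com"];

// Classic Levenshtein edit distance between two strings.
function editDistance(a: string, b: string): number {
  const dp = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,
        dp[i][j - 1] + 1,
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1)
      );
    }
  }
  return dp[a.length][b.length];
}

function isSuspiciousUrl(raw: string): boolean {
  const host = new URL(raw).hostname.replace(/^www\./, "").toLowerCase();
  if (TRUSTED_DOMAINS.includes(host)) return false; // exact match: fine
  // One or two characters away from a trusted domain is a classic lookalike.
  return TRUSTED_DOMAINS.some((d) => editDistance(host, d) <= 2);
}

console.log(isSuspiciousUrl("https://www.paypa1.com/login")); // true - lookalike
console.log(isSuspiciousUrl("https://www.paypal.com/login")); // false - exact match
```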
6) Remain Skeptical

The solution to most problems is to remain skeptical online. Do not click on spam links, do not open suspicious emails, and pay no heed to messages claiming you have won a lottery or been granted a thousand-dollar check. Stay skeptical to the highest degree: a hacker has little to gain from targeting someone who simply will not engage. Time and again this attitude has kept people safe online, because a user who is not drawn in by suspicious content gives social engineering nothing to work with.

Final Words

All the tips above boil down to one point: healthy doubt is vital for your digital privacy. If you treat your online interactions with a degree of suspicion, you are far better protected from manipulation, and your credit card details and other sensitive information stay safe because you never handed them over in the first place. You inspected the links you visited, discarded suspicious emails, and thereby kept yourself secure. With these habits in place, you have denied social engineering its foothold.

Author Bio

Peter Buttler is a Cybersecurity Journalist and Tech Reporter, currently employed as a Senior Editor at PrivacyEnd. He contributes to a number of online publications, including Infosecurity-magazine, SC Magazine UK, Tripwire, Globalsign, and CSO Australia, among others. Peter covers topics related to online security, big data, IoT, and artificial intelligence. With more than seven years of IT experience, he also holds a Master's degree in cybersecurity and technology. @peter_buttlr

Researchers release a study into Bug Bounty Programs and Responsible Disclosure for ethical hacking in IoT
How has ethical hacking benefited the software industry
10 times ethical hackers spotted a software vulnerability and averted a crisis


How hackers are using Deepfakes to trick people

Guest Contributor
02 Oct 2019
7 min read
Cybersecurity analysts have warned that spoofing using artificial intelligence is within the realm of possibility and that people should be aware of the possibility of getting fooled with such voice or picture-based deepfakes. What is Deepfake? Deepfakes rely on a branch of AI called Generative Adversarial Networks (GANs). It requires two machine learning networks that teach each other with an ongoing feedback loop. The first one takes real content and alters it. Then, the second machine learning network, known as the discriminator, tests the authenticity of the changes. As the machine learning networks keep passing the material back and forth and receiving feedback about it, they get smarter. GANs are still in the early stages, but people expect numerous potential commercial applications. For example, some can convert a single image into different poses. Others can suggest outfits similar to what a celebrity wears in a photo or turn a low-quality picture into a high-resolution snapshot. But, outside of those helpful uses, deepfakes could have sinister purposes. Consider the blowback if a criminal creates a deepfake video of something that would hurt someone's reputation — for instance, a deepfake video of a politician "admitting" to illegal activities, like accepting a bribe. Other instances of this kind of AI that are already possible include cases of misleading spoken dialogue. Then, the lips of someone saying something offensive get placed onto someone else. In one of the best-known examples of Deepfake manipulation, BuzzFeed published a clip now widely known as "ObamaPeele." It combined a video of President Obama with film director Jordan Peele's lips. The result made it seem as if Obama cursed and said things he never would in public. Deepfakes are real enough to cause action The advanced deepfake efforts that cybersecurity analysts warn about rely on AI to create something so real that it causes people to act. For example, in March of 2019, the CEO of a British energy firm received a call from what sounded like his boss. The message was urgent — the executive needed to transfer a large amount of funds to a Hungarian supplier within the hour. Only after the money was sent did it become clear the executive’s boss was never on the line. Instead, cybercriminals had used AI to generate an audio clip that mimicked his boss’s voice. The criminals called the British man and played the clip, convincing him to transfer the funds. The unnamed victim was scammed out of €220,000 — an amount equal to $243,000. Reports indicate it's the first successful hack of its kind, although it's an unusual way for hackers to go about fooling victims. Some analysts point out other hacks like this may have happened but have gone unreported, or perhaps the people involved did not know hackers used this technology. According to Rüdiger Kirsch, a fraud expert at the insurance company that covered the full amount of the claim, this is the first time the insurer dealt with such an instance. The AI technology apparently used to mimic the voice was so authentic that it captured the parent company leader's German accent and the melody of his voice. Deepfakes capitalize on urgency One of the telltale signs of deepfakes and other kinds of spoofing — most of which currently happen online — is a false sense of urgency. For example, lottery scammers emphasize that their victims must send personal details immediately to avoid missing out on their prizes. The deepfake hackers used time constraints to fool this CEO, as well. 
The AI technology on the other end of the phone told the CEO that he needed to send the money to a Hungarian supplier within the hour, and he complied. Even more frighteningly, the deceiving tech was so advanced that hackers used it for several phone calls to the victim. One of the best ways to avoid scams is to get further verification from outside sources, rather than immediately responding to the person engaging with you. For example, if you're at work and get a call or email from someone in accounting who asks for your Social Security number or bank account details to update their records, the safest thing to do is to contact the accounting department yourself and verify the legitimacy of the request. Many online spoofing attempts have spelling or grammatical errors, too. The challenging thing about voice trickery, though, is that those characteristics don't apply: you can only go by what your ears tell you. Since these kinds of attacks are not yet widespread, the safest way to avoid disastrous consequences is to ignore the urgency and take the time you need to verify the requests through other sources.

Hackers can target deepfake victims indefinitely

One of the most impressive things about this AI deepfake case is that it involved more than one phone conversation. The criminals called again after receiving the funds to say that the parent company had sent reimbursement funds to the United Kingdom firm. But they didn't stop there. The CEO received a third call that impersonated the parent company representative again and requested another payment. That time, though, the CEO became suspicious and didn't agree, as the promised reimbursement had not yet come through; moreover, the latest call requesting funds originated from an Austrian phone number. Eventually, the CEO called his boss and discovered the fakery by handling calls from both the real person and the imposter simultaneously. Evidence suggests the hackers used commercially available voice generation software to pull off their attack. However, it is not clear whether the hackers used bots to respond when the victim asked questions of the caller posing as the parent company representative.

Why do deepfakes work so well?

This deepfake is undoubtedly more involved than the emails hackers send out in bulk, hoping to fool some unsuspecting victims. Even those that use company logos, fonts, and familiar phrases are arguably not as realistic as something that mimics a person's voice so well that the victim can't distinguish the fake from the real thing. The novelty of these incidents also makes individuals less aware that they could happen. Although many people receive training that helps them spot some online scams, the curriculum does not yet extend to these advanced deepfake cases. Making the caller someone in a position of power increases the likelihood of compliance, too. Generally, if a person hears a voice on the other end of the phone that they recognize as their superior, they won't question it. Plus, they might worry that any delay in fulfilling the caller's request could be perceived as showing a lack of trust in their boss or an unwillingness to follow orders. You've probably heard people say, "I'll believe it when I see it." But thanks to this emerging deepfake technology, you can't necessarily confirm the authenticity of something by hearing or seeing it. That's an unfortunate development, and it highlights how important it is to investigate further before acting.
That may mean checking facts or sources or getting in touch with superiors directly to verify what they want you to do. Indeed, those extra steps take more time. But, they could save you from getting fooled. Author Bio Kayla Matthews writes about big data, cybersecurity, and technology. You can find her work on The Week, Information Age, KDnuggets and CloudTweaks, or over at ProductivityBytes.com. Media manipulation by Deepfakes and cheap fakes require both AI and social fixes, finds a Data & Society report Terrifyingly realistic Deepfake video of Bill Hader transforming into Tom Cruise is going viral on YouTube Now there is a Deepfake that can animate your face with just your voice and a picture using Temporal GANs


10 times ethical hackers spotted a software vulnerability and averted a crisis

Savia Lobo
30 Sep 2019
12 min read
A rise in multiple cyber-attacks and the lack of knowledge and defenses to tackle them has made it extremely important for companies to use ethical hacking to combat hackers. While Black Hat hackers use their skills for malicious purposes to defraud high-profile companies or personalities, Ethical Hackers or White Hat hackers use the same techniques (penetration testing, different password cracking methods or social engineering) to break into a company’s cyber defense but to help companies fix these vulnerabilities or loose ends to strengthen their systems. Ethical hackers are employed directly by the company’s CTO or the management with a certain level of secrecy without the knowledge of the staff or other cybersecurity teams. Ethical hacking can also be crowdsourced through bug bounty programs (BBP) and via responsible disclosure (RP). There are multiple examples in just the past couple of years where ethical hackers have come to the rescue of software firms to avert a crisis that would have potentially incurred the organizations huge losses and put their product users in harm’s way. 10 instances where ethical hackers saved the day for companies with software vulnerabilities 1. An ethical hacker accessed Homebrew’s GitHub repo in under 30 minutes On 31st July 2018, Eric Holmes, a security researcher reported that he could easily gain access to Homebrew’s GitHub repo. Homebrew is a popular, free and open-source software package management system with well-known packages like node, git, and many more, and also simplifies the installation of software on macOS. Under 30 minutes, Holmes gained access to an exposed GitHub API token that opened commit access to the core Homebrew repo; thus, exposing the entire Homebrew supply chain. On July 31, Holmes first reported this vulnerability to Homebrew’s developer, Mike McQuaid. Following which, McQuaid publicly disclosed the issue on Homebrew blog on August 5, 2018. After receiving the report, within a few hours the credentials had been revoked, replaced and sanitized within Jenkins so they would not be revealed in the future. In a detailed post about the attack invasion on Medium, Eric mentioned that if he were a malicious actor, he could easily make a small unnoticed change to the openssl formulae, placing a backdoor on any machine that installed it. 2. Zimperium zLabs security researcher disclosed a critical vulnerability in multiple high-privileged Android services to Google In mid-2018, Tamir Zahavi-Brunner, Security Researcher at Zimperium zLabs, informed Google of a critical vulnerability affecting multiple privileged Android services. This vulnerability was found in a library, hidl_memory, introduced specifically as part of Project Treble and does not exist in a previous library which does pretty much the same thing. The vulnerability was in a commonly used library affecting many high-privileged services. The hidl_memory comprises of: mHandle (HIDL object which holds file descriptors, mSize (size of the memory to be shared), mName (represents the type of memory). These structures are transferred through Binder in HIDL, where complex objects (like hidl_handle or hidl_string) have their own custom code for writing and reading the data. Transferring structures via 64-bit processes cause no issues, however, this size gets truncated to 32 bit in 32-bit processes, so only the lower 32 bits are used. 
So if a 32-bit process receives a hidl_memory whose size is bigger than UINT32_MAX (0xFFFFFFFF), the actually mapped memory region will be much smaller. Google designated this vulnerability as CVE-2018-9411 and patched it in the July security update (2018-07-01 patch level), including additional patches in the September security update (2018-09-01 patch level). Brunner later published a detailed post explaining technical details of the vulnerability and the exploit, in October 2018. 3. A security researcher revealed a vulnerability in a WordPress plugin that leaked the Twitter account information of users Early this year, on January 17, a French security researcher, Baptiste Robert, popularly known by his online handle, Elliot Alderson found a vulnerability in a WordPress plugin called Social Network Tabs. This vulnerability was assigned with the vulnerability ID- CVE-2018-20555  by MITRE. The plugin leaked a user’s Twitter account info thus exposing the personal details to be compromised. The plugin allowed websites to help users share content on social media sites. Elliot informed Twitter of this vulnerability on December 1, 2018, prompting Twitter to revoke the keys, rendering the accounts safe again. Twitter also emailed the affected users of the security lapse of the WordPress plugin but did not comment on the record when reached. 4. A Google vulnerability researcher revealed an unpatched bug in Windows’ cryptographic library that could take down an entire Windows fleet On June 11, 2019, Tavis Ormandy, a vulnerability researcher at Google, revealed a security issue in SymCrypt, the core cryptographic library for Windows. The vulnerability could take down an entire Windows fleet relatively easily, Ormandy said. He reported the vulnerability on March 13 on Google’s Project Zero site and got a response from Microsoft saying that it would issue a security bulletin and fix for this in the June 11 Patch Tuesday run. Further on June 11, he received a message from Microsoft Security Response Center (MSRC) saying “that the patch won’t ship today and wouldn’t be ready until the July release due to issues found in testing”. Ormandy disclosed the vulnerability a day after the 90-day deadline elapsed. This was in line with Google’s 90 days deadline for fixing or publicly disclosing bugs that its researchers find. 5. Oracle’s critical vulnerability in its WebLogic servers On June 17, this year, Oracle published an out-of-band security update that had a patch to a critical code-execution vulnerability in its WebLogic server. The vulnerability was brought to light when it was reported by the security firm, KnownSec404. The vulnerability tracked as CVE-2019-2729, has received a Common Vulnerability Scoring System score of 9.8 out of 10. The vulnerability was a deserialization attack targeting two Web applications that WebLogic appears to expose to the Internet by default—wls9_async_response and wls-wsat.war. 6. Security flaws in Boeing 787 Crew Information System/Maintenance System (CIS/MS) code can be misused by hackers At the Black Hat 2019, Ruben Santamarta, an IOActive Principal Security Consultant in his presentation said that there were vulnerabilities in the Boeing 787 Dreamliner’s components, which could be misused by hackers. The security flaws were in the code for a component known as a Crew Information Service/Maintenance System. Santamarta identified three networks in the 787, the Open Data Network (ODN), the Isolated Data Network (IDN), and the Common Data Network (CDN). 
Boeing, however, strongly disagreed with Santamarta's findings, saying that such an attack is not possible, and rejected his "claim of having discovered a potential path to pull it off." Santamarta further highlighted a white paper released in September 2018 which mentioned that a publicly accessible Boeing server had been identified using a simple Google search, exposing multiple files. On further analysis, the exposed files were found to contain parts of the firmware running on the Crew Information System/Maintenance System (CIS/MS) and Onboard Networking System (ONS) for the Boeing 787 and 737 models respectively. These included documents, binaries, and configuration files. A Linux-based virtual machine used to allow engineers to access part of Boeing's network was also available. A reader on the blog of Bruce Schneier (a public-interest technologist) argued that Boeing should allow Santamarta's team to conduct a test, for the betterment of the passengers: "I really wish Boeing would just let them test against an actual 787 instead of immediately dismissing it. In the long run, it would work out way better for them, and even the short term PR would probably be a better look." Boeing said in a statement, "Although we do not provide details about our cybersecurity measures and protections for security reasons, Boeing is confident that its airplanes are safe from cyberattack." Boeing says it also consulted with the Federal Aviation Administration and the Department of Homeland Security about Santamarta's attack. While the DHS didn't respond to a request for comment, an FAA spokesperson wrote in a statement to WIRED that it's "satisfied with the manufacturer's assessment of the issue." Santamarta's research, despite Boeing's denials and assurances, should be a reminder that aircraft security is far from a solved area of cybersecurity research. Stefan Savage, a computer science professor at the University of California at San Diego, said, "This is a reminder that planes, like cars, depend on increasingly complex networked computer systems. They don't get to escape the vulnerabilities that come with this." Some companies still find it difficult to embrace unknown researchers finding flaws in their networks. Companies might be wary of ethical hackers given that these people work as freelancers under no contract, potentially raising issues around confidentiality and whether the company's security flaws will remain a secret. And because hackers do not enjoy a positive public image, companies can fail to understand that such research is for their own betterment.

7. Vulnerability in contactless Visa card that can bypass payment limits

On July 29 this year, two security researchers from Positive Technologies, Leigh-Anne Galloway, Cyber Security Resilience Lead, and Tim Yunusov, Head of Banking Security, discovered flaws in Visa contactless cards that can allow hackers to bypass payment limits. The researchers added that the attack was tested with "five major UK banks where it successfully bypassed the UK contactless verification limit of £30 on all tested Visa cards, irrespective of the card terminal". They also warned that this contactless Visa card vulnerability may be possible on cards outside the UK as well. When Forbes asked Visa about this vulnerability, they weren't alarmed by the situation and said they weren't planning on updating their systems anytime soon. "One key limitation of this type of attack is that it requires a physically stolen card that has not yet been reported to the card issuer.
Likewise, the transaction must pass issuer validations and detection protocols. It is not a scalable fraud approach that we typically see criminals employ in the real world,” a Visa spokesperson told Forbes. 8. Mac Zoom Client vulnerability allowed ethical hackers to enable users’ camera On July 9, this year, a security researcher, Jonathan Leitschuh, publicly disclosed a vulnerability in Mac’s Zoom Client that could allow any malicious website to initiate users’ camera and forcibly join a Zoom call without their authority. Around 750,000 companies around the world who use the video conferencing app on their Macs, to conduct day-to-day business activities, were vulnerable. Leitschuh disclosed the issue on March 26 on Google’s Project Zero blog, with a 90-day disclosure policy. He also suggested a ‘quick fix’ which Zoom could have implemented by simply changing their server logic. Zoom took 10 days to confirm the vulnerability and held a meeting about how the vulnerability would be patched only 18 days before the end of the 90-day public disclosure deadline, i.e. June 11th, 2019. A day before the public disclosure, Zoom had only implemented the quick-fix solution. Apple quickly patched the vulnerable component on the same day when Leitschuh disclosed the vulnerability via Twitter (July 9). 9. Vulnerabilities in the PTP protocol of Canon’s EOS 80D DSLR camera allows injection of ransomware At the DefCon27 held this year, Eyal Itkin, a vulnerability researcher at Check Point Software Technologies, revealed vulnerabilities in the Canon EOS 80D DSLR. He demonstrated how vulnerabilities in the Picture Transfer Protocol (PTP) allowed him to infect the DSLR model with ransomware over a rogue WiFi connection. Itkin highlighted six vulnerabilities in the PTP that could easily allow a hacker to infiltrate the DSLRs and inject ransomware and lock the device. This could lead the users to pay ransom to free up their camera and picture files. Itkin’s team informed Canon about the vulnerabilities in their DSLR on March 31, 2019. On August 6, Canon published a security advisory informing users that, “at this point, there have been no confirmed cases of these vulnerabilities being exploited to cause harm” and asking them to take advised measures to ensure safety. 10. Security researcher at DefCon 27 revealed an old Webmin backdoor that allowed unauthenticated attackers to execute commands with root privileges on servers At the DefCon27, a Turkish security researcher, Özkan Mustafa Akkuş presented a zero-day remote code execution vulnerability in Webmin, a web-based system configuration system for Unix-like systems. This vulnerability, tracked as CVE-2019-15107, was found in the Webmin security feature and was present in the password reset page. It allowed an administrator to enforce a password expiration policy for other users’ accounts. It also allowed a remote, unauthenticated attacker to execute arbitrary commands with root privileges on affected servers by simply adding a pipe command (“|”) in the old password field through POST requests. The Webmin team was informed of the vulnerability on August 17th 2019. In response, the exploit code was removed and Webmin version 1.930 created and released to all users. Jamie Cameron, the author of Webmin, in a blog post talked about how and when this backdoor was injected. He revealed that this backdoor was no accident, and was in fact, injected deliberately in the code by a malicious actor. 
He wrote, "Neither of these were accidental bugs – rather, the Webmin source code had been maliciously modified to add a non-obvious vulnerability."

TL;DR: Companies should welcome ethical hackers for their own good

Ethical hackers are an important addition to our cybersecurity ecosystem. They help organizations examine their security systems and spot the small gaps that could compromise the entire organization. One way companies can seek their help is by arranging bug bounty programs, which allow ethical hackers to report vulnerabilities in exchange for rewards that may consist of money or simply recognition. At other times, a white hat hacker may report a vulnerability as part of their own research, which organizations sometimes misread as an attempt to break into their systems, or dismiss because they are confident in their internal security. Organizations should keep their software security up to date by welcoming the additional support these white hat hackers provide in finding undetected vulnerabilities.

Researchers release a study into Bug Bounty Programs and Responsible Disclosure for ethical hacking in IoT
How has ethical hacking benefited the software industry
5 pen testing rules of engagement: What to consider while performing Penetration testing
Social engineering attacks – things to watch out for while online


How has ethical hacking benefited the software industry

Fatema Patrawala
27 Sep 2019
8 min read
In an online world infested with hackers, we need more ethical hackers. But all around the world, hackers have long been portrayed by the media and pop culture as the bad guys. Society is taught to see them as cyber-criminals and outliers who seek to destroy systems, steal data, and take down anything that gets in their way. There is no shortage of news, stories, movies, and television shows that outright villainize the hacker. From the 1995 movie Hackers, to the more recent Blackhat, hackers are often portrayed as outsiders who use their computer skills to inflict harm and commit crime. Read this: Did you know hackers could hijack aeroplane systems by spoofing radio signals? While there have been real-world, damaging events created by cyber-criminals that serve as the inspiration for this negative messaging, it is important to understand that this is only one side of the story. The truth is that while there are plenty of criminals with top-notch hacking and coding skills, there is also a growing and largely overlooked community of ethical (commonly known as white-hat) hackers who work endlessly to help make the online world a better and safer place. To put it lightly, these folks use their cyber superpowers for good, not evil. For example, Linus Torvalds, the creator of Linux was a hacker, as was Tim Berners-Lee, the man behind the World Wide Web. The list is long for the same reason the list of hackers turned coders is long – they all saw better ways of doing things. What is ethical hacking? According to the EC-Council, an ethical hacker is “an individual who is usually employed with an organization and who can be trusted to undertake an attempt to penetrate networks and/or computer systems using the same methods and techniques as a malicious hacker.” Listen: We discuss what it means to be a hacker with Adrian Pruteanu [Podcast] The role of an ethical hacker is important since the bad guys will always be there, trying to find cracks, backdoors, and other secret ways to access data they shouldn’t. Ethical hackers not only help expose flaws in systems, but they assist in repairing them before criminals even have a shot at exploiting said vulnerabilities. They are an essential part of the cybersecurity ecosystem and can often unearth serious unknown vulnerabilities in systems better than any security solution ever could. Certified ethical hackers make an average annual income of $99,000, according to Indeed.com. The average starting salary for a certified ethical hacker is $95,000, according to EC-Council senior director Steven Graham. Ways ethical hacking benefits the software industry Nowadays, ethical hacking has become increasingly mainstream and multinational tech giants like Google, Facebook, Microsoft, Mozilla, IBM, etc employ hackers or teams of hackers in order to keep their systems secure. And as a result of the success hackers have shown at discovering critical vulnerabilities, in the last year itself there has been a 26% increase in organizations running bug bounty programs, where they bolster their security defenses with hackers. Other than this there are a number of benefits that ethical hacking has provided to organizations majorly in the software industry. Carry out adequate preventive measures to avoid systems security breach An ethical hacker takes preventive measures to avoid security breaches, for example, they use port scanning tools like Nmap or Nessus to scan one’s own systems and find open ports. 
The vulnerabilities with each of the ports is studied, and remedial measures are taken by them. An ethical hacker will examine patch installations and make sure that they cannot be exploited. They also engage in social engineering concepts like dumpster diving—rummaging through trash bins for passwords, charts, sticky notes, or anything with crucial information that can be used to generate an attack. They also attempt to evade IDS (Intrusion Detection Systems), IPS (Intrusion Prevention systems), honeypots, and firewalls. They carry out actions like bypassing and cracking wireless encryption, and hijacking web servers and web applications. Perform penetration tests on networks at regular intervals One of the best ways to prevent illegal hacking is to test the network for weak links on a regular basis. Ethical hackers help clean and update systems by discovering new vulnerabilities on an on-going basis. Going a step ahead, ethical hackers also explore the scope of damage that can occur due to the identified vulnerability. This particular process is known as pen testing, which is used to identify network vulnerabilities that an attacker can target. There are many methods of pen testing. The organization may use different methods depending on its requirements. Any of the below pen testing methods can be carried out by an ethical hacker: Targeted testing which involves the organization's people and the hacker. The organization staff will be aware of the hacking being performed. External testing penetrates all externally exposed systems such as web servers and DNS. Internal testing uncovers vulnerabilities open to internal users with access privileges. Blind testing simulates real attacks from hackers. Testers are given limited information about the target, which requires them to perform reconnaissance prior to the attack. Pen testing is the strongest case for hiring ethical hackers. Ethical hackers have built computers and programs for software industry Going back to the early days of the personal computer, many of the members in the Silicon Valley would have been considered hackers in modern terms, that they pulled things apart and put them back together in new and interesting ways. This desire to explore systems and networks to find how it worked made many of the proto-hackers more knowledgeable about the different technologies and it can be safeguarded from malicious attacks. Just as many of the early computer enthusiasts turned out to be great at designing new computers and programs, many people who identify themselves as hackers are also amazing programmers. This trend of the hacker as the innovator has continued with the open-source software movement. Much of the open-source code is produced, tested and improved by hackers – usually during collaborative computer programming events, which are affectionately referred to as "hackathons." Even if you never touch a piece of open-source software, you still benefit from the elegant solutions that hackers come up with that inspire or are outright copied by proprietary software companies. Ethical hackers help safeguard customer information by preventing data breaches The personal information of consumers is the new oil of the digital world. Everything runs on data. But while businesses that collect and process consumer data have become increasingly valuable and powerful, recent events prove that even the world’s biggest brands are vulnerable when they violate their customers’ trust. 
Hence, it is of utmost importance for software businesses to gain the trust of customers by ensuring the security of their data. With high-profile data breaches seemingly in the news every day, “protecting businesses from hackers” has traditionally dominated the data privacy conversation. Read this: StockX confirms a data breach impacting 6.8 million customers In such a scenario, ethical hackers will prepare you for the worst, they will work in conjunction with the IT-response plan to ensure data security and in patching breaches when they do happen. Otherwise, you risk a disjointed, inconsistent and delayed response to issues or crises. It is also imperative to align how your organization will communicate with stakeholders. This will reduce the need for real-time decision-making in an actual crisis, as well as help limit inappropriate responses. They may also help in running a cybersecurity crisis simulation to identify flaws and gaps in your process, and better prepare your teams for such a pressure-cooker situation when it hits. Information security plan to create security awareness at all levels No matter how large or small your company is, you need to have a plan to ensure the security of your information assets. Such a plan is called a security program which is framed by information security professionals. Primarily the IT security team devises the security program but if done in coordination with the ethical hackers, they can provide the framework for keeping the company at a desired security level. Additionally by assessing the risks the company faces, they can decide how to mitigate them, and plan for how to keep the program and security practices up to date. To summarize… Many white hat hackers, gray hat and reformed black hat hackers have made significant contributions to the advancement of technology and the internet. In truth, hackers are almost in the same situation as motorcycle enthusiasts in that the existence of a few motorcycle gangs with real criminal operations tarnishes the image of the entire subculture. You don’t need to go out and hug the next hacker you meet, but it might be worth remembering that the word hacker doesn’t equal criminal, at least not all the time. Our online ecosystem is made safer, better and more robust by ethical hackers. As Keren Elazari, an ethical hacker herself, put it: “We need hackers, and in fact, they just might be the immune system for the information age. Sometimes they make us sick, but they also find those hidden threats in our world, and they make us fix it.” 3 cybersecurity lessons for e-commerce website administrators Hackers steal bitcoins worth $41M from Binance exchange in a single go! A security issue in the net/http library of the Go language affects all versions and all components of Kubernetes

UK's NCSC report reveals significant ransomware, phishing, and supply chain threats to businesses

Fatema Patrawala
16 Sep 2019
7 min read
Last week, the UK's National Cyber Security Centre (NCSC) published a report on cyber incident trends in the UK from October 2018 to April 2019. The U.S. Department of Homeland Security Cybersecurity and Infrastructure Security Agency (CISA) has recommended the report as a way to better understand, and defend against, the most prevalent cyber security threats. The NCSC report reveals five main threats and threat vectors that affected UK organizations: cloud services (Office 365 in particular); ransomware; phishing; vulnerability scanning; and supply chain attacks. The report examines each of these, presents specific methods used by threat actors, and provides tips for preventing and mitigating incidents.

NCSC report reveals cloud services and Office 365 as primary targets

The NCSC report highlights cloud services, and Office 365 in particular, as the primary target of attackers. The large-scale move to cloud services has put the IT infrastructure of many enterprises within reach of internet-based attacks, since these services are often protected by nothing more than a username and password. Tools and scripts to try and guess users' passwords are abundant, and a successful login gives access to corporate data stored in all Office 365 services: both SharePoint and Exchange could be compromised, as well as any third-party services an enterprise has linked to Azure AD. Another common way of attacking Office 365 mentioned in the report is password spraying, in which attackers attempt a small number of commonly used passwords against multiple accounts. In most cases they aren't after one specific account; the method can target a large number of accounts in one organisation without raising suspicion. Credential stuffing is another common approach: it takes pairs of usernames and passwords from leaked data sets and tries them against other services, such as Office 365. According to the report, this activity is difficult to spot in logs because an attacker may only need a single attempt to log in successfully if the stolen details match those of the user's Office 365 account. The report goes on to suggest several remediation strategies to prevent Office 365 accounts from being compromised.
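Because a spray spreads a handful of passwords across many accounts, one defensive angle is to look for source IPs whose failed logins span unusually many distinct usernames. The hedged TypeScript sketch below illustrates the idea on a simplified log format; the AuthEvent shape, the threshold, and the findSprayingSources name are illustrative and not drawn from the NCSC report.

```typescript
// Minimal sketch: flag source IPs whose failed logins span many distinct accounts,
// a typical password-spraying signature. Log format and thresholds are illustrative.
interface AuthEvent {
  timestamp: string;   // ISO 8601
  sourceIp: string;
  username: string;
  success: boolean;
}

function findSprayingSources(events: AuthEvent[], minAccounts = 20): string[] {
  const failedAccountsByIp = new Map<string, Set<string>>();

  for (const e of events) {
    if (e.success) continue; // only failed attempts matter for this heuristic
    const set = failedAccountsByIp.get(e.sourceIp) ?? new Set<string>();
    set.add(e.username.toLowerCase());
    failedAccountsByIp.set(e.sourceIp, set);
  }

  // An IP failing against many *different* accounts looks like a spray, whereas
  // brute force against one account would show one username many times instead.
  return [...failedAccountsByIp.entries()]
    .filter(([, accounts]) => accounts.size >= minAccounts)
    .map(([ip]) => ip);
}

// Example usage with a couple of synthetic events and a low threshold:
const suspects = findSprayingSources([
  { timestamp: "2019-04-01T10:00:00Z", sourceIp: "203.0.113.7", username: "alice", success: false },
  { timestamp: "2019-04-01T10:00:05Z", sourceIp: "203.0.113.7", username: "bob", success: false },
], 2);
console.log(suspects); // ["203.0.113.7"]
```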
Ransomware attacks among enterprises continue to rise

Since the WannaCry and NotPetya attacks of 2017, ransomware attacks against enterprise networks have continued to rise in number and sophistication. The NCSC report notes that ransomware was historically delivered as a standalone attack, but today attackers use their access to the network to maximise the impact of the ransomware. Cybercrime botnets such as Emotet, Dridex and Trickbot are commonly used as the initial infection vector, prior to retrieving and installing the ransomware, and the report also highlights the use of pen-testing tools such as Cobalt Strike. Ransomware families including Ryuk, LockerGoga, Bitpaymer and Dharma have been prevalent in recent months. Cases observed in the NCSC report often resulted from a trojanised document sent via email, with the malware exploiting publicly known vulnerabilities and macros in Microsoft Office documents. Some of the remediation strategies to prevent ransomware include:
- Reducing the chances of the initial malware reaching devices.
- Considering the use of URL reputation services, including those built into web browsers and offered by internet service providers.
- Using email authentication via DMARC and DNS filtering products.
- Making it more difficult for ransomware to run once it is delivered.
- Having a tested backup of your data offline, so that it cannot be modified or deleted by ransomware.
- Effective network segregation, to make it more difficult for malware to spread across a network and thereby limit the impact of ransomware attacks.

Phishing is the most prevalent attack delivery method in NCSC report

According to the NCSC report, phishing has been the most prevalent attack delivery method over the last few years and in recent months; just about anyone with an email address can be a target. Specific methods observed recently by the NCSC include:
- Targeting Office 365 credentials - persuading users to follow links to legitimate-looking login pages that prompt for O365 credentials; more advanced versions of this attack also prompt the user for multi-factor authentication.
- Sending emails from real, but compromised, email accounts - quite often exploiting an existing email thread or relationship to add a layer of authenticity to a spear phish.
- Fake login pages - dynamically generated and personalised, pulling the real imagery and artwork from the victim's Office 365 portal.
- Using Microsoft services such as Azure or Office 365 Forms to host fake login pages - these give the address bar an added layer of authenticity.
Remediation strategies include implementing a multi-layered defence against phishing, which reduces the chances of a phishing email reaching a user and minimises the impact of those that get through. Additionally, you can configure email anti-spoofing controls such as Domain-based Message Authentication, Reporting and Conformance (DMARC), Sender Policy Framework (SPF) and DomainKeys Identified Mail (DKIM).
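Whether a domain actually publishes those anti-spoofing records is easy to check programmatically. The sketch below is a hedged illustration (the function names are ours, not the NCSC's): it uses Node's built-in DNS resolver to look up the TXT records where DMARC and SPF policies live.

```typescript
// Minimal sketch: check whether a domain publishes DMARC and SPF TXT records.
// Uses only Node's built-in dns module; function names are illustrative.
import { resolveTxt } from "node:dns/promises";

async function lookupTxt(name: string): Promise<string[]> {
  try {
    // resolveTxt returns string[][]; join the chunks of each record.
    return (await resolveTxt(name)).map((chunks) => chunks.join(""));
  } catch {
    return []; // NXDOMAIN or no TXT records
  }
}

async function checkAntiSpoofing(domain: string): Promise<void> {
  const dmarc = (await lookupTxt(`_dmarc.${domain}`)).filter((r) => r.startsWith("v=DMARC1"));
  const spf = (await lookupTxt(domain)).filter((r) => r.startsWith("v=spf1"));

  console.log(`${domain}:`);
  console.log(`  DMARC: ${dmarc[0] ?? "none published"}`);
  console.log(`  SPF:   ${spf[0] ?? "none published"}`);
}

checkAntiSpoofing("example.com");
```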
Vulnerability scanning is a common reconnaissance method

The NCSC report mentions that vulnerability scanning is a common reconnaissance method used to search for open network ports, identify unpatched, legacy or otherwise vulnerable software, and spot misconfigurations that could affect security. It further details that attackers identify known weaknesses in internet-facing services, which they then target using tested techniques or 'exploits'. This approach means the attack is more likely to work the first time, making its detection less likely when using traditional intrusion prevention systems (IPS) and on-host security monitoring. Once an attacker has a foothold on the edge of your infrastructure, they will attempt to run further network scans and re-use stolen credentials to pivot through to the core network. For remediation, the NCSC suggests ensuring that all internet-facing servers an attacker might be able to find are hardened and that the software running on them is fully patched. It also recommends penetration testing to determine what an attacker scanning for vulnerabilities could find, and potentially attack.

Supply chain attacks & threat from external service providers

Threats introduced to enterprise networks via their service providers continue to be a major problem, according to the report. Outsourcing – particularly of IT – results in external parties and their own networks being able to access and even reconfigure enterprise services. Hence, the network inherits the risk from these connected networks. The NCSC report also gives several examples of attackers exploiting the connections of service providers to gain access to enterprise networks: for instance, the exploitation of Remote Management and Monitoring (RMM) tooling to deploy ransomware, as reported by ZDNet, and the public disclosure of a "sophisticated intrusion" at a major outsourced IT vendor, as reported by Krebs on Security. A few remediation strategies to prevent supply chain attacks are:
- Consider supply chain security when procuring both products and services.
- Those using outsourced IT providers should ensure that any remote administration interfaces used by those service providers are secured.
- Ensure that the way an IT service provider connects to, or administers, the system meets the organisation's security standards.
- Take appropriate steps to segment and segregate the networks. Segmentation and segregation can be achieved physically or logically using access control lists, network and computer virtualisation, firewalls, and network encryption such as Internet Protocol Security.
- Document the remote interfaces and internal accesses in use by your service provider to ensure that they are fully revoked at the end of the contract.
To read the full report, visit the official NCSC website.

What's new in security this week?
A new Stuxnet-level vulnerability named Simjacker used to secretly spy over mobile phones in multiple countries for over 2 years: Adaptive Mobile Security reports
Lilocked ransomware (Lilu) affects thousands of Linux-based servers
Intel's DDIO and RDMA enabled microprocessors vulnerable to new NetCAT attack


Introducing Woz, a Progressive WebAssembly Application (PWA + Web Assembly) generator written entirely in Rust

Sugandha Lahoti
04 Sep 2019
5 min read
Progressive Web Apps are already being deployed at a massive scale, as evidenced by their presence on most websites today. But what's next for PWAs? Alex Kehayis, a developer at Stripe, thinks it's the merging of WebAssembly with PWAs. According to him, the adoption of WebAssembly and the ease of distribution on the web create compelling new opportunities for application development. He has created what he calls Progressive WebAssembly Applications (PWAAs), built entirely using Rust. In his talk at the WebAssembly San Francisco Meetup, Alex walks through the creation of Woz, a PWA toolchain for Rust. Woz is a progressive WebAssembly app generator (PWAA) for Rust, and it makes distributing your app as simple as sharing a hyperlink. Read Also: Fastly CTO Tyler McMullen on Lucet and the future of WebAssembly and Rust [Interview]

Web content has become efficient

Alex begins his talk by pointing out how web content has become massively efficient; this is because it solves three problems:
- Distribution: actually serving content to your users
- Unification: write once and run it everywhere
- Experience: consume content in a low-friction environment

Mobile applications vs Web applications

Applications are an elevated form of content: they tend to be more experiential, dynamic, and interactive. Alex points to the Wikipedia definition of 'application', which states that applications are software designed to perform a group of coordinated functions, tasks, and activities for the benefit of users. Despite all this progress, mobile apps are still hugely inefficient to create, distribute, and use. Their distribution is generally in the hands of the Apple and Google duopoly; unification is generally handled through third-party frameworks such as React Native or Xamarin; and the user experience, although performant, involves high friction, as a user has to switch between apps and wait for them to install and load. Web-based applications, on the other hand, are quite efficient to create, distribute, and use. Anybody with an internet connection and a browser can use a web application. For web applications, unification happens through standards rather than frameworks, which is more efficient, and the user experience is dynamic and fast: you jump right in and don't necessarily have to install anything.

Should everybody just use web apps instead of mobile apps?

Although mobile applications are somewhat inefficient, they bring certain features:
- Native applications have better performance than web-based apps
- Encapsulation (e.g. home screen, self-contained experience)
- Mobile apps are offline by default
- Mobile apps can use hardware and sensors
- Native apps typically consume less battery than web apps
In order to get the best of both worlds, Alex suggests the following steps:
- Bring web applications to mobile - this has already been implemented in the form of Progressive Web Applications.
- Improve the state of performance and access - Alex says that WebAssembly is a viable choice for achieving this, since WebAssembly is highly performant when it's paired with a language like Rust.

Progressive WebAssembly Applications

Woz, a Progressive WebAssembly Application generator

Alex proceeds to talk about Woz, a progressive WebAssembly application generator. It combines all the good things of a PWA and WebAssembly and works as a toolchain for building and deploying performant mobile apps with Rust. You can distribute your app as simply as sharing a hyperlink. Woz brings distribution via browsers, unification via web standards, and experience via hyperlinks. It uses wasm-bindgen to generate the interop calls between WebAssembly and JavaScript, which allows you to write the entire application in Rust, including rendering to the DOM.
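To give a feel for the JavaScript side of that interop, here is a hedged TypeScript sketch of the glue a browser needs to run a wasm-powered app: it fetches a compiled module, instantiates it, and calls an exported render function. The file name app.wasm and the render export are assumptions made for illustration; in a real Woz project, wasm-bindgen generates this kind of binding code for you.

```typescript
// Minimal sketch of browser-side glue for a wasm-powered PWA.
// "app.wasm" and the exported "render" function are illustrative assumptions;
// wasm-bindgen normally generates this binding code automatically.
async function bootWasmApp(): Promise<void> {
  // Compile and instantiate while the bytes are still streaming in.
  const { instance } = await WebAssembly.instantiateStreaming(
    fetch("/app.wasm"),
    {} // imports object; wasm-bindgen output would populate this
  );

  // Call the module's entry point, assumed here to be an exported `render`.
  const render = instance.exports.render as () => void;
  render();
}

bootWasmApp().catch((err) => console.error("Failed to start wasm app:", err));
```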
Woz will soon come with 'managed charging' for your apps and will even provide multiple copies that your users can share, all with a hyperlink. And unlike everything you need for a PWA (an SSL certificate, a PWA manifest, a splash screen, home screen icons, a service worker), a PWAA only requires JS bindings to WebAssembly and code to fetch, compile, and run the wasm. His talk also covered some popular Rust-based frontend frameworks:
- Yew: "Yew is a modern Rust framework inspired by Elm and React for creating multi-threaded frontend apps with WebAssembly."
- Sauron: "Sauron is an html web framework for building web-apps. It is heavily inspired by elm."
- Percy: "A modular toolkit for building isomorphic web apps with Rust + WebAssembly"
- Seed: "A Rust framework for creating web apps"
Read Also: "Rust is the future of systems programming, C is the new Assembly": Intel principal engineer Josh Triplett
With Woz, the goal, Alex says, was to stay in Rust and create a PWA that can be installed to your home screen. The sample app he created weighs only about 300 KB. Alex says, "In order to actually write the app, you really only need one entry point - it's a public method render that's decorated wasm_bindgen. The rest will kind of figure itself out. You don't necessarily need to go create your own JavaScript file." He then proceeded to show a quick demo of what it looks like.

What's next?

WebAssembly will continue to evolve, and more languages and ecosystems will be able to target it. Progressive web apps will also continue to evolve. PWAAs are an interesting proposition: we should really be liberating mobile apps and bringing them to the web, and WebAssembly looks like a missing link for some of these things. Watch Alex Kehayis's full talk on YouTube. Slides are available here. https://www.youtube.com/watch?v=0ySua0-c4jg

Other news in Tech
Wasmer's first Postgres extension to run WebAssembly is here!
Mozilla proposes WebAssembly Interface Types to enable language interoperability
Wasmer introduces WebAssembly Interfaces for validating the imports and exports of a Wasm module


React.js: why you should learn the front end JavaScript library and how to get started

Guest Contributor
25 Aug 2019
9 min read
React.JS is one of the most powerful JavaScript libraries. It powers the interfaces of major organisations such as Amazon (an e-commerce giant that has recently introduced a programming language of its own), PayPal, BBC, CNN, and over a million other websites worldwide. Created by Facebook, React.JS has quickly built a formidable technical reputation and a loyal fan following. React.js is extensively mentioned in job openings - companies currently want to hire dedicated React.js developers more than Vue.js engineers. In this post, you'll find out why React.JS is the right framework to start your remote work with, despite the library's steep learning curve, and learn ways to use it more efficiently.

5 Reasons to learn React.JS

Developers might be hesitant to learn React as it's not a full-fledged framework and a developer needs to handle models and controllers on their own. Nevertheless, there are more than a handful of reasons to become a React.js developer. Let's take a closer look at them:

1. It's functional

There's no need to use classes in React. The platform relies heavily on functional components, allowing developers not to overcomplicate the codebase. While classes offer developers a handful of convenient features (lifecycle hooks and such), the benefits provided by the functional syntax are loud and clear:
- Higher readability. Properties like state functions or lifecycle hooks tend to make reading and testing the code a pain in the neck, while plain JS functions are easier to wrap your head around. A developer can achieve the same functionality with less code.
- The software engineering team will be more likely to adhere to best practices. Stateless functional components encourage front-end engineers to separate presentational and container components. It takes more time to adjust to a more complex workflow, but in the long run it pays off in a better code structure.
- ES6 destructuring helps spot bloated components. A developer can see the list of dependencies bound to every component, so you will be able to break up overly complex structures or rethink them altogether.
React.JS is the tool that recognizes the power of functional components to the fullest extent (even the glorified Angular 2 can't compare). As a result, developers can strive for maximum code eloquence and improved performance.

2. It's declarative

Most likely, you are no stranger to CSS and SQL, and as such are familiar with declarative programming. Still, to recap, here are the differences between the declarative and imperative approaches: imperative programming uses statements to manipulate the state of the program, whereas declarative programming is a paradigm that changes the system based on the communication logic. While imperative programming gives developers the possibility to design the control flow step by step in statements, and may come across as easier, it is declarative programming that has more perks in the long run:
- Higher readability. Low-level details will not clutter the code, as the paradigm is not concerned with them.
- More freedom for reasoning. Instead of outlining the procedure step by step, a successful React JS developer focuses on describing the solution and its logic.
- Reusability. You can apply a declarative description to various scenarios - something that is far more challenging with a step-by-step construct.
- Efficiency in solving specific domain problems. The high performance of declarative programming stems from the fact that it adapts to the domain.
For databases, for instance, a developer will create a set of operations to handle data, and so on. Capitalizing on the benefits of declarative programming is React's strong point: you will be able to create transparent, reusable, and highly readable interfaces.

3. Virtual DOM

Developers who manage high-load projects often face DOM-related challenges. Bottlenecks tend to appear even after a small change in the document object model, because its tree structure means there is high interconnectivity between DOM components. To facilitate maintenance, Facebook has implemented the virtual DOM in React.JS. It allows developers to verify the project's error-free behaviour before updating the actual DOM tree. The virtual DOM provides extra assurance about the app's performance and, in the long run, significantly improves user satisfaction rates.

4. Downward data binding

As opposed to Angular's two-way data binding, React.JS uses a downward structure to ensure that changes in child structures do not affect their parents. A developer can only transfer data from a parent to a child, not vice versa. The key components of downward data binding include:
- Passing the state to the child components as well as the view;
- The view triggers actions;
- Actions can update the state;
- State updates are passed on to the view and the child components.
Compared to two-way data binding, the one implemented by React.JS is less error-prone (a developer controls data to a larger extent) and more comfortable to test and debug, thanks to a clearly defined structure.

5. React Developer Tools

React.JS developers get to benefit from a wide toolkit that covers all facets of application performance. There's a wide array of debugging and design solutions, including the life-saving React Developer Tools extension for Chrome and Firefox. Using this and other tools, you can define child and parent components, examine their state, observe hierarchies, and inspect props.

Advantages of React.js

React.JS helps developers systemize the interfaces of their projects by introducing the 'components' structure. The library allows the creation of modular views that consist of reusable blocks - pop-ups, tables, etc. One of the most significant advantages of using React.js is the way it improves user experience. A textbook example of the library in use on Facebook is the ability to see the changing number of likes in real time without reloading the page. React.JS was originally released back in 2011 by a Facebook engineer as a way to scale and maintain the complex interface of the Facebook Ads app. The library's high functionality resulted in its adoption by other SMEs and large corporations - now React JS is one of the most widely used development tools.

How to Use React.JS?

Depending on your HTML and JavaScript proficiency, it may take anywhere from a few days to a few months to get the hang of React. For a basic understanding of the library, take a look at React.JS's features as well as the setup process.

Getting started with React.JS

To start working with React, a developer has to import the React and ReactDOM libraries using a basic HTML file. Once you have set up a working space, take your time to examine the defining features of React.JS.

Components

All React.JS elements are components. Depending on the syntax, they are grouped into class and functional ones. As both lead to equal outcomes in most cases, a React.JS beginner should start by learning functional components, as in the short sketch below.
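Here is a small hedged sketch of such a functional component, written in TypeScript (TSX). The Greeting component, its name prop, and the click counter are invented for this example and are not taken from the article.

```tsx
// Minimal sketch of a functional React component written in TypeScript (TSX).
// The component name, prop, and state are illustrative only.
import React, { useState } from "react";

interface GreetingProps {
  name: string; // passed down from the parent - read-only inside the child
}

function Greeting({ name }: GreetingProps) {
  // Local state: a counter the component is allowed to change itself.
  const [clicks, setClicks] = useState(0);

  return (
    <div>
      <p>Hello, {name}! You clicked {clicks} times.</p>
      <button onClick={() => setClicks(clicks + 1)}>Click me</button>
    </div>
  );
}

// A parent passes data downward through props:
export function App() {
  return <Greeting name="Anastasia" />;
}
```

The prop flows down from the parent and stays read-only, while the counter is local state the component manages itself; the next two sections cover exactly these two concepts.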
Props

Props are the way for React.JS developers to pass data from parent to child structures. Keep in mind that, unlike state, props are immutable. They provide developers with high code reusability, as the same component can be reused anywhere and simply receive the data it should display from its parent. At times, developers do want components to change themselves - that's when state comes in handy.

States

State is used when a developer wants the application data to change. The most common operations that have to do with state include:

- Initialization;
- Modification;
- Adding event handlers.

These are the basic concepts a React.JS developer has to be familiar with to get the most out of the library.

React.JS best practices

If you're already using React.JS, be sure to make the most out of it. Keep track of new trends and best practices in all facets of app management - accessibility, performance, security, and others. Here's a short collection of React.JS development tips that will improve maintenance and development efficiency.

Performance

- Consider using React.Fragment to avoid extra DOM nodes.
- To load components on demand, use React.lazy along with React.Suspense.
- Another popular practice among JS developers is taking advantage of shouldComponentUpdate to avoid unnecessary rendering.
- Try to keep the JS code as clean as possible; for instance, clean up after unmounted components in componentWillUnmount().
- For component caching, use React.memo.

Accessibility

- Pay attention to the casing and reserved-word differences between HTML and React.js (for example, class becomes className and for becomes htmlFor) to avoid bottlenecks.
- To set up page titles, use a library such as react-helmet.
- Don't forget to put alt attributes on any non-text content.
- Use refs to move focus to a given component programmatically.
- External tools, such as ESLint accessibility plugins, help developers monitor accessibility.

Debugging

- Use Chrome DevTools - there are dozens of helpful features, from Redux logging middleware to error-message handling.
- Leave the console open while coding to detect errors faster.
- To have a better understanding of the code you're dealing with, adopt a table view for objects.
- Other quick debugging hacks include marking DOM items so you can find them quickly in the Chrome Inspector and viewing full stack traces for functions.

The bottom line

Thanks to a powerful team of engineers at work, React.JS has quickly become a powerhouse for front-end development. Its reliance on plain JavaScript makes the library easier to get to know. React.JS has extensive pros and cons, but the possibility to express UIs declaratively, along with the promotion of functional components, makes it a favorite for many. The wide variety of projects it powers and the large number of job openings prove that knowing React is no longer optional for developers. The good news is that there's no lack of learning tools and resources online. Take your time to explore the library - you'll be amazed by the order and efficiency React brings to applications.

Author Bio

Anastasia Stefanuk is a passionate writer and a marketing manager at Mobilunity. The company provides professional staffing services, so she is always aware of technology news and wants to share her experience to help tech startups and companies stay up-to-date.

Getting started with React Hooks by building a counter with useState and useEffect
React 16.9 releases with an asynchronous testing utility, programmatic Profiler, and more
5 Reasons to Learn ReactJS

What are the challenges of adopting AI-powered tools in Sales? How Salesforce can help

Guest Contributor
24 Aug 2019
8 min read
Artificial intelligence is a hot topic for many industries. When it comes to sales, the situation gets complicated. According to the latest Salesforce State of Sales report, just 21% of organizations use AI in sales today, while its adoption in sales is expected to grow 155% by 2020. Let's explore what keeps sales teams from implementing AI and how to overcome these challenges to unlock new opportunities.

Why do so few teams adopt AI in sales?

There are a few reasons behind such a low rate of AI adoption in sales. First, some teams don't feel they are prepared to integrate AI into their existing strategies. Second, AI technologies are often applied in a hectic way: many businesses have high expectations of AI and concentrate mostly on its benefits rather than contemplating possible difficulties upfront. Such an approach rarely results in positive business transformation. Here are some common challenges that businesses need to overcome to turn their sales AI projects into success stories.

Businesses don't know how to apply AI in their workflow

Problem: Different industries call for different uses of AI. Still, companies tend to buy AI platforms and use them for the same few popular tasks, like predictions based on historical data or automatic data logging. In reality, the business type and direction should dictate what AI solution will best fit the needs of an organization. For example, in e-commerce, AI can serve dynamic product recommendations on the basis of the customer's previous purchases or views. Teams relying on email marketing can use AI to serve personalized email content as well as optimize send times.

Solution: Let the sales team participate in AI onboarding. Prior to setup, gain insight into your sales reps' daily routine, needs, and pains. Then, get their feedback continuously during the actual AI implementation. Such a strategy will ensure the sales team benefits from a tailored, rather than a generic, AI system.

AI requires data businesses don't have

Problem: AI is most efficient when fed with huge amounts of data. It's true that a company with a few hundred leads per week will train AI for better predictions than a company with the same number of leads per month. Frequently, companies assume they don't have that much data, or that they cannot present it in a format suitable for training an AI algorithm.

Solution: In reality, AI can be trained with incomplete and imperfect data. Instead of trying to integrate the whole set of data prior to implementing AI, it's possible to start with data subsets, like historical purchase data or promotional campaign analytics. Plus, AI can improve the quality of data by predicting missing elements or identifying possible errors.

Businesses lack the skills to manage AI platforms

Problem: AI is a sophisticated technology that requires special skills to implement and use. Thus, sales teams need to be augmented with specialized knowledge in data management, software optimization, and integration. Otherwise, AI tools can be used incorrectly and thus provide little value.

Solution: There are two ways of solving this problem. First, it's possible to create a new team of big data, machine learning, and analytics experts to run the AI implementation and coordinate it with the sales team. This option is rather time-consuming. Second, it's possible to buy an AI-driven platform, like Salesforce, for example, that includes both out-of-the-box features and plenty of customization opportunities.
Instead of hiring new specialists to manage the platform, you can reach out to Salesforce consultants who will help you select the best-fit plan, then configure and implement it. If your requirements go beyond the features available by default, it's possible to add custom functionality.

How AI can change the sales of tomorrow

When you have a clear vision of the AI implementation challenges and understand how to overcome them, it's time to make use of AI-provided benefits. A core benefit of any AI system is its ability to analyze large amounts of data across multiple platforms and then connect the dots, i.e. draw actionable conclusions. To illustrate these AI opportunities, let's take Salesforce, one of the most popular solutions in this domain today, and see how its AI technology, Einstein, can enhance a sales workflow.

Time savings and a productivity boost

Administrative work eats up time that sales reps could spend selling. That's why many administrative tasks should be automated. Salesforce Einstein can save time usually wasted on manual data entry by:

- Automating contact creation and updates;
- Logging activities;
- Generating lead status reports;
- Syncing emails and calendars;
- Scheduling meetings.

Efficient lead management

When it comes to leads, sales reps tend to base their lead management strategies on gut feeling. In spite of its importance, intuition cannot be the only means of assessing leads; the approach should be more holistic. AI has unmatched abilities to analyze large amounts of information from different sources to help score and prioritize leads. In combination with sales reps' intuition, such data can bring lead management to a new level. For example, Einstein AI can help with:

- Scoring leads based on historical data and the performance metrics of the best customers;
- Classifying opportunities in terms of their readiness to convert;
- Tracking reengaged opportunities and nurturing them.

Predictive forecasting

AI is well known for its predictive capabilities, which help sales teams make smarter decisions without running endless what-if scenarios. AI forecasting builds sales models using historical data. Such models anticipate possible outcomes of scenarios common in sales reps' work. Salesforce Einstein, for example, can predict:

- The prospects most likely to convert;
- The deals most likely to close;
- The prospects or deals to target;
- New leads;
- Opportunities to upsell or cross-sell.

The same algorithm can be used for forecasting sales team performance during a specified period of time and taking proactive steps based on those predictions. What's more, sales intelligence is shifting from predictive to prescriptive, where prescriptive AI does not merely recommend but prescribes the exact actions sales reps should take to achieve a particular outcome.

Watching out for the pitfalls of AI in sales

While AI promises to fulfil sales reps' advanced requests, there are still some fears and doubts around it. First of all, as a rising technology, AI still carries ethical issues related to its safe and legitimate use in the workplace, such as the integrity of autonomous AI-driven decisions and the legitimate origin of the data fed to algorithms. While a full-fledged legal framework is yet to be worked out, governments have already stepped in. For example, the High-Level Expert Group on AI of the European Commission came up with the Ethics Guidelines for Trustworthy Artificial Intelligence, covering every aspect from human oversight and technical robustness to data privacy and non-discrimination.
In particular, non-discrimination relates to potential bias, such as algorithmic bias that stems from human bias when sourcing data, or from treating correlation as causation. Thus, AI-driven analysis should be incorporated into decision-making cautiously, as just one of many sources of insight. AI won't replace a human mind: the data still needs to be processed critically.

When it comes to sales, another common concern is that AI will take sales reps' jobs. Yes, some tasks that are deemed monotonous and time-consuming are indeed taken over by AI automation. However, this is actually a blessing, as AI does not replace jobs but augments them. This way, sales reps have more time on their hands to complete more creative and critical tasks. It's true, however, that employers will need people who know how to work with AI technologies, which means either ongoing training or new hires, and that can be rather costly. The stakes are high, though: to keep up with the fast-changing world, one has to find a way around current limitations and challenges.

In a nutshell

AI is key to boosting sales team performance. However, successful AI integration into sales and marketing strategies requires teams to overcome the challenges posed by sophisticated AI technologies. Popular AI-driven platforms like Salesforce help sales reps get hold of AI's potential and enjoy vast opportunities for saving time and increasing productivity.

Author Bio

Valerie Nechay is a MarTech and CX Observer at Iflexion, a Denver-based custom software development provider. Using her writing powers, she translates complex technologies into fascinating topics and shares them with the world. Her current focus is on Salesforce implementation how-tos, challenges, insights, and shortcuts, as well as broader applications of enterprise tech for business development.

IBM halt sales of Watson AI tool for drug discovery amid tepid growth: STAT report
Salesforce Einstein team open sources TransmogrifAI, their automated machine learning library
How to create sales analysis app in Qlik Sense using DAR method [Tutorial]

Hot Chips 31: IBM Power10, AMD’s AI ambitions, Intel NNP-T, Cerebras largest chip with 1.2 trillion transistors and more

Fatema Patrawala
23 Aug 2019
7 min read
Hot Chips 31, the premier event at which the biggest semiconductor vendors highlight their latest architectural developments, is held in August every year. This year's edition took place at the Memorial Auditorium on the Stanford University campus in California, from August 18-20, 2019. Since its inception, the conference has been co-sponsored by IEEE and ACM SIGARCH.

Hot Chips is amazing for the level of depth it provides on the latest technology and the upcoming releases in the IoT, firmware, and hardware space. This year the list of presentations was almost overwhelming, with a wide range of technical disclosures on the latest chip logic innovations. Almost all the major chip vendors and IP licensees involved in semiconductor logic design took part: Intel, AMD, NVIDIA, Arm, Xilinx, and IBM were on the list, and companies like Google, Microsoft, Facebook, and Amazon also participated. There were notable absences, such as Apple, which, despite being on the committee, last presented at the conference in 1994.

Day 1 kicked off with tutorials and sponsor demos. On the cloud side, Amazon AWS covered the evolution of hypervisors and the AWS infrastructure. Microsoft described its acceleration strategy with FPGAs and ASICs, with details on Project Brainwave and Project Zipline. Google covered the architecture of Google Cloud with the TPU v3 chip. A three-part RISC-V tutorial rounded off the afternoon, so the day was spent well, with insights into the latest cloud infrastructure and processor architectures.

The detailed talks were presented on Day 2 and Day 3; below are some of the important highlights of the event.

IBM's POWER10 processor expected by 2021

IBM creates families of processors to address different segments, with different models for tasks like scale-up, scale-out, and now NVLink deployments. The company is adding new custom models that use new acceleration and memory devices, and that was the focus of this year's talk at Hot Chips. IBM also announced POWER10, which is expected to arrive with these new enhancements in 2021, and shared details on POWER10's core counts and process technology. IBM also spoke about focusing on diverse memory and accelerator solutions to differentiate its product stack with heterogeneous systems. IBM aims to reduce the number of PHYs on its chips, so it now has PCIe Gen 4 PHYs while the rest of the SERDES run the company's own interfaces. This creates a flexible interface that can support many types of accelerators and protocols, like GPUs, ASICs, CAPI, NVLink, and OpenCAPI.

AMD wants to become a significant player in artificial intelligence

AMD does not have an artificial intelligence-focused chip. However, AMD CEO Lisa Su stated in a keynote address at Hot Chips 31 that the company is working toward becoming a more significant player in artificial intelligence. Su said that the company has adopted a CPU/GPU/interconnect strategy to tap the artificial intelligence and HPC opportunity, and that AMD would use all of this technology in the Frontier supercomputer. The company plans to fully optimize its EPYC CPU and Radeon Instinct GPU for supercomputing. It will further enhance the system's performance with its Infinity Fabric and unlock performance with its ROCm (Radeon Open Compute) software tools. Unlike Intel and NVIDIA, AMD does not have a dedicated artificial intelligence chip or application-specific accelerators.
Despite this, Su noted, "We'll absolutely see AMD be a large player in AI." AMD is considering whether to build a dedicated AI chip, a decision that will depend on how artificial intelligence evolves. Su explained that companies have been improving CPU (central processing unit) performance by leveraging various elements: process technology, die size, TDP (thermal design power), power management, microarchitecture, and compilers. Process technology is the biggest contributor, as it boosts performance by 40%. Increasing die size also boosts performance in the double digits, but it is not cost-effective. AMD used microarchitecture improvements to boost the EPYC Rome server CPU's IPC (instructions per cycle) by 15% in single-threaded and 23% in multi-threaded workloads, well above the industry's average IPC improvement of around 5%-8%.

Intel's Nervana NNP-T and Lakefield 3D Foveros hybrid processors

Intel revealed fine-grained details about its much-anticipated Spring Crest deep learning accelerators at Hot Chips 31. The Nervana Neural Network Processor for Training (NNP-T) comes with 24 processing cores and a new take on data movement that's powered by 32GB of HBM2 memory. Its 27 billion transistors are spread across a 688 mm2 die. The NNP-T also incorporates leading-edge technology from Intel rival TSMC.

In another presentation, Intel talked about the Lakefield 3D Foveros hybrid processors, the first to come to market with Intel's new 3D chip-stacking technology. The current design consists of two dies. The lower die houses all of the typical southbridge features, like I/O connections, and is fabbed on the 22FFL process. The upper die is a 10nm CPU that features one large compute core and four smaller Atom-based 'efficiency' cores, similar to an ARM big.LITTLE processor. Intel calls this a "hybrid x86 architecture," and it could denote a fundamental shift in the company's strategy. Finally, the company stacks DRAM atop the 3D processor in a PoP (package-on-package) implementation.

Cerebras' largest chip ever with 1.2 trillion transistors

California artificial intelligence startup Cerebras Systems introduced its Cerebras Wafer Scale Engine (WSE), the largest chip ever built for neural network processing. Sean Lie, Co-Founder and Chief Hardware Architect at Cerebras, presented the gigantic chip at Hot Chips 31. The 16nm WSE is a 46,225 mm2 silicon chip, slightly larger than a 9.7-inch iPad. It features 1.2 trillion transistors, 400,000 AI-optimized cores, 18 gigabytes of on-chip memory, 9 petabytes/s of memory bandwidth, and 100 petabytes/s of fabric bandwidth. It is 56.7 times larger than the largest NVIDIA graphics processing unit, which accommodates 21.1 billion transistors on an 815 mm2 silicon base.

NVIDIA's multi-chip solution for a deep neural network accelerator

NVIDIA, which announced that it was designing a test multi-chip solution for DNN computations at a VLSI conference last year, explained the chip technology at Hot Chips 31 this year. It is currently a test chip for multi-chip DL inference. It is designed for CNNs and has a RISC-V chip controller. Each chip has 12 PEs with 8 vector MACs per PE, and each package holds a 6x6 grid of 36 small chips.

A few other notable talks at Hot Chips 31

Microsoft unveiled its new HoloLens 2.0 silicon, which includes a holographic processing unit (HPU) and custom silicon.
The application processor runs the app, while the HPU modifies the rendered image and sends it to the display.

Facebook presented details on Zion, its next-generation in-memory unified training platform. Zion, which is designed for Facebook's sparse workloads, uses a unified BFLOAT16 format across the CPU and accelerators.

Huawei spoke about its Da Vinci architecture: a single Ascend 310 can deliver 16 TeraOPS of 8-bit integer performance, support real-time analytics across 16 channels of HD video, and consume less than 8W of power.

Xilinx Versal AI Engine

Xilinx, the manufacturer of FPGAs, announced its new Versal AI Engine last year as a way of moving FPGAs into the AI domain. This year at Hot Chips the company expanded on the technology and more.

Ayar Labs, an optical chip-making startup, showcased results of its work with DARPA (the U.S. Department of Defense's Defense Advanced Research Projects Agency) and Intel on an FPGA chiplet integration platform.

The final talk on Day 3 was a presentation by Habana, which discussed an innovative approach to scaling AI training systems with its Gaudi AI processor.

AMD competes with Intel by launching EPYC Rome, world's first 7 nm chip for data centers, luring in Twitter and Google
Apple advanced talks with Intel to buy its smartphone modem chip business for $1 billion, reports WSJ
Alibaba's chipmaker launches open source RISC-V based 'XuanTie 910 processor' for 5G, AI, IoT and self-driving applications