
Tech Guides


Declarative UI programming faceoff: Apple’s SwiftUI vs Google’s Flutter

Guest Contributor
14 Jun 2019
5 min read
Apple announced a new declarative UI framework, SwiftUI, at its annual developer conference, WWDC 2019. SwiftUI will power apps across all of Apple's devices (MacBooks, Apple Watch, Apple TV, iPads, and iPhones). You can integrate SwiftUI views with objects from the UIKit, AppKit, and WatchKit frameworks to take further advantage of platform-specific functionality. It is intended to make developers more productive and reduce the amount of code they have to write.

The SwiftUI documentation states: "Declare the content and layout for any state of your view. SwiftUI knows when that state changes, and updates your view's rendering to match." In other words, developers simply describe what the UI should look like for each state in response to events and leave the in-between transitions to the framework; the framework updates the UI automatically as the state changes.

Benefits of a declarative UI language

Rather than describing the control flow, a declarative UI language expresses the logic of computation: you describe which elements you need and how they should look, without worrying about their exact position or visual styling. Some of the benefits of a declarative UI language are:

- Increased speed of development.
- Seamless collaboration between designers and coders.
- Enforced separation between logic and presentation.
- UI changes that don't require recompilation.

SwiftUI's declarative syntax is quite similar to Google's Flutter, which is also built around declarative UI programming. Flutter ships with polished widgets, fonts, and an expressive styling system; its use grew significantly in 2019, and it is among the fastest-growing skills in the developer community. Like Flutter, SwiftUI provides layout structures, controls, and views for an application's user interface.

This is Apple's first step into declarative UI programming, and it describes SwiftUI as a modern way to declare user interfaces. In the imperative approach, developers had to manually construct a fully functional UI entity and later mutate it using methods and setters. With SwiftUI, the application layout only needs to be described once, which greatly reduces code complexity.

Beyond the declarative syntax, SwiftUI is tightly integrated with Xcode, Apple's integrated development environment. When code is modified inside Xcode, developers can now preview the result in real time and tweak parameters. SwiftUI also supports dark mode, Xcode's drag-and-drop design tools, and right-to-left languages such as Hebrew and Arabic. One drawback is that SwiftUI only supports apps that target iOS 13 and later; it is a limited tool in this sense, and adoption could take a year or two wherever older iOS versions still have to be supported.

SwiftUI vs Flutter development

Apple's answer to Google is simple here. Flutter is compatible with both Android and iOS, whereas SwiftUI is a new member of Apple's ecosystem. Developers use Flutter to build cross-platform apps from a single codebase, and Flutter has pushed other ecosystems to adopt its simpler way of building UIs. With the introduction of SwiftUI, which works on the same principles as Flutter, Apple has announced itself to the world of declarative UI programming. What does it mean for developers who build exclusively for iOS? They can now build native apps for clients who do not prefer the Flutter way.
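To make the declarative-versus-imperative contrast above concrete, here is a minimal, framework-agnostic sketch in Python. It is an illustration of the idea only, not SwiftUI or Flutter code; the class and function names are invented for the example.

```python
# Imperative style: build the widget once, then mutate it on every event.
class ImperativeCounterView:
    def __init__(self):
        self.label_text = "Count: 0"   # widget state lives inside the view object

    def on_increment(self, count):
        # The developer must remember to update every affected property by hand.
        self.label_text = f"Count: {count}"


# Declarative style: describe the UI as a pure function of application state;
# the framework re-renders whenever the state changes.
def declarative_counter_view(state):
    return {"label": f"Count: {state['count']}"}


if __name__ == "__main__":
    state = {"count": 0}
    print(declarative_counter_view(state))   # {'label': 'Count: 0'}
    state["count"] += 1
    print(declarative_counter_view(state))   # {'label': 'Count: 1'}
```

In the declarative sketch there is nothing to keep in sync: the view is simply recomputed from the current state, which is the property SwiftUI and Flutter both build on.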
SwiftUI will probably reduce the incentive for Apple-only developers to adopt Flutter. Many have pointed out that Apple has essentially introduced a new framework for the same UI experience, and we will have to wait and see what SwiftUI has in store over the longer run. Developers in communities such as Reddit are actively sharing their thoughts on the arrival of SwiftUI, and many agree that "SwiftUI is Flutter with no Android support". Developers who target Apple platforms only through SwiftUI will still turn to Flutter when they need to reach other platforms, so Flutter could benefit from SwiftUI rather than the other way round.

The popularity of cross-platform tools such as React Native is no surprise. Native mobile app development for iOS and Android is expensive, and companies usually have to maintain two separate teams; cross-platform solutions drastically reduce development costs. One can think of Flutter as React Native with fuller support for native features: you don't have to depend on the native platforms for solutions, and Flutter delivers performance close to native. Like React Native, Flutter uses reactive-style views; however, while React Native drives native widgets through a JavaScript bridge, Flutter compiles all the way down to native code and renders its own widgets.

Conclusion

SwiftUI is about making development interactive, faster, and easier. Its built-in graphical design tool allows designers to assemble a user interface without writing any code, and once the code is modified, the change instantly appears in the visual design tool. Code can be assembled, refined, and tested in real time, with previews that run on a range of Apple devices. However, SwiftUI is still under development and will take time to mature. Flutter, on the other hand, continues to deliver scalable solutions for startups and enterprises. Building native apps is not cheap, and Flutter provides a cost-effective alternative with a native look and feel. It remains a competitive cross-platform framework with or without SwiftUI's presence.

Author Bio

Keval Padia is the CEO of Nimblechapps, a prominent mobile app development company based in India. He has a good knowledge of mobile app design and user experience design. He follows different tech blogs, and current developments in the field lead him to share his views and thoughts on certain topics.


GROVER: A GAN that fights neural fake news, as long as it creates said news

Vincy Davis
11 Jun 2019
7 min read
Last month, a team of researchers from the University of Washington and the Allen Institute for Artificial Intelligence published a paper titled 'Defending Against Neural Fake News'. The goal of the paper is to reliably detect "neural fake news" so that its harm can be minimized. To this end, the researchers built a model named GROVER, which works as a generator of fake news that can also spot its own generated fake news articles, as well as those generated by other AI models.

GROVER (Generating aRticles by Only Viewing mEtadata Records) can generate an efficient yet controllable news article, including not only the body but also the title, news source, publication date, and author list. The researchers affirm that the "best models for generating neural disinformation are also the best models at detecting it". The GROVER framework represents fake news generation and detection as an adversarial game between two systems:

Adversary: this system generates fake stories that match specified attributes, generally being viral or persuasive. The stories must read as realistic to both human users and the verifier.

Verifier: this system classifies news stories as real or fake. A verifier has access to unlimited real news stories but only a few fake news stories from a specific adversary.

The dual objective of these two systems suggests an escalating 'arms race' between attackers and defenders: as the verification systems get better, the adversaries are expected to follow.

Modeling conditional generation of neural fake news using GROVER

GROVER adopts a language modeling framework that allows for flexible decomposition of an article in the order p(domain, date, authors, headline, body). At inference time, a set of fields F is provided as context, with each field f delimited by field-specific start and end tokens. During training, inference is simulated by randomly partitioning an article's fields into two disjoint sets F1 and F2. The researchers also randomly drop out individual fields with probability 10%, and drop out all but the body with probability 35%, which lets the model learn how to perform unconditional generation.

For language modeling, two evaluation modes are considered: unconditional, where no context is provided and the model must generate the article body, and conditional, in which the full metadata is provided as context. The researchers evaluate the quality of disinformation generated by their largest model, GROVER-Mega, using p = 0.96. The articles are classified into four classes: human-written articles from reputable news websites (Human News), GROVER-written articles conditioned on the same metadata (Machine News), human-written articles from known propaganda websites (Human Propaganda), and GROVER-written articles conditioned on the propaganda metadata (Machine Propaganda).

Image source: Defending Against Neural Fake News

When rated by qualified workers on Amazon Mechanical Turk, it was found that although the quality of GROVER-written news is not as high as human-written news, GROVER is very skilled at rewriting propaganda: the overall trustworthiness score of propaganda increases from 2.19 to 2.42 (out of 3) when rewritten by GROVER.
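As a loose illustration of the field-dropout scheme described in the modeling section above, here is a short sketch. It shows only the sampling of which fields stay in a training example; it is not the authors' implementation, and the decision to always keep the body is an assumption made for the example.

```python
import random

FIELDS = ["domain", "date", "authors", "headline", "body"]

def sample_context_fields(p_drop_field=0.10, p_body_only=0.35):
    # With probability 0.35, keep only the article body (no metadata context),
    # which teaches the model unconditional body generation; otherwise drop each
    # metadata field independently with probability 0.10.
    if random.random() < p_body_only:
        return ["body"]
    return [f for f in FIELDS if f == "body" or random.random() >= p_drop_field]

if __name__ == "__main__":
    random.seed(0)
    for _ in range(3):
        print(sample_context_fields())
```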
Neural fake news detection using GROVER

The role of the verifier is to mitigate the harm of neural fake news by classifying articles as human- or machine-written. Neural fake news detection is framed as a semi-supervised problem: the neural verifier (or discriminator) has access to many human-written news articles from March 2019 and earlier, i.e., the entire RealNews training set, but only limited access to generations and more recent news articles. For example, 10k news articles from April 2019 are used for generating article body text, and another 10k articles serve as the human-written set; the combined data is split in a balanced way, with 10k articles for training, 2k for validation, and 8k for testing.

The verifier is evaluated in two modes. In the unpaired setting, the verifier is given single news articles, which must be classified independently as human or machine. In the paired setting, the model is given two news articles with the same metadata, one real and one machine-generated, and must assign the machine-written article a higher "machine" probability than the human-written article. Both modes are evaluated in terms of accuracy.

Image source: Defending Against Neural Fake News

It was found that the paired setting is significantly easier than the unpaired setting across the board, suggesting that it is often difficult for the model to calibrate its predictions. Second, model size is highly important in the arms race between generators and discriminators: using GROVER to discriminate GROVER's generations results in roughly 90% accuracy across the range of sizes; if a larger generator is used, accuracy slips below 81%, and conversely, if the discriminator is larger, accuracy is above 98%. Lastly, other discriminators perform worse than GROVER overall, which suggests that effective discrimination requires an inductive bias similar to the generator's.

The researchers thus found that GROVER can rewrite propaganda articles, with humans rating the rewritten versions as more trustworthy, while at the same time GROVER can also defend against such models. They are of the opinion that an ensemble of deep generative models such as GROVER should be used to analyze the content of a text.

The workings of the GROVER model have caught many people's attention.

https://twitter.com/str_t5/status/1137108356588605440
https://twitter.com/currencyat/status/1137420508092391424

While some find this an interesting mechanism to combat fake news, others point out that it doesn't matter whether GROVER can identify its own texts if it can't identify the texts generated by other models, and that releasing a model like GROVER could turn out to be extremely irresponsible rather than defensive.

A user on Reddit says, "These techniques for detecting fake news are fundamentally misguided. You cannot just train a statistical model on a bunch of news messages and expect it to be useful in detecting fake news. The reason for this should be obvious: there is no real information about the label ('fake' vs 'real' news) encoded in the data. Whether or not a piece of news is fake or real depends on the state of the external world, which is simply not present in the data. The label is practically independent of the data."

Another user on Hacker News comments, "Generative neural networks these days are both fascinating and depressing - feels like we're finally tapping into how subsets of human thinking & creativity work. But that knocks us off our pedestal, and threatens to make even the creative tasks we thought were strictly a human specialty irrelevant; I know we're a long way off from generalized AI, but we seem to be making rapid progress, and I'm not sure society's mature enough or ready for it.
Especially if the cutting edge tools are in the service of AdTech and such, endlessly optimizing how to absorb everybody's spare attention. Perhaps there's some bright future where we all just relax and computers and robots take care of everything for us, but can't help feeling like some part of the human spirit is dying."

A few users feel that this kind of 'generate and detect your own fake news' model will become unnecessary in the future: it is only a matter of time before text written by algorithms is indistinguishable from human-written text, at which point there will be no way to tell such articles apart. One user suggests, "I think to combat fake news, especially algorithmic one, we'll need to innovate around authentication mechanism that can effectively prove who you are and how much effort you put into writing something. Digital signatures or things like that."

For more details about the GROVER model, head over to the research paper.

Worried about Deepfakes? Check out the new algorithm that manipulate talking-head videos by altering the transcripts
Speech2Face: A neural network that "imagines" faces from hearing voices. Is it too soon to worry about ethnic profiling?
OpenAI researchers have developed Sparse Transformers, a neural network which can predict what comes next in a sequence


Shoshana Zuboff on 21st century solutions for tackling the unique complexities of surveillance capitalism

Savia Lobo
05 Jun 2019
4 min read
The Canadian Parliament's Standing Committee on Access to Information, Privacy and Ethics hosted the hearing of the International Grand Committee on Big Data, Privacy and Democracy from Monday, May 27 to Wednesday, May 29. Witnesses from at least 11 countries appeared before representatives to testify on how governments can protect democracy and citizen rights in the age of big data. This part of the hearing, which took place on May 28, covers Shoshana Zuboff's take on how to tackle the complexities of surveillance capitalism, along with the 21st-century solutions she proposes.

Shoshana Zuboff, author of 'The Age of Surveillance Capitalism', talks about the economic imperatives within surveillance capitalism. Zuboff says surveillance capitalism rests on the unilateral claiming of private human experience and its translation into behavioral data, and the resulting predictions are sold in a new kind of marketplace that trades exclusively in human futures. Deconstructing the competitive dynamics of these markets reveals the new imperatives: scale, because good predictions need a lot of data (economies of scale), and scope, because good predictions need a variety of data.

She shared a brief quote from a data scientist: "We can engineer the context around a particular behavior and force change. That way we are learning how to write the music and then we let the music make them dance." This behavioral modification is systemically institutionalized on a global scale and mediated by a now ubiquitous digital infrastructure.

She further explains that the kind of law and regulation needed today will be 21st-century solutions aimed at the unique 21st-century complexities of surveillance capitalism. She briefly mentions three arenas in which legislative and regulatory strategies can effectively align with the structure and consequences of surveillance capitalism:

First, we need lawmakers to devise strategies that interrupt and in many cases outlaw surveillance capitalism's foundational mechanisms. This includes the unilateral taking of private human experience as a free source of raw material and its translation into data; the extreme information asymmetries necessary for predicting human behavior; the manufacture of computational prediction products based on the unilateral and secret capture of human experience; and the operation of prediction markets that trade in human futures.

Second, from the point of view of supply and demand, surveillance capitalism can be understood as a market failure. Every piece of research over the last decades has shown that when users are informed of the backstage operations of surveillance capitalism they want no part of it: they want protection, they reject it, they want alternatives. We therefore need laws and regulatory frameworks designed to advantage companies that want to break with the surveillance capitalist paradigm. Forging an alternative trajectory to the digital future will require alliances of new competitors who can summon and institutionalize an alternative ecosystem. True competitors that align themselves with the actual needs of people and the norms of market democracy are likely to attract just about every person on earth as their customers.

Third, lawmakers will need to support new forms of citizen action, collective action, just as nearly a century ago workers won legal protection for their rights to organize, to bargain, and to strike.
New forms of citizen solidarity are already emerging: in municipalities that seek an alternative to the Google-owned smart city future, in communities that want to resist the social costs of so-called disruption imposed for the sake of others' gain, and among workers who seek fair wages and reasonable security in the precarious conditions of the so-called gig economy.

She says, "Citizens need your help but you need citizens because ultimately they will be the wind behind your wings, they will be the sea change in public opinion and public awareness that supports your political initiatives."

"If together we aim to shift the trajectory of the digital future back toward its emancipatory promise, we resurrect the possibility that the future can be a place that all of us might call home," she concludes.

To know more, you can listen to the full hearing video, titled "Meeting No. 152 ETHI - Standing Committee on Access to Information, Privacy and Ethics", on ParlVU.

WWDC 2019 highlights: Apple introduces SwiftUI, new privacy-focused sign in, updates to iOS, macOS, and iPad and more
Experts present most pressing issues facing global lawmakers on citizens' privacy, democracy and rights to freedom of speech
Apple previews iOS 13: Sign in with Apple, dark mode, advanced photo and camera features, and an all-new Maps experience


Max Fatouretchi explains the 3 main pillars of effective Customer Relationship Management

Packt Editorial Staff
03 Jun 2019
6 min read
Customer Relationship Management (CRM) is about process efficiency, reducing operational costs, and improving customer interactions and experience. The never-ending CRM journey can be beautiful and exciting, and it is something that matters to all the stakeholders in a company. One important principle is that CRM matters to every role in a company, and everyone needs to feel a sense of ownership right from the beginning of the journey. In this article we will look at the three main pillars of effective customer relationship management.

This article is an excerpt from the book The Art of CRM, written by Max Fatouretchi. Max, founder of the Academy4CRM institute, draws on over 20 years of experience and 200 CRM implementations worldwide. The book covers modern CRM opportunities and challenges based on the author's experience, including AI, machine learning, cloud hosting, and GDPR compliance.

Three key pillars of CRM

The main role of the architect is to design a solution that not only satisfies the needs and requirements of all the different business users, but at the same time has the agility and structure to provide a good foundation for future applications and extensions. Having understood the drivers and the requirements, you are ready to establish the critical quality properties the system will have to exhibit, and to identify scenarios that characterize each of them. The output of this process is a so-called quality attribute tree, covering attributes such as usability, availability, performance, and evolution. You always need to consider that the CRM rollout will affect everyone in the company and, above all, that it needs to support the business strategies while improving operational efficiency, enabling business orchestration, and improving customer experience across all channels.

Technically speaking, there are three main pillars in any CRM implementation that deliver value to the business:

Operational CRM: the operational CRM covers the marketing, sales, and service functionalities. Later in the book we cover case studies from different projects I've personally been engaged with, across a wide range of applications.

Analytical CRM: the analytical CRM uses the data collected by the operational CRM to provide users and business leaders with individual KPIs, dashboards, and analytical tools that let them slice and dice the data about their business performance as needed. This is the foundation for business orchestration.

Collaboration CRM: the collaboration CRM provides the technology to integrate all kinds of communication channels and front ends with the core CRM, for internal and external users alike: employees, partners, and customers (so-called bring your own device). This includes support for different types of devices that can integrate with the core CRM platform and be administered with the same tools, leveraging the same infrastructure, including security and maintenance. It uses the same platform, the same authentication procedures, and the same workflow engine, and it fully leverages the core entities and data.

With these three pillars in place, you will be able to create a comprehensive view of your business and manage client communication over all your channels. Through this, you will have the ingredients for predictive client insights, business intelligence, and marketing, sales, and service automation.
But before we move on, Figure 1.1 illustrates the three pillars of a CRM solution and their related modules, which should help you visualize what we've just talked about:

Figure 1.1: The three pillars of CRM

It is also important to remember that any CRM journey always begins with either a business strategy and/or a business pain point. All of the stakeholders must have a clear understanding of where the company is heading and what the business drivers for the CRM investment are. It is equally important for all CRM team members to remember that the potential success or failure of a CRM project rests primarily with the business stakeholders, not the IT staff.

Role-based ownership in CRM

Typically, the business decision makers are the ones identifying the need for and sponsoring the CRM solution. Often, but not always, the IT department is tasked with selecting the platform and conducting due diligence with a number of vendors. More importantly, while different business users may have different roles and expectations of the system, everyone needs a common understanding of the company's vision, and team members need to support the same business strategies at the highest level. The team works together towards the success of the project for the company as a whole while still having individual expectations. In addition, you will notice that the focus and level of engagement of the people involved in the project vary over its lifecycle.

It also helps to categorize the characteristics of team members, from visionaries to leadership, stakeholders, and owners. Key sponsors tend to be visionaries and are usually the first players to actively support and advocate a CRM strategy; they define the tactics, while end users ultimately take more ownership during the deployment and operation phases. Figure 1.2 shows the engagement level of stakeholders, key users, and end users in a CRM implementation project: the visionaries set the company's vision and strategies for the CRM, the key users (department leads) are the key sponsors who promote the solution, and the end users engage in reviews and provide feedback.

Figure 1.2: CRM role-based ownership

Before we start development, we must have identified the stakeholders, have a crystal-clear vision of the functional requirements based on the business requirements, and have converted these into a detailed specification. All of this is done by business analysts, project managers, solution specialists, and architects, with the level of IT engagement driven by the outcome of this process.

This process will also help you define your metrics: business Key Performance Indicators (KPIs) and the TCO/ROI (Total Cost of Ownership and Return on Investment) figures for the project. These metrics are a compass and a measurement tool for the success of your CRM project; they help you justify your investment and allow you to measure the improvements you've made. You will also use these metrics as a design guide for an efficient solution that not only provides the functionality supporting the business requirements and the justification of your investment, but also delivers data for your CRM dashboards. This data can then help fine-tune the business processes for higher efficiency going forward.
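As a toy illustration of the TCO/ROI metric mentioned above, the sketch below shows only the arithmetic; the figures are hypothetical and not from the book.

```python
def simple_roi(total_benefit, total_cost_of_ownership):
    """Return ROI as a fraction: (benefit - cost) / cost."""
    return (total_benefit - total_cost_of_ownership) / total_cost_of_ownership


if __name__ == "__main__":
    # Hypothetical figures for illustration only.
    tco = 250_000       # licences, implementation, training, and support
    benefit = 400_000   # e.g. efficiency gains and added revenue over the same period
    print(f"ROI: {simple_roi(benefit, tco):.0%}")   # ROI: 60%
```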
In this article, we've looked at the three main pillars of a CRM system: operational CRM, analytical CRM, and collaboration CRM. Bringing CRM up to date, The Art of CRM shows how to add AI and machine learning, ensure compliance with GDPR, and choose between on-premise, cloud, and hybrid hosting solutions.

What can Artificial Intelligence do for the Aviation industry
8 programming languages to learn in 2019
Packt and Humble Bundle partner for a new set of artificial intelligence eBooks and videos


Time for data privacy: DuckDuckGo CEO Gabe Weinberg in an interview with Kara Swisher

Vincy Davis
28 May 2019
8 min read
On the latest Recode Decode episode, Recode co-founder Kara Swisher interviewed DuckDuckGo CEO Gabriel Weinberg about data tracking and why it is time for Congress to act, given that federal legislation is necessary in the current climate of constant surveillance. DuckDuckGo is an Internet search engine that emphasizes protecting searchers' privacy. Its market share in the U.S. is about 1%, compared to the more than 88% share held by Google. Below are some of the key highlights of the interview.

How DuckDuckGo is different from Google

DuckDuckGo, an internet privacy company, helps users "escape the creepiness and tracking on the internet". DuckDuckGo has been an alternative to Google for 11 years; it handles about a billion searches a month and is the fourth-largest search engine in the U.S. Weinberg states that "Google and Facebook are the largest traders of trackers" and claims that his company blocks trackers from hundreds of companies. DuckDuckGo also enables more encryption by steering users away from the unencrypted version of a website, which prevents Internet Service Providers (ISPs) from tracking the user.

When asked why he settled on the search business, Weinberg replied that with his tech background (tech policy at MIT) he has always been interested in search. After starting the business, he received many privacy queries, and that is when he realized two things: "One, searches are essentially the most private thing on the internet. You just type in all your deepest, darkest secrets and search, right? The second thing is, you don't need to actually track people to make money on search." He concluded that not tracking would make for a better user experience, and made the decision not to track people.

Read More: DuckDuckGo chooses to improve its products without sacrificing user privacy

The switch from contextual advertising to behavioral advertising

From the time the internet started until the mid-2000s, the prevailing model was contextual advertising. It followed a simple routine: sites sold their own ads and placed advertising based on the content of the article. After the mid-2000s, the model shifted to behavioral advertising, which produces the "creepy ads, the ones that kind of follow you around the internet."

Weinberg added that when website publishers in the Google Network of content sites sold their biggest inventory, banner advertising went at the top of the page. To extract more money, the bottom of the page was sold to ad networks that target the site's content and audience. These advertisements are administered, sorted, and maintained by Google under the name AdSense, which helped Google collect all of this behavioral data: if a user searches for something, Google can follow them around with that search. As these advertisements became more lucrative, publishers ceded most of their pages to behavioral advertising, and there has been no real regulation in tech to prevent this. Through these trackers, companies like Google and Facebook, and many others, collect user information and browsing history, including purchase history, location history, search history, and even the user's current location.
Read More: Ireland's Data Protection Commission initiates an inquiry into Google's online Ad Exchange services

Read More: Advocacy groups push FTC to fine Facebook and break it up for repeatedly violating the consent order and unfair business practices

Weinberg explains, "when you go to, now, a website that has advertising from one of these networks, there's a real-time bidding against you, as a person. There's an auction to sell you an ad based on all this creepy information you didn't even realize people captured."

People do care about privacy

Weinberg says that "before you knew about it, you were okay with it because you didn't realize it was so invasive, but after Cambridge Analytica and all the stories about the tracking, that number just keeps going up and up and up." He also talked about the "do not track" setting available in the privacy settings of most browsers: "People are like, 'No one ever goes into settings and looks at privacy.' That's not true. Literally, tens of millions of Americans have gone into their browser settings and checked this thing. So, people do care!" Weinberg believes 'do not track' is a better mechanism for privacy laws because, once the user flips the setting, sites would no longer be allowed to track them. He hopes Congress passes a 'do not track' mechanism, as it would allow everyone in the country to opt out of being tracked.

On challenging Google

One main issue DuckDuckGo faces is that not many people are aware of it. Weinberg says, "There's 20 percent of people that we think would be interested in switching to DuckDuckGo, but it's hard to convey all these privacy concepts." He also claimed that companies like Google alter people's searches through a 'filter bubble'. As an example, he added, "when you search, you expect to get the results right? But we found that it varies a lot by location." Last year, DuckDuckGo accused Google of search personalization that contributes to "filter bubbles". In 2012, DuckDuckGo ran a study suggesting that Google's filter bubble may have significantly influenced the 2012 U.S. Presidential election by inserting tens of millions more links for Obama than for Romney in the run-up to that election.

Read More: DeepMind researchers provide theoretical analysis on recommender system, 'echo chamber' and 'filter bubble effect'

How to prevent online tracking

Other than using DuckDuckGo and avoiding, say, Google's internet home devices, Swisher asked Weinberg what other ways there are to protect ourselves from being tracked online. Weinberg says there are plenty of options: "For Google, there are actually alternatives in every category." For email, he suggested ProtonMail and FastMail. When asked about Facebook, he admitted that "there aren't great alternatives to it" and added cheekily, "Just leave it." He also noted that devices themselves ship with a number of privacy settings, mentioned the DuckDuckGo blog spreadprivacy.com, which provides advice and tips, and pointed to steps users can take themselves, such as turning off ad tracking on the device or using end-to-end encryption.

On facial recognition systems

Weinberg says "facial recognition is hard": a person can wear some minor accessory to avoid being recognized on camera. He admits that "you're going to need laws" to regulate its use and thinks San Francisco started a great trend by banning the technology.
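As a small technical aside on the 'do not track' setting discussed above: browsers with the setting enabled express the preference as the standard DNT HTTP request header. The sketch below simply sets that header by hand for illustration; today, honoring it is entirely voluntary for the receiving site, which is exactly the gap Weinberg wants legislation to close.

```python
import requests

# Send a request with the "Do Not Track" preference expressed as the DNT header.
# "1" means the user does not want to be tracked; whether the site honors it
# is currently up to the site itself.
response = requests.get("https://example.com", headers={"DNT": "1"})
print(response.status_code)
```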
Swisher and Weinberg also discussed many other points, including Section 230 of the Communications Decency Act and the control of sensitive data on the internet. Weinberg asserted that the U.S. needs a national bill like GDPR. Questions were also raised about Amazon's growing advertising business alongside Google and Facebook, and Weinberg dismissed the likelihood of a DuckDuckGo for YouTube any time soon.

Many users agree with Gabriel Weinberg that tracking should be opt-in and that it is time to make 'do not track' the norm. A user on Hacker News commented, "Discounting Internet by axing privacy is a nasty idea. Privacy should be available by default without any added price tags." Another user added, "In addition to not stalking you across the web, DDG also does not store data on you even when using their products directly. For me that is still cause for my use of DDG."

However, as Weinberg mentioned, there are still people who do not mind being tracked online, perhaps because they are not aware of the big trades that take place behind a single click. A user on Reddit gives an apt explanation: "Privacy matters to people at home, but not online, for some reason. I think because it hasn't been transparent, and isn't as obvious as a person looking in your windows. That slowly seems to be changing as more of these concerns are making the news, more breaches, more scandals. You can argue the internet is 'wandering outside', which is true to some degree, but it doesn't feel that way. It feels private, just you and your computer/phone, but it's not. What we experience is not matching up with reality. That is what's dangerous/insidious about the whole thing. People should be able to choose when to make themselves 'public', and you largely can't because it's complicated and obfuscated."

For more details about their conversation, check out the full interview.

Speech2Face: A neural network that "imagines" faces from hearing voices. Is it too soon to worry about ethnic profiling?
'Facial Recognition technology is faulty, racist, biased, abusive to civil rights; act now to restrict misuse' say experts to House Oversight and Reform Committee
GDPR complaint in EU claim billions of personal data leaked via online advertising bids


Why does Oculus CTO John Carmack prefer 2D VR interfaces over 3D Virtual Reality interfaces?

Sugandha Lahoti
23 May 2019
6 min read
Creating immersive 3D experiences in virtual reality is the new norm, and tech companies around the world are attempting to perfect these experiences to make them as natural, immersive, and realistic as possible. However, a certain portion of virtual reality creators still believe that building a new interaction paradigm in 3D is actually worse than 2D. One of them is John Carmack, CTO of Oculus VR, maker of the popular Oculus virtual reality headsets. He has penned a Facebook post explaining why he thinks 3D interfaces are usually worse than 2D interfaces.

Carmack lays out a number of points to justify his assertion and argues that the majority of browsing, configuring, and selecting interactions benefit from being designed in 2D. He had written an internal post clarifying his views back in 2017. Recently, while reviewing a VR development job description before an interview, he saw that one of the responsibilities for the open Product Management Leader position was to "Create a new interaction paradigm that is 3D instead of 2D based", which prompted him to publish this post.

Splitting information across multiple depths is harmful

Carmack says splitting information across multiple depths forces our eyes to re-verge and re-focus. He explains this with an analogy: "If you have a convenient poster across the room in your visual field above your monitor - switch back and forth between reading your monitor and the poster, then contrast with just switching back and forth with the icon bar at the bottom of your monitor." Static HMD optics should have their focus point at the UI distance. If we want to be able to scan information as quickly and comfortably as possible, Carmack argues, it should all be at the same distance from the viewer, and it should not be too close.

As Carmack observes, you don't really see in 3D: you see two 2D planes from which your brain extracts a certain amount of depth information. A Hacker News user points out, "As a UI goes, you can't actually freely use that third dimension, because as soon as one element obscures another, either the front element is too opaque to see through, in which case the second might as well not be there, or the opacity is not 100% in which case it just gets confusing fast. So you're not removing a dimension, you're acknowledging it doesn't exist. To truly 'see in 3D' would require a fourth-dimension perspective. A 4D person could use a 3D display arbitrarily, because they can freely see the entire 3D space, including seeing things inside opaque spheres, etc, just like we can look at a 2D display and see the inside of circles and boxes freely."

However, another user critiqued Carmack's claim that splitting information across multiple depths is harmful: "Frequently jumping between dissimilar depths is harmful. Less frequent, sliding, and similar depths, can be wonderful, allowing the much denser and easily accessible presentation of information." A general takeaway from that discussion is that most of the current commentary about VR comes from a community focused on a particular niche, current VR gaming, one with particular and severe constraints and priorities that don't characterize the entirety of a much larger design space.

Visualize a 3D environment as a pair of 2D projections

Carmack says that unless we move significantly relative to the environment, those views stay essentially the same 2D projections.
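To make the "pair of 2D projections" point concrete, here is a minimal perspective-projection sketch; it is a generic illustration of the idea, not anything from Carmack's post. Every 3D point the eye receives collapses onto a 2D image plane, and many different depths map to the same 2D position.

```python
def project(point, focal_length=1.0):
    """Perspective-project a 3D point (x, y, z) onto the z = focal_length plane."""
    x, y, z = point
    if z <= 0:
        raise ValueError("point must be in front of the viewer (z > 0)")
    scale = focal_length / z
    return (x * scale, y * scale)


if __name__ == "__main__":
    # Two points at different depths land on the same 2D position:
    # each eye only ever receives such a flattened projection.
    print(project((1.0, 2.0, 4.0)))   # (0.25, 0.5)
    print(project((2.0, 4.0, 8.0)))   # (0.25, 0.5)
```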
He further adds that even when designing a truly 3D UI, developers have to keep this in mind to stop 3D elements from overlapping each other when projected onto the view. It can also be difficult for 2D UX and product designers to transfer their thinking over to designing immersive products.

https://twitter.com/SuzanneBorders/status/1130231236243337216

However, building in 3D is important for content that is naturally intuitive in 3D. This, as Carmack mentions, is "true 3D" content, for which you get a 3D interface whether you like it or not. A user on Hacker News points out, "Sometimes things which we struggle to decode in 2D are just intuitive in 3D like knots or the run of wires or pipes."

Use 3D elements for efficient UI design

Carmack says that 3D may have a small place in efficient UI design as a "treatment" for UI elements. He gives examples such as slightly protruding 3D buttons sticking out of the UI surface in places where we would otherwise use color changes or faux-3D effects like bevels or drop shadows. He says, "the visual scanning and interaction is still fundamentally 2D, but it is another channel of information that your eye will naturally pick up on."

This doesn't mean that VR interfaces should just be "floating screens". The core advantage of VR from a UI standpoint is the ability to use the entire field of view and to extend it by "glancing" to the sides. Content selection, Carmack says, should go off the sides of the screens and have a size and count that leaves half a tile visible at each edge when looking straight ahead. Explaining his reasoning, he adds that actually interacting with UI elements at angles well away from the center is not good for the user: if they haven't rotated their entire body, focusing there for long strains the neck, so the idea is to glance, then scroll. He also advises putting less frequently used UI elements off to the sides or back.

A Twitter user agreed with Carmack's comment on floating screens.

https://twitter.com/SuzanneBorders/status/1130233108073144320

Most users agreed with Carmack's assertion, sharing their own experiences. A comment on Reddit reads, "He makes a lot of good points. There are plenty examples of 'real life' instances where the existence and perception of depth isn't needed to make useful choices or to interact with something, and that in fact, as he points out, it's actually a nuisance to have to focus on multiple planes, back and forth, to get something done."

https://twitter.com/feiss/status/1130524764261552128
https://twitter.com/SculptrVR/status/1130542662681939968
https://twitter.com/jeffchangart/status/1130568914247856128

However, some users point out that this may also be because the tools for doing full 3D designs are nowhere near as mature as the tools for doing 2D designs.

https://twitter.com/haltor/status/1130600718287683584

A Twitter user aptly observes: "3D is not inherently superior to 2D."

https://twitter.com/Clarice07825084/status/1130726318763462656

Read the full text of John's article on Facebook. More insights on this Twitter thread.

Google announces Glass Enterprise Edition 2: an enterprise-based augmented reality headset
Oculus Rift S: A new VR with inside-out tracking, improved resolution and more!
What's new in VR Haptics?

DeOldify: Colorising and restoring B&W images and videos using a NoGAN approach

Savia Lobo
17 May 2019
5 min read
Wouldn't it be magical if we could watch old black-and-white movie footage and images in color? Deep learning, more precisely GANs, can help here. A recent project by software researcher Jason Antic, tagged 'DeOldify', is a deep learning based project for colorizing and restoring old images and film footage.

https://twitter.com/johnbreslin/status/1127690102560448513
https://twitter.com/johnbreslin/status/1129360541955366913

In one of the sessions at the recent Facebook Developer Conference, held from April 30 to May 1, 2019, Antic, along with Jeremy Howard and Uri Manor, talked about how GANs can be used to reconstruct images and videos, for example by increasing their resolution or adding color to black-and-white film. However, they also pointed out that GANs can be slow, and difficult and expensive to train. They demonstrated how to colorize old black-and-white movies and drastically increase the resolution of microscopy images using new PyTorch-based tools from fast.ai, the Salk Institute, and DeOldify that can be trained in just a few hours on a single GPU.

https://twitter.com/citnaj/status/1123748626965114880

DeOldify makes use of NoGAN training, which keeps the benefits of GAN training (wonderful colorization) while eliminating the nasty side effects (like flickering objects in video). NoGAN training is crucial for getting stable, colorful video. An example of DeOldify producing a stable video is shown in the repository.

Source: GitHub

Antic explains, "the video is rendered using isolated image generation without any sort of temporal modeling tacked on. The process performs 30-60 minutes of the GAN portion of 'NoGAN' training, using 1% to 3% of Imagenet data once. Then, as with still image colorization, we 'DeOldify' individual frames before rebuilding the video."

The three models in DeOldify

DeOldify includes three models: video, stable, and artistic. Each model has its own strengths, weaknesses, and use cases. The video model is for video; the other two are for images.

Stable

https://twitter.com/johnbreslin/status/1126733668347564034

This model achieves the best results with landscapes and portraits and produces fewer "zombies" (images where faces or limbs stay gray rather than being colored in properly). It generally produces fewer unusual miscolorations than the artistic model, but it is also less colorful overall. The stable model uses a resnet101 backbone on a U-Net, with an emphasis on the width of the layers on the decoder side. It was trained with three critic pretrain/GAN cycle repeats via NoGAN, in addition to the initial generator/critic pretrain/GAN NoGAN training, at 192px. This adds up to a total of 7% of Imagenet data trained once (3 hours of direct GAN training).

Artistic

https://twitter.com/johnbreslin/status/1129364635730272256

This model achieves the highest-quality image colorization with respect to interesting details and vibrance. To get there, however, you have to adjust the rendering resolution, or render_factor, and the model does not do as well as the stable model in a few key common scenarios: nature scenes and portraits. The artistic model uses a resnet34 backbone on a U-Net, with an emphasis on the depth of the layers on the decoder side. It was trained with five critic pretrain/GAN cycle repeats via NoGAN, in addition to the initial generator/critic pretrain/GAN NoGAN training, at 192px. This adds up to a total of 32% of Imagenet data trained once (12.5 hours of direct GAN training).
Video

https://twitter.com/citnaj/status/1124719757997907968

The video model is optimized for smooth, consistent, and flicker-free video. It is the least colorful of the three models, though it comes close to the stable model. In terms of architecture, it is the same as the stable model but differs in training: it is trained on a mere 2.2% of Imagenet data once at 192px, using only the initial generator/critic pretrain/GAN NoGAN training (1 hour of direct GAN training).

DeOldify combines several approaches:

- Self-Attention Generative Adversarial Network: Antic modified the generator, a pre-trained U-Net, to use spectral normalization and self-attention.
- Two Time-Scale Update Rule: plain one-to-one generator/critic iterations with a higher critic learning rate, modified to incorporate a "threshold" critic loss that makes sure the critic is "caught up" before moving on to generator training. This is particularly useful for the NoGAN method.
- NoGAN: this doesn't have a separate research paper; it is a new type of GAN training developed to solve key problems in the previous DeOldify model. NoGAN keeps the benefits of GAN training while spending minimal time on direct GAN training (a rough sketch of the idea follows below).

Antic says, "I'm looking to make old photos and film look reeeeaaally good with GANs, and more importantly, make the project useful." He adds, "I'll be actively updating and improving the code over the foreseeable future. I'll try to make this as user-friendly as possible, but I'm sure there's going to be hiccups along the way."

To learn more about the hardware components and other details, head over to Jason Antic's GitHub page.

Training Deep Convolutional GANs to generate Anime Characters [Tutorial]
Sherin Thomas explains how to build a pipeline in PyTorch for deep learning workflows
Using deep learning methods to detect malware in Android Applications
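As a rough, PyTorch-style sketch of the NoGAN idea described above: pretrain the generator and the critic separately, then finish with only a short adversarial phase. This is an illustration under assumed model, optimizer, and data-loader objects, not the DeOldify or fastai code.

```python
import torch


def nogan_train(generator, critic, g_opt, c_opt, data_loader, adv_steps=1000):
    """Loose sketch of NoGAN training; `generator`, `critic`, the optimizers and
    `data_loader` (yielding grey/colour image pairs) are assumed to exist."""
    pixel_loss = torch.nn.L1Loss()
    bce = torch.nn.BCEWithLogitsLoss()

    # 1) Pretrain the generator with a plain pixel loss (no adversarial signal).
    for grey, colour in data_loader:
        g_opt.zero_grad()
        pixel_loss(generator(grey), colour).backward()
        g_opt.step()

    # 2) Pretrain the critic to separate real images from frozen-generator output.
    for grey, colour in data_loader:
        c_opt.zero_grad()
        real_logits = critic(colour)
        fake_logits = critic(generator(grey).detach())
        loss = bce(real_logits, torch.ones_like(real_logits)) + \
               bce(fake_logits, torch.zeros_like(fake_logits))
        loss.backward()
        c_opt.step()

    # 3) Brief direct GAN phase: the generator tries to fool the pretrained critic.
    #    Only the generator's optimizer steps here; this phase is kept short.
    for step, (grey, _) in enumerate(data_loader):
        if step >= adv_steps:
            break
        g_opt.zero_grad()
        fake_logits = critic(generator(grey))
        bce(fake_logits, torch.ones_like(fake_logits)).backward()
        g_opt.step()
```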


What can Artificial Intelligence do for the Aviation industry

Guest Contributor
14 May 2019
6 min read
The use of AI (Artificial Intelligence) technology in commercial aviation has brought significant changes to the way flights are operated today. The world's leading airlines are now using AI tools and technologies to deliver a more personalized traveling experience to their customers. From building AI-powered airport kiosks to automating airline operations and security checks, AI will play increasingly critical roles in the aviation industry. Engineers have found that AI can help the aviation industry with machine vision, machine learning, robotics, and natural language processing.

Artificial intelligence has proven highly potent, and various research efforts have shown how its use can bring significant changes to aviation. A few airlines already use artificial intelligence for predictive analytics, pattern recognition, auto-scheduling, targeted advertising, and customer feedback analysis, with promising results for a better flight experience. A recent report shows that aviation professionals are considering using artificial intelligence to monitor pilot voices to help ensure a hassle-free flying experience for passengers. This technology is set to bring huge changes to the world of aviation.

Identification of passengers

There is no need to explain how modern inventions contribute to the betterment of mankind, and AI can help air transportation in numerous ways. Check-in before boarding is a vital task for an airline, and airlines can use artificial intelligence to simplify it; the same technology can also be used to identify passengers. American airline company Delta Air Lines took the initiative in 2017: its online check-in via the Delta mobile app and its ticketing kiosks have shown promising results, and many airlines are now taking similar features to a whole new level.

The Transportation Security Administration of the United States has introduced new AI technology to identify potential threats at John F. Kennedy, Los Angeles International, and Phoenix airports. Likewise, Hartsfield-Jackson Airport is planning to launch America's first biometric terminal. Once installed, AI technology will make the process of passenger identification fast and easy for officials. Security scanners, biometric identification, and machine learning are some of the AI technologies that will make a number of jobs easier for us and help predict disruption in airline services.

Baggage screening

Baggage screening is another tedious but important task that needs to be done at the airport, and AI has simplified the process. American Airlines once ran an app development competition focused on artificial intelligence, and Team Avatar won it with an app that allows users to determine the size of their baggage at the airport. Osaka Airport in Japan is planning to install the Syntech ONE 200, an AI technology developed to screen baggage across multiple passenger lanes. Such tools will not only automate the process of baggage screening but also help authorities detect illegal items effectively. Syntech ONE 200 is compatible with the X-ray security system, and it increases the probability of identifying potential threats.

Assisting customers

AI can be used to assist customers at the airport, and it can help a company reduce its operational and labor costs at the same time.
Airlines are now using AI technologies to help customers resolve issues quickly by providing accurate information about upcoming flights and trips on their internet-enabled devices. More than 52% of airlines across the world plan to install AI-based tools to improve their customer service functions in the next five years. Artificial intelligence can answer common customer questions, assist with check-in requests, report flight status, and more. Artificial intelligence is also used in air cargo for purposes such as revenue management, safety, and maintenance, and it has shown impressive results to date.

Maintenance prediction

Airlines are planning to implement AI technology to predict potential maintenance failures on aircraft. Leading aircraft manufacturer Airbus is taking measures to improve the reliability of aircraft maintenance with Skywise, a cloud-based data platform that helps fleets collect and record huge amounts of real-time data. The use of AI in predictive maintenance analytics will pave the way for a systematic approach to how and when aircraft maintenance should be done. Today, top-rated airlines already use artificial intelligence to make maintenance easier and improve the user experience at the same time.

Pitfalls of using AI in aviation

Despite being considered the future of the aviation industry, AI has some pitfalls. For instance, it takes time to implement, and it is not an ideal tool for every customer service scenario. The recent Ethiopian Airlines Boeing 737 incident was an eye-opener, and it clearly represents the risks of automated technology in the aviation sector: the Boeing 737 crashed a few minutes after taking off from the capital of Ethiopia, and the failure of the MCAS system was a key reason behind the fatal accident.

AI is also quite expensive: for example, if an airline plans to deploy a chatbot, it will have to invest more than $15,000. This makes such investments hard for small companies and could create a barrier between small and big airlines in the future. As the market becomes highly competitive, big airlines may come to dominate it, and small airlines might face an existential threat as a result.

Conclusion

The use of artificial intelligence in aviation has made many tasks easier for airlines and airport authorities across the world, from identifying passengers to screening bags and providing fast, efficient customer care. Unlike the software industry, the risks of real-life harm are exponentially higher in the aviation industry. While other industries started using this technology long ago, the adoption of AI in aviation has been one of caution, and rightly so. As the aviation industry embraces the benefits of artificial intelligence and machine learning, it must also invest in checks and balances to identify, reduce, and eliminate harmful consequences of AI, whether intended or otherwise.

As Silicon Valley reels from its ethical dilemmas, the aviation industry will do well to learn from Silicon Valley while making the transition to a smart future. The aviation industry, known for its rigorous safety measures and processes, may in fact have a thing or two to teach Silicon Valley when it comes to designing, adopting, and deploying AI systems into live systems with high-risk profiles.
Author Bio
Maria Brown is a content writer and blogger who handles social media optimization for 21Twelve Interactive. She believes in sharing her solid knowledge base with a focus on entrepreneurship and business. You can find her on Twitter.

How much does it cost to build an IoT app?

Guest Contributor
13 May 2019
6 min read
According to a Gartner study, spending on connected things (IoT-related services) was estimated at around $235 billion for 2017, and the number of connected things in use is predicted to reach 14.2 billion by the end of 2019. The number of connected devices across the globe is also expected to grow by around 10 billion by the end of 2020. Research by IDC (International Data Corporation) shows that the market transformation driven by IoT scaled up to approximately $1.9 trillion in 2013 and will reach $7.1 trillion by 2020. These stats draw a clear picture that the Internet of Things is making businesses agile, fast, user-engaging and, most importantly, connected with each other. The areas where IoT is expected to be used are growing exponentially. However, with that expansion comes a burgeoning question: "What is the cost of building an IoT solution?"

Before estimating the cost of developing an IoT app, you should have a clear answer to the following questions:

What is the main idea or goal of your IoT app?
Who will be the users of your upcoming IoT app?
What benefits will you provide to the users through the app?
What hardware are you going to use for the app development?
What type of features will your IoT app have?
What might be the possible challenges and issues of your IoT app?

It's important to answer these questions because the more detail you provide to your IoT development partner, the better your app will turn out. Getting an insight into each IoT app development phase gives the developer a clear picture of the future app. It also saves a lot of time by eliminating the chances of making unnecessary corrections. So, it's essential to give significant consideration to the above-mentioned questions. Next, let's move to the various factors that help in estimating the cost of developing an IoT app.

The time required to develop an IoT app

The development phase eats up most of the time when it comes to creating an IoT app for business purposes. The process starts with app information analysis and proceeds to prototype development and visual design creation. The phases include features and functionality research, UI/UX design, interface design, and logo and icon selection. Your IoT app development time also depends on the project size, the use of new technologies and tools, uncertain integration requirements, a growing number of visual elements, and complex UI and UX feature integration. Every aspect that consumes time pushes the app's cost up. Thus, you can expect a high cost for your IoT app if you wish to incorporate all the above features in your connected environment.

Integrating advanced features in your IoT app

Often your app may require advanced feature integration such as payment gateways, geo-location, data encryption, third-party API integration, all-across device synchronization, auto-learning feeds, CMS integration, and so on. Integrating advanced features like social media and geo-location functionality takes much more effort and time compared to simpler features. This ultimately increases the app's cost. You can hire programmers for integrating these advanced features. Generally, the hourly rates of professional designers and programmers depend on the region the developers reside in, such as:

The cost in Eastern Europe is $30-50/hour
The cost in Western Europe is $60-130/hour
The cost in North America is $50-150/hour
The cost in India is $20-50/hour

Choose IoT developers accordingly, based on the development cost in your region; a rough back-of-the-envelope calculation is sketched below.
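To get a feel for how these hourly rates translate into a budget, here is a rough, back-of-the-envelope sketch in TypeScript. The roles, hour counts, and mid-range rates are illustrative placeholders, not estimates from any real project.

```typescript
// Rough IoT app budget: assumed hours per role multiplied by an hourly rate
// picked from the regional ranges above. All numbers are placeholders.
interface RoleEstimate {
  role: string;
  hours: number;      // assumed effort for this role
  hourlyRate: number; // USD per hour
}

const team: RoleEstimate[] = [
  { role: "Back-end developer", hours: 300, hourlyRate: 40 },
  { role: "Front-end developer", hours: 200, hourlyRate: 35 },
  { role: "UI designer", hours: 80, hourlyRate: 40 },
  { role: "QA engineer", hours: 120, hourlyRate: 45 },
];

const totalCost = team.reduce((sum, r) => sum + r.hours * r.hourlyRate, 0);
console.log(`Rough development budget: $${totalCost.toLocaleString()}`); // $27,600
```

Hardware, advanced feature integration, and post-deployment support would come on top of a figure like this.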
Remember, the cost is just a rough idea and may vary with the app development requisites.

The team required for building an IoT app

Like any normal app, IoT app development requires a team of diligent and skilled developers who possess ample know-how of the latest technologies and development trends. Hiring experienced developers will unquestionably cost more and push the overall price of your IoT app development up. Your IoT app development team (with costs) may consist of:

Front-end developer - $29.20 per hour
Back-end developer - $29.59 per hour
UI Designer - $41.93 per hour
QA Engineer - $45 per hour
Project Manager - $53.85 per hour
Business Analyst - $39 per hour

The rates mentioned above are averages. Adding up the totals for each professional will give you the overall cost of IoT development. Don't consider this the final investment in the app, as it may vary according to the project size, requisites, and other parameters.

Post app development support and maintenance

The development of an IoT app doesn't end at deployment; rather, the real phase starts just after it. This is the post-production phase, where the development company is supposed to provide support for the delivered project. If you have hired developers for your IoT app development, make sure that they are ready to offer you the best post-deployment support for your app. It can be related to adding new features to the app or resolving issues found during app performance. Also, make sure that they deliver clean code so that anyone with the same skills can easily interpret and modify it to make future changes.

Cost based on the size of the project or app

Generally, projects are categorized into three sizes: small, medium, and large. Obviously, a small project or less complicated app costs less than a complex one. For example, developing an IoT application for modern home appliances like a refrigerator or home theater is relatively easy and cost-effective. On the contrary, if you wish to develop a self-driving vehicle, it would be an expensive plan to proceed with. Similarly, developing an IoT application for ECG monitors costs roughly $3,000-$4,000, whereas an IoT system created for fitness machines requires around $30,000-$35,000. This might not be the final cost of the app, and you may also discover some hidden costs later on.

Conclusion

It is recommended to take the assistance of an IoT app development company that has talented professionals to establish an in-depth IoT app development cost structure. Remember, the more complex your app is, the more it will cost. So make a clear plan by understanding the needs of your customers while also thinking about the type of features your IoT app will have.

About The Author

Tom Hardy is a senior technology developer at Sparx IT Solutions. He always stays updated with growing technology trends and keeps others apprised through his detailed and informative technology write-ups.

Why Ruby developers like Elixir

Guest Contributor
26 Apr 2019
7 min read
Learning a new technology stack requires time and effort, and some developers prefer to stick with their habitual ways. This is one of the major reasons why developers stick with Ruby. Ruby libraries are very mature, making it a very productive language used by many developers worldwide. However, more and more experienced Ruby coders are turning to Elixir. Why is that? Let's look at the ins and outs of Elixir and what makes it so special for Ruby developers.

What is Elixir?

Elixir is a vibrant and practical functional programming language created for developing scalable and maintainable applications. It runs on the Erlang VM, which is famous for running low-latency, distributed, and fault-tolerant systems. Elixir is currently being used successfully in web development. This general-purpose programming language first appeared back in 2011. It was created by José Valim, one of the major authors of Ruby on Rails, and was the result of Valim's efforts to solve the concurrency problems that Ruby on Rails has.

Phoenix Framework

If you are familiar with Elixir, you have probably heard of Phoenix as well. Phoenix is an Elixir-powered web framework, the one most frequently used by Elixir developers. It incorporates some of the best Ruby solutions while taking them to the next level, allowing developers to enjoy speed and maintainability at the same time.

Core features of Elixir

Over time, Elixir evolved into a dynamic language that numerous programmers around the world use for their projects. Below are its core features that make Elixir so appealing to web developers.

Scalability. Elixir code is executed within small isolated processes, and any information is transferred via messages. If an application has many users or is growing actively, Elixir is a perfect choice because it can cope with high loads without the need for extra servers.

Functionality. Elixir is built to make coding easier and faster. The language is well-designed for writing fast, short code that can be maintained easily.

Extensibility and DSLs. Elixir is an extensible language that allows coders to extend it naturally to specific domains, increasing their productivity significantly.

Interactivity. With tools like IEx, Elixir's interactive shell, developers can use auto-completion, debug, reload code, and format their documentation.

Error resistance. Elixir is one of the strongest systems in terms of fault tolerance. Elixir supervisors let developers describe what action to take when a failure occurs in order to achieve complete recovery. Supervisors can apply different strategies to create a hierarchical process structure, also referred to as a supervision tree, which keeps applications running smoothly and tolerant of errors.

Handy tooling. Elixir gives developers a wide range of handy tools like Hex and Mix. These tools help programmers improve their software in terms of discovery, quality, and sustainability.

Compatibility with Erlang. Elixir developers have full access to the Erlang ecosystem, because Elixir code executes on the Erlang VM.

Disadvantages of Elixir

The Elixir ecosystem isn't perfect or complete yet. Chances are there isn't a library to integrate with a service you are working on, so when coding in Elixir you may sometimes have to build your own libraries.
The reason behind this is that the Elixir community isn't as big as the communities of well-established languages like Ruby. Some developers believe that Elixir is a niche language and is difficult to get used to.

Functional programming. This feature of Elixir is both an advantage and a disadvantage at the same time. Most coding languages are object-oriented, so it might be hard for a developer to switch to a functional language.

Limited talent pool. Elixir is still quite new, and it's harder to find professional coders who have a lot of experience with this language compared to others. Yet, as the language gains more and more traction, companies and individual developers are showing more interest in it.

As you can see, there are some downsides to using Elixir as your programming language. However, due to the advantages it offers, some Ruby developers think that it is worth a try. Let's find out why.

Why Elixir is popular among Ruby developers

As you probably know, Ruby and Ruby on Rails are technologies that contribute a lot to programmers' happiness. There are many reasons for developers to love them, but are there any with respect to Elixir? If you analyze what makes programmers happy, you will come up with a list of a few important points. Let's name them and see how Elixir measures up.

Productive technologies. Elixir is extremely productive. With it, it is possible to grow and scale apps quickly.

Helpful frameworks, tools, and services. Though there are not many libraries in Elixir, their number is continuously growing thanks to the work of its team and contributors. For now, Phoenix and Elixir's extensive toolset are its strong side.

Speed of building new features. Due to Elixir's clean syntax, features can be implemented in fewer lines of code.

Active community. Though the Elixir community is still not massive, it is very friendly, active, and growing at a fast pace.

Comfort and satisfaction from development. Elixir programmers enjoy the fact that the language is good at both performance and development speed; they don't need to compromise on either of these important aspects.

As you can see, Elixir still has room for improvement, but it is progressing swiftly. In addition to the overall experience, there are other technical reasons that get Ruby developers hooked on Elixir:

Elixir solves the concurrency issue that Ruby currently has. As Elixir runs on the Erlang VM, it can handle distributed systems much more effectively than Ruby.

Elixir runs fast. In fact, it is faster than Ruby in terms of response and compilation times.

It fits decentralized systems perfectly. Unlike Ruby, Elixir uses message passing to convey commands, which makes it perfect for building fault-tolerant decentralized systems.

Scalability. Applications can be scaled easily with Elixir. If you expect the code of your project to be very large and the website you are building to get a lot of traffic, it's a good idea to choose Elixir. Thanks to built-in tools like umbrella projects, you can easily break the code into chunks that are easier to deal with.

Elixir is the first programming language after Ruby that considers code aesthetics and language UX, and it also cares about the libraries and the whole ecosystem.

Elixir is one of the most practical functional programming languages. In addition to being efficient, it has a modern-looking syntax similar to Ruby.

Clear and direct code representation.
The language is nearly homoiconic.

Open Telecom Platform (OTP). OTP gives Elixir fault-tolerance and concurrency capabilities.

Quick response. Elixir response times are under 100 ms, so there's no waste of time and you can handle numerous requests with the same hardware.

Zero downtime. With Elixir, you can reach 100% uptime without having to stop for updates; you can deliver updates to production without interfering with its performance.

No reinventing the wheel. With Elixir, developers can use existing coding patterns and libraries for their projects.

Exhaustive documentation. Elixir has instructive documentation that is easy to comprehend.

Despite being quite a young programming language, Elixir has already attracted a lot of devoted followers thanks to all the features described above. It has the potential to make programming easier, more fun, and in line with the demands of modern businesses. Choosing Elixir is definitely worth it for all the benefits the language offers. We believe that its clean and comprehensible syntax, fast performance, high stability, and error tolerance give Elixir a successful future. Technology giants like Discord, Bleacher Report, Pinterest, and Moz have been using Elixir for a while now, enjoying all the competitive advantages it has to offer.

Author Bio

Maria Redka is a Technology Writer at MLSDev, a web and mobile app development company in Ukraine. She has been writing content professionally for more than 3 years.

Streamline your application development process in 5 simple steps

Guest Contributor
23 Apr 2019
7 min read
Chief Information Officers (CIOs) are under constant pressure to deliver substantial results that meet business goals. Planning a project and seeing it through to the end is a critical requirement of an effective development process. In the fast-paced world of software development, getting results is essential for businesses to flourish. There is a certain pleasure you get from ticking tasks off your to-do lists; however, this becomes a burden when you are drowning in tasks. Signs of inefficient processes are prevalent in every business. Unhappy customers, stressed-out colleagues, disappointing code reviews, missed deadlines, and increases in costs are just some examples of the direct results of dysfunctional processes. By streamlining your workflow, you will be better placed to take advantage of modern technologies like machine learning and artificial intelligence, which can help you automate the workflow and make your daily processes even smoother. Listed below are 5 steps that can help you streamline your development process.

Step 1: Creating a Workflow

This is a preliminary step for companies that have not yet considered creating a better workflow. A task is not just something you can write down, complete, and tick off. Complex, software-related tasks are not like "do-the-dishes" tasks; there are usually many stages, such as planning, organizing, reviewing, and releasing. Regardless of the niche of your tasks, the workflow should be clear. You can use software tools such as Zapier, Nintex, or ProcessMaker to customize your workflow and assign levels of importance to particular tasks. This might appear to be micro-management at first, but once it becomes part of the daily routine, it starts to get easier. Creating a workflow is probably the most important factor to consider when you are preparing to streamline your software development processes. There are several steps involved in creating a workflow:

Mapping the Process
Process mapping focuses on visualizing the current development process, which allows a top-down view of how things are working. You can do process mapping with tools such as draw.io, Lucidchart, or Microsoft Visio.

Analyze the Process
Once you have a flowchart or a swim-lane diagram set up, use it to investigate the problems within the process. The problems can range from costs and time to employee motivation and other bottlenecks.

Redesign the Process
When you have identified the problems, try to solve them step by step. Working with people who are directly involved in the process (e.g., software developers) and gaining on-the-ground insight can prove very useful when redesigning processes.

Acquire Resources
You now need to secure the resources that are required to implement the new processes. In our context, this can range from buying licensed software to faster computers.

Implementing Change
It is highly likely that the changes will affect your existing systems, teams, and processes. Allocate time to solving these problems while keeping regular operations running.

Process Review
This phase might seem the easiest, but it is not. Once the changes are in place, you need to review them regularly so that the old problems do not crop up again.

Once the workflow is in place, all you have to do is identify the bugs in your workflow plan.
The bugs can range anywhere from slow tasks and the re-opening of finished tasks to dead tasks. What we have observed about workflows is that you do not get them right the first time. You need to take your time to edit and review the workflow while staying in its loop. The more transparent and active your process is, the easier it gets to spot problems and figure out solutions.

Step 2: Backlog Maintenance

Many times you assume all the tasks in your backlog to be important. They might well be, but this makes the backlog a little too jam-packed. Your backlog will not serve its purpose unless you actively keep it organized. A backlog, while being a good place to store tasks, is also home to tasks that will never see the light of day. A good practice, therefore, would be to either clean your backlog of dead tasks or combine them with tasks that have more importance in your overall workflow. If some of the tasks are relatively low-priority, we would recommend creating a separate backlog altogether. Backlogs are meant to be a database of tasks, but do not let that fact get in your way: you should not worry about deleting something important from your backlog, because if the task is important, it will come back. You can use tools like Trello or Slack to create and maintain a backlog.

Step 3: Standardized Procedure for Tasks

You should have an accurate definition of "done". With respect to software development, there are several things you need to consider before actually calling a task accomplished. These include:

Ensure all the features have been applied
The unit tests are finished
Software information is up to date
Quality assurance tests have been carried out
The code is in the master branch
The code is deployed to production

This is simply a template of what you can consider "done" with respect to a software development project. Like any template, it gets even better when you make your own additions and subtractions to it. Having a standardized definition of "done" helps remove confusion from the project, so that every employee understands every stage until the work is finished, and it also gives you time to think about what you are trying to achieve. Lastly, it is always wise to spend a little extra time completing a task phase so that you do not have to revisit it several times.

Step 4: Work in Progress (WIP) Control

The ultimate workflow killer is multitasking. Overloading your employees with constant tasks results in an overall decline in output. Therefore, it is important that you do not overburden your employees with multiple simultaneous tasks, which only increases their work in progress. In order to fight the problem of multitasking, you need to reduce your cycle times by having fewer tasks in progress at one time. Consider setting a WIP limit inside your workflow by introducing limits for daily and weekly tasks. This helps to keep employee tasks under control and reduces their burden.

Step 5: Progress Visualization

When you have everything set up in your workflow, it is time to present that data to current and potential stakeholders. You need to make it clear which features are completed, which ones you are currently working on, and whether you will be releasing the product on time. A good way to present data to senior management is through visualizations, and you can use tools like Jira or Trello to make your data shine even more.
In terms of data representation, you can use various free online tools or buy software like Microsoft PowerPoint or Excel. Whatever tools you use, your end goal should be to make the information as simple as possible for the stakeholders, avoiding clutter and too much technical information. However, these are not the only methods you can use. Look around your company and see where your current processes are lacking, take note of those gaps, and research how you can change them for the better.

Author Bio

Shawn Mike has been providing ghostwriting and copywriting services to challenging clients for over five years. His educational background in the technical field and business studies has given him the edge to write on many topics. He occasionally writes blogs for Dynamologic Solutions.

Microsoft Store updates its app developer agreement, to give developers up to 95% of app revenue
React Native Vs Ionic: Which one is the better mobile app development framework?
9 reasons to choose Agile Methodology for Mobile App Development

“Is it actually possible to have a free and fair election ever again?,” Pulitzer finalist, Carole Cadwalladr on Facebook’s role in Brexit

Bhagyashree R
18 Apr 2019
6 min read
On Monday, Carole Cadwalladr, a British journalist and Pulitzer award finalist, revealed in her TED talk how Facebook impacted the Brexit vote by enabling the spread of calculated disinformation. Brexit, short for "British exit", refers to the UK's withdrawal from the European Union (EU). Back in June 2016, when the United Kingdom European Union membership referendum happened, 51.9% of voters supported leaving the EU. The withdrawal was originally set for 29 March 2019, but it has now been extended to 31 October 2019.

Cadwalladr was asked by the editor of The Observer, the newspaper she was working for at the time, to visit South Wales to investigate why so many voters there had elected to leave the EU. So, she decided to visit Ebbw Vale, a town at the head of the valley formed by the Ebbw Fawr tributary of the Ebbw River in Wales. She wanted to find out why this town had the highest percentage of 'Leave' votes (62%).

Brexit in South Wales: The reel and the real

After reaching the town, Cadwalladr recalls, she was "taken aback" when she saw how the town had evolved over the years. It was gleaming with new infrastructure, including an entrepreneurship center, a sports center, better roads, and more, all funded by the EU. After seeing this development, she felt "a weird sense of unreality" when a young man stated that his reason for voting to leave the EU was that it had failed to do anything for him. It wasn't only this young man; people all over the town stated the same reason for voting to leave the EU. "They said that they wanted to take back control," adds Cadwalladr.

Another major reason behind Brexit was immigration. However, Cadwalladr adds that she barely saw any immigrants and was unable to relate to the immigration problem the citizens of the town were talking about. So, she checked her observation against the actual records and was surprised to find that Ebbw Vale, in fact, has one of the lowest immigration rates. "So I was just a bit baffled because I couldn't really understand where people were getting their information from," she adds. After her story was published, a reader reached out to her regarding some Facebook posts and ads, which the reader described as "quite scary stuff about immigration, and especially about Turkey." These posts were misinforming people that Turkey was going to join the EU and that its 76 million people would promptly emigrate to current member states.

"What happens on Facebook, stays on Facebook"

After being informed about these ads, Cadwalladr checked Facebook to look for herself, but she could not find even a trace of them, because there is no archive of the ads that are shown to people on Facebook. She said, "This referendum that will have this profound effect on Britain forever and it already had a profound effect. The Japanese car manufacturers that came to Wales and the North-East people who replaced the mining jobs are already going because of Brexit. And, this entire referendum took place in darkness because it took place on Facebook."

This is why the British parliament has called on Mark Zuckerberg several times to answer its questions, but each time he has refused. Nobody but Facebook has a definitive answer to questions like what ads were shown to people, how these ads impacted them, how much money was spent on them, or what data was analyzed to target people. Cadwalladr adds that she and other journalists observed that multiple crimes were committed during the referendum.
In Britain, there is a limit on the amount you are allowed to spend on election campaigns, to prevent politicians from buying votes. But in the last few days before the Brexit vote, the "biggest electoral fraud in Britain" happened. It was found that the official Vote Leave campaign laundered £750,000 through another campaign entity, in a scheme that was ruled illegal by the Electoral Commission. This money was spent, as you can guess, on online disinformation campaigns. She adds, "And you can spend any amount of money on Facebook or on Google or on YouTube ads and nobody will know, because they're black boxes. And this is what happened."

The law was also broken by a group named "Leave.EU". This group was led by Nigel Farage, a British politician whose Brexit Party is doing quite well in the European elections. The campaign was funded by Arron Banks, who is being referred to the National Crime Agency because the Electoral Commission was not able to figure out where the money he provided came from. Going further into the details, she adds, "And I'm not even going to go into the lies that Arron Banks has told about his covert relationship with the Russian government. Or the weird timing of Nigel Farage's meetings with Julian Assange and with Trump's buddy, Roger Stone, now indicted, immediately before two massive WikiLeaks dumps, both of which happened to benefit Donald Trump."

While looking into Trump's relationship to Farage, she came across Cambridge Analytica. She tracked down one of its ex-employees, Christopher Wylie, who was brave enough to reveal that the company had worked for Trump and Brexit. It used Facebook data from 87 million people to understand their individual fears and better target them with Facebook ads. Cadwalladr's investigation involved so many big names that threats were to be expected. The owner of Cambridge Analytica, Robert Mercer, threatened to sue them multiple times. Later, one day ahead of publishing, they received a legal threat from Facebook. But this did not stop them from publishing their findings in the Observer.

A challenge to the "gods of Silicon Valley"

Addressing the leaders of the tech giants, Cadwalladr said, "Facebook, you were on the wrong side of history in that. And you were on the wrong side of history in this -- in refusing to give us the answers that we need. And that is why I am here. To address you directly, the gods of Silicon Valley: Mark Zuckerberg and Sheryl Sandberg and Larry Page and Sergey Brin and Jack Dorsey, and your employees and your investors, too."

These tech giants can't get away with just saying that they will do better in the future. They first need to give us the long-overdue answers so that these types of crimes are stopped from happening again. Comparing the technology they created to a crime scene, she calls for fixing the broken laws. "It's about whether it's actually possible to have a free and fair election ever again. Because as it stands, I don't think it is," she adds. To watch her full talk, visit TED.com.

Facebook shareholders back a proposal to oust Mark Zuckerberg as the board's chairperson
Facebook AI introduces Aroma, a new code recommendation tool for developers
Facebook AI open-sources PyTorch-BigGraph for faster embeddings in large graphs

Building a scalable PostgreSQL solution 

Natasha Mathur
14 Apr 2019
12 min read
The term scalability means the ability of a software system to grow as the business using it grows. PostgreSQL provides some features that help you build a scalable solution but, strictly speaking, PostgreSQL itself is not scalable. It can effectively utilize the following resources of a single machine:

It uses multiple CPU cores to execute a single query faster with the parallel query feature
When configured properly, it can use all available memory for caching
The size of the database is not limited; PostgreSQL can utilize multiple hard disks when multiple tablespaces are created; with partitioning, the hard disks can be accessed simultaneously, which makes data processing faster

However, when it comes to spreading a database solution across multiple machines, it can be quite problematic because a standard PostgreSQL server can only run on a single machine. In this article, we will look at different scaling scenarios and their implementation in PostgreSQL. The requirement for a system to be scalable means that a system that supports a business now should also be able to support the same business with the same quality of service as it grows. This article is an excerpt taken from the book 'Learning PostgreSQL 11 - Third Edition' written by Andrey Volkov and Salahadin Juba. The book explores the concepts of relational databases and their core principles. You'll get to grips with using data warehousing in analytical solutions and reports and scaling the database for high availability and performance.

Let's say a database can store 1 GB of data and effectively process 100 queries per second. What if, with the development of the business, the amount of data being processed grows 100 times? Will it be able to support 10,000 queries per second and process 100 GB of data? Maybe not now, and not in the same installation. However, a scalable solution should be ready to be expanded to handle the load as soon as it is needed. In scenarios where better performance is required, it is quite common to set up more servers that handle additional load and copy the same data to them from a master server. In scenarios where high availability is required, it is also typical to continuously copy the data to a standby server so that it can take over in case the master server crashes.

Scalable PostgreSQL solution

Replication can be used in many scaling scenarios. Its primary purpose is to create and maintain a backup database in case of system failure. This is especially true for physical replication. However, replication can also be used to improve the performance of a solution based on PostgreSQL. Sometimes, third-party tools can be used to implement complex scaling scenarios.

Scaling for heavy querying

Imagine there's a system that's supposed to handle a lot of read requests. For example, there could be an application that implements an HTTP API endpoint that supports auto-completion functionality on a website. Each time a user enters a character in a web form, the system searches the database for objects whose name starts with the string the user has entered. The number of queries can be very big because of the large number of users, and also because several requests are processed for every user session. To handle large numbers of requests, the database should be able to utilize multiple CPU cores. In case the number of simultaneous requests is really large, the number of cores required to process them can be greater than a single machine could have.
The same applies to a system that is supposed to handle multiple heavy queries at the same time. You don't need a lot of queries, but when the queries themselves are big, using as many CPUs as possible offers a performance benefit—especially when parallel query execution is used. In such scenarios, where one database cannot handle the load, it's possible to set up multiple databases, set up replication from one master database to all of them, making each of them work as a hot standby, and then let the application query different databases for different requests. The application itself can be smart and query a different database each time, but that would require a special implementation of the data-access component of the application.

Another option is to use a tool called Pgpool-II, which can work as a load balancer in front of several PostgreSQL databases. The tool exposes a SQL interface, and applications can connect to it as if it were a real PostgreSQL server. Pgpool-II will then redirect each query to the database that is executing the fewest queries at that moment; in other words, it will perform load balancing.

Yet another option is to scale the application together with the databases so that one instance of the application connects to one instance of the database. In that case, the users of the application should connect to one of the many instances. This can be achieved with HTTP load balancing.

Data sharding

When the problem is not the number of concurrent queries but the size of the database and the speed of a single query, a different approach can be implemented. The data can be separated onto several servers, which will be queried in parallel, and the results of the queries will then be consolidated outside of those databases. This is called data sharding. PostgreSQL provides a way to implement sharding based on table partitioning, where partitions are located on different servers and another server, the master, uses them as foreign tables. When performing a query on a parent table defined on the master server, depending on the WHERE clause and the definitions of the partitions, PostgreSQL can recognize which partitions contain the data that is requested and will query only those partitions. Depending on the query, joins, grouping, and aggregation can sometimes be performed on the remote servers. PostgreSQL can query different partitions in parallel, which effectively utilizes the resources of several machines.

With all of this, it's possible to build a solution where applications connect to a single database that physically executes their queries on different database servers, depending on the data that is being queried. It's also possible to build sharding algorithms into the applications that use PostgreSQL. In short, applications would be expected to know what data is located in which database, write it only there, and read it only from there. This would add a lot of complexity to the applications.

Another option is to use one of the PostgreSQL-based sharding solutions available on the market, or one of the open source solutions. They have their own pros and cons, but the common problem is that they are based on previous releases of PostgreSQL and don't use the most recent features (sometimes providing their own features instead). One of the most popular sharding solutions is Postgres-XL, which implements a shared-nothing architecture using multiple servers running PostgreSQL.
The system has several components:

Multiple data nodes: store the data
A single global transaction monitor (GTM): manages the cluster and provides global transaction consistency
Multiple coordinator nodes: support user connections, build query-execution plans, and interact with the GTM and the data nodes

Postgres-XL implements the same API as PostgreSQL; therefore, applications don't need to treat the server in any special way. It is ACID-compliant, meaning it supports transactions and integrity constraints. The COPY command is also supported. The main benefits of using Postgres-XL are as follows:

It can scale to support more reading operations by adding more data nodes
It can scale to support more writing operations by adding more coordinator nodes
The current release of Postgres-XL (at the time of writing) is based on PostgreSQL 10, which is relatively new

The main downside of Postgres-XL is that it does not provide any high-availability features out of the box. When more servers are added to a cluster, the probability of the failure of any of them increases. That's why you should take care with backups or implement replication of the data nodes themselves. Postgres-XL is open source, but commercial support is available.

Another solution worth mentioning is Greenplum. It's positioned as an implementation of a massively parallel-processing database, specifically designed for data warehouses. It has the following components:

Master node: manages user connections, builds query execution plans, manages transactions
Data nodes: store the data and perform queries

Greenplum also implements the PostgreSQL API, and applications can connect to a Greenplum database without any changes. It supports transactions, but support for integrity constraints is limited. The COPY command is supported. The main benefits of Greenplum are as follows:

It can scale to support more reading operations by adding more data nodes.
It supports column-oriented table organization, which can be useful for data-warehousing solutions.
Data compression is supported.
High-availability features are supported out of the box. It's possible (and recommended) to add a secondary master that takes over if the primary master crashes. It's also possible to add mirrors to the data nodes to prevent data loss.

The drawbacks are as follows:

It doesn't scale to support more writing operations. Everything goes through the single master node, and adding more data nodes does not make writing faster. However, it's possible to import data from files directly on the data nodes.
It uses PostgreSQL 8.4 at its core. Greenplum has a lot of improvements and new features added to the base PostgreSQL code, but it's still based on a very old release; however, the system is being actively developed.
Greenplum doesn't support foreign keys, and support for unique constraints is limited.

There are commercial and open source editions of Greenplum.

Scaling for a large number of connections

Yet another use case related to scalability is when the number of database connections is very large. When a single database is used in an environment with a lot of microservices and each has its own connection pool, even if they don't perform too many queries, it's possible that hundreds or even thousands of connections are opened in the database. Each connection consumes server resources, and just the requirement to handle a large number of connections can already be a problem, without even performing any queries. A minimal sketch of such a per-service connection pool follows.
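As an illustration of the setup just described (many services each holding their own pool of database connections), here is a minimal sketch of an application-side pool. It assumes a Node.js/TypeScript service using the node-postgres (pg) library; the host, database, and table names are placeholders.

```typescript
import { Pool } from "pg";

// Each microservice typically holds a bounded pool like this one. With many
// services, even modest per-service limits add up to a large number of open
// connections on the PostgreSQL server.
const pool = new Pool({
  host: "db.example.internal",   // placeholder host
  port: 5432,
  database: "appdb",
  user: "app_user",
  password: process.env.PGPASSWORD,
  max: 10,                       // this service holds at most 10 connections
  idleTimeoutMillis: 30_000,     // close idle connections after 30 seconds
});

export async function countOrders(): Promise<number> {
  // The pool hands out an idle connection or opens a new one, up to `max`.
  const result = await pool.query("SELECT count(*) AS n FROM orders");
  return Number(result.rows[0].n);
}
```

Ten such services with a limit of 10 connections each can already keep a hundred server connections open, which is exactly the situation where a server-side pooler such as PgBouncer, discussed next, becomes useful.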
If applications don't use connection pooling and open connections only when they need to query the database, closing them afterwards, another problem occurs: establishing a database connection takes time—not too much, but when the number of operations is large, the total overhead becomes significant. There is a tool named PgBouncer that implements connection-pool functionality. It can accept connections from many applications as if it were a PostgreSQL server and then open a limited number of connections towards the database, reusing the same database connections for multiple application connections. The process of establishing a connection from an application to PgBouncer is much faster than connecting to a real database because PgBouncer doesn't need to initialize a database backend process for the session.

PgBouncer can create multiple connection pools that work in one of three modes:

Session mode: A connection to a PostgreSQL server is used for the lifetime of a client connection to PgBouncer. Such a setup can be used to speed up the connection process on the application side. This is the default mode.
Transaction mode: A connection to PostgreSQL is used for a single transaction that a client performs. This can be used to reduce the number of connections on the PostgreSQL side when only a few transactions are performed simultaneously.
Statement mode: A database connection is used for a single statement. Then it is returned to the pool, and a different connection is used for the next statement. This mode is similar to transaction mode, though more aggressive. Note that multi-statement transactions are not possible when statement mode is used.

Different pools can be set up to work in different modes. It's also possible to let PgBouncer connect to multiple PostgreSQL servers, thus working as a reverse proxy. In a typical setup, PgBouncer establishes several connections to the database. When an application connects to PgBouncer and starts a transaction, PgBouncer assigns an existing database connection to that application, forwards all SQL commands to the database, and delivers the results back. When the transaction is finished, PgBouncer dissociates the connections, but does not close them. If another application starts a transaction, the same database connection can be used. Such a setup requires configuring PgBouncer to work in transaction mode.

PostgreSQL provides several ways to implement replication that maintain a copy of the data from a database on another server or servers. This can be used as a backup or a standby solution that takes over in case the main server crashes. Replication can also be used to improve the performance of a software system by making it possible to distribute the load over several database servers.

In this article, we discussed the problem of building scalable solutions based on PostgreSQL utilizing the resources of several servers. We looked at scaling for querying, data sharding, as well as scaling for a large number of connections. If you enjoyed reading this article and want to explore other topics, be sure to check out the book 'Learning PostgreSQL 11 - Third Edition'.

Handling backup and recovery in PostgreSQL 10 [Tutorial]
Understanding SQL Server recovery models to effectively backup and restore your database
Saving backups on cloud services with ElasticSearch plugins

A five-level learning roadmap for Functional Programmers

Sugandha Lahoti
12 Apr 2019
4 min read
The following guide serves as an excellent learning roadmap for functional programming. It can be used to track your level of knowledge regarding functional programming. This guide was developed for the Fantasyland Institute of Learning for the LambdaConf conference. It was designed for statically-typed functional programming languages that implement category theory. This post is extracted from the book Hands-On Functional Programming with TypeScript by Remo H. Jansen. In this book, you will understand the pros, cons, and core principles of functional programming in TypeScript. This roadmap describes five levels of difficulty: Beginner, Advanced Beginner, Intermediate, Proficient, and Expert. Languages such as Haskell support category theory natively, but we can take advantage of category theory in TypeScript by implementing it or using some third-party libraries. Not all the items in the list are 100% applicable to TypeScript due to language differences, but most of them are.

Beginner

To reach the beginner level, you will need to master the following concepts and skills.

Concepts:
Immutable data
Second-order functions
Constructing and destructuring
Function composition
First-class functions and lambdas

Skills:
Use second-order functions (map, filter, fold) on immutable data structures
Destructure values to access their components
Use data types to represent optionality
Read basic type signatures
Pass lambdas to second-order functions

(A short TypeScript sketch of a few of these beginner-level ideas appears at the end of this article.)

Advanced beginner

To reach the advanced beginner level, you will need to master the following concepts and skills.

Concepts:
Algebraic data types
Pattern matching
Parametric polymorphism
General recursion
Type classes, instances, and laws
Lower-order abstractions (equal, semigroup, monoid, and so on)
Referential transparency and totality
Higher-order functions
Partial application, currying, and point-free style

Skills:
Solve problems without nulls, exceptions, or type casts
Process and transform recursive data structures using recursion
Able to use functional programming in the small
Write basic monadic code for a concrete monad
Create type class instances for custom data types
Model a business domain with abstract data types (ADTs)
Write functions that take and return functions
Reliably identify and isolate pure code from impure code
Avoid introducing unnecessary lambdas and named parameters

Intermediate

To reach the intermediate level, you will need to master the following concepts and skills.

Concepts:
Generalized algebraic data types
Higher-kinded types
Rank-N types
Folds and unfolds
Higher-order abstractions (category, functor, monad)
Basic optics
Existential types
Embedded DSLs using combinators

Skills:
Implement efficient persistent data structures
Able to implement large functional programming applications
Test code using generators and properties
Write imperative code in a purely functional way through monads
Use popular purely functional libraries to solve business problems
Separate decision from effects
Write a simple custom lawful monad
Write production medium-sized projects
Use lenses and prisms to manipulate data
Simplify types by hiding irrelevant data with existentials

Proficient

To reach the proficient level, you will need to master the following concepts and skills.

Concepts:
Codata
(Co)recursion schemes
Advanced optics
Dual abstractions (comonad)
Monad transformers
Free monads and extensible effects
Functional architecture
Advanced functors (exponential, profunctors, contravariant)
Embedded domain-specific languages (DSLs) using generalized algebraic data types (GADTs)
Advanced monads (continuation, logic)
Type families, functional dependencies (FDs)

Skills:
Design a minimally powerful monad transformer stack
Write concurrent and streaming programs
Use purely functional mocking in tests
Use type classes to modularly model different effects
Recognize type patterns and abstract over them
Use functional libraries in novel ways
Use optics to manipulate state
Write custom lawful monad transformers
Use free monads/extensible effects to separate concerns
Encode invariants at the type level
Effectively use FDs/type families to create safer code

Expert

To reach the expert level, you will need to master the following concepts and skills.

Concepts:
High performance
Kind polymorphism
Generic programming
Type-level programming
Dependent types, singleton types
Category theory
Graph reduction
Higher-order abstract syntax
Compiler design for functional languages
Profunctor optics

Skills:
Design a generic, lawful library with broad appeal
Prove properties manually using equational reasoning
Design and implement a new functional programming language
Create novel abstractions with laws
Write distributed systems with certain guarantees
Use proof systems to formally prove properties of code
Create libraries that do not permit invalid states
Use dependent typing to prove more properties at compile time
Understand deep relationships between different concepts
Profile, debug, and optimize purely functional code with minimal sacrifices

Summary

This guide should be a good resource for your future functional-programming learning efforts. Read more on this in our book Hands-On Functional Programming with TypeScript.

What makes functional programming a viable choice for artificial intelligence projects?
Why functional programming in Python matters: Interview with best selling author, Steven Lott
Introducing Coconut for making functional programming in Python simpler
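As promised above, here is a minimal, illustrative TypeScript sketch of a few beginner-level items from the roadmap: immutable data, second-order functions, function composition, and a data type that represents optionality. It is a sketch written for this summary, not an excerpt from the book.

```typescript
// Immutable data: a readonly array of readonly records.
type Order = Readonly<{ id: number; total: number }>;
const orders: ReadonlyArray<Order> = [
  { id: 1, total: 40 },
  { id: 2, total: 125 },
  { id: 3, total: 9 },
];

// Second-order functions: map, filter, and reduce (a fold) take lambdas.
const bigTotals = orders.filter(o => o.total > 10).map(o => o.total);
const sum = bigTotals.reduce((acc, t) => acc + t, 0); // 165

// Function composition: build a pipeline out of small pure functions.
const compose = <A, B, C>(f: (b: B) => C, g: (a: A) => B) => (a: A): C => f(g(a));
const double = (n: number) => n * 2;
const increment = (n: number) => n + 1;
const doubleThenIncrement = compose(increment, double);
console.log(doubleThenIncrement(5)); // 11

// Using a data type to represent optionality instead of null.
type Option<T> = { kind: "some"; value: T } | { kind: "none" };
const findOrder = (id: number): Option<Order> => {
  const found = orders.find(o => o.id === id);
  return found ? { kind: "some", value: found } : { kind: "none" };
};
const order = findOrder(2);
if (order.kind === "some") {
  console.log(order.value.total); // 125
}
```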

Why should your e-commerce site opt for Headless Magento 2?

Guest Contributor
09 Apr 2019
5 min read
Over the past few years, headless e-commerce has been much talked about, often being touted as the 'future of e-commerce'. Last year, in a joint webinar conducted by Magento on ten B2B eCommerce trends for 2018, BORN and Magento predicted that headless CMS commerce would become a popular type of website architecture in the coming year and beyond, for both B2B and B2C businesses. Headless Magento is one such headless CMS that is rapidly gaining popularity. Those who have recently jumped on the Magento bandwagon might find it new and hard to grasp. So, in this post we will give a brief overview of what headless Magento is, its benefits, and why you should opt for headless Magento 2.

What is a Headless Browser?

Headless browsers are browsers that run without a graphical user interface and are driven by software. You can automate various actions on your website and monitor its performance under different circumstances. When you work with a headless browser through command-line instructions, there is no GUI. With the help of a headless browser, one can inspect several things, such as the dimensions of a web layout, the font family, and other design elements used on a particular website. Headless browsers are mainly used to test web pages.

Earlier, e-commerce platforms like Magento and Shopify used to have their back end and front end tightly integrated. After the introduction of headless architecture, the front end was separated from the back end, and as a result both parts work independently. There are various e-commerce platforms that support a headless approach, such as Magento, BigCommerce, Shopify, and many more. With Shopify, if you already have a website, you can take the headless approach and use Shopify for your sales with links to it from your main website. Like Shopify, BigCommerce also offers a good range of themes and templates to make sure stores look professional and get up and running fast. The platform incorporates a full-featured CMS that allows you to run an entire website, not just your online store.

Going headless with browsers means that you are running the output in a non-graphical environment, such as a Linux terminal, without X Windows or Wayland. For Google's search algorithms, headless browsers play quite a crucial role. The search engine strongly recommends this kind of architecture because it helps Google handle Ajax websites. Websites that integrate a headless browser on the web server are easier for the search engine to access, because the Ajax-driven content can be rendered on the server before it is made available for search engine rendering.

What are the benefits of using a Headless browser?

Going headless has a variety of benefits. If you are a web designer, the HTML markup is simpler to understand, because PHP code, complex JavaScript, and widgets are no longer in use – just plain HTML with some additional placeholder syntax is required. This also means that the HTML page can be served statically, bringing the application load time down by a significant amount. From the e-commerce perspective, it acts as a viable channel for sales at a time when a majority of traffic comes from mobile. With the advent of disruptive technologies such as the headless browser for Magento, the path to purchase has expanded: today it not only includes mobile traffic but features a complex matrix of buyer touchpoints.

Why use a headless browser for Magento?

Opting for Magento with a headless browser has benefits of its own. Before going through them, the sketch below gives a rough idea of how a decoupled storefront talks to Magento over its REST API.
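The following is a minimal, illustrative TypeScript sketch of a decoupled storefront fetching a product from Magento 2's REST API. The base URL, the access token, and the SKU are placeholders, and the exact endpoint and response fields should be verified against your own Magento installation.

```typescript
// Illustrative only: a decoupled front end calling Magento 2's REST API.
// BASE_URL, the token, and the SKU below are placeholders.
const BASE_URL = "https://shop.example.com";
const ACCESS_TOKEN = process.env.MAGENTO_TOKEN; // e.g. an integration token

interface Product {
  sku: string;
  name: string;
  price: number;
}

async function fetchProduct(sku: string): Promise<Product> {
  const response = await fetch(
    `${BASE_URL}/rest/V1/products/${encodeURIComponent(sku)}`,
    { headers: { Authorization: `Bearer ${ACCESS_TOKEN}` } }
  );
  if (!response.ok) {
    throw new Error(`Magento API returned ${response.status}`);
  }
  return (await response.json()) as Product;
}

// Any front-end framework (React, Vue, and so on) can render the result.
fetchProduct("24-MB01").then(p => console.log(`${p.name}: $${p.price}`));
```

Because the storefront only talks to REST endpoints like this one, the same Magento back end can serve a web app, a mobile app, or any other touchpoint.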
In Magento, the JavaScript-coded parts are loosely coupled, and the flexibility widens when it comes to choosing a front-end framework such as AngularJS, Vue.js, or others. In other words, one does not need to be a Magento developer to build on Magento; all you have to do is focus on the REST API. For instance, even if a single page loads 15 different resources from different URLs, the HTML document itself remains lean and optimal. Magento is a flexible framework that can be used to implement your own logic, such as pricing, logins, checkout, and so on.

With a headless browser, Magento gets a clear performance boost: all static parts of the pages are loaded quickly, and the dynamic parts of the pages are loaded lazily through Ajax. In Magento 2, you have the additional support of the full-page cache. It may even interest you to know that Magento 2 offers Private Content, which is equipped to handle lazy loading more efficiently.

Isn't Magento 2 headless already?

One of the common misconceptions I have come across is people believing that Magento 2 is already headless. For Magento to be headless, a JavaScript developer needs to know KnockoutJS ViewModels to make use of the Magento logic. If the ViewModels do not suffice, a Magento developer has to add backend logic to this JavaScript layer. The same applies to going 100% headless: whenever a REST resource is not available, developers must build it so that Magento 2 remains headless.

The debate over whether or not to go headless will continue, because an HTML document not only contains the templating part but also static content that shouldn't be changed. For me, a headless approach is best suited to businesses that have a CMS website alongside a B2C or B2B storefront. It is also popular among those who are looking to put their JavaScript teams back to work.

Author Bio

Olivia Diaz works at eTatvaSoft. Being a tech geek, she keeps a close watch on the industry, focusing on the latest technology news and gadgets.