
Tech Guides - Data


“Is it actually possible to have a free and fair election ever again?” Pulitzer finalist Carole Cadwalladr on Facebook’s role in Brexit

Bhagyashree R
18 Apr 2019
6 min read
On Monday, Carole Cadwalladr, a British journalist and Pulitzer Prize finalist, revealed in her TED talk how Facebook impacted the Brexit vote by enabling the spread of calculated disinformation. Brexit, short for “British exit”, refers to the UK’s withdrawal from the European Union (EU). Back in June 2016, when the United Kingdom European Union membership referendum was held, 51.9% of voters supported leaving the EU. The withdrawal was originally scheduled for 29 March 2019, but has now been extended to 31 October 2019.

Cadwalladr was asked by the editor of The Observer, the newspaper she was working for at the time, to visit South Wales to investigate why so many voters there had chosen to leave the EU. She decided to visit Ebbw Vale, a town at the head of the valley formed by the Ebbw Fawr tributary of the Ebbw River in Wales, to find out why it had the highest percentage of ‘Leave’ votes (62%).

Brexit in South Wales: The reel and the real

After reaching the town, Cadwalladr recalls that she was “taken aback” by how it had evolved over the years. The town was gleaming with new infrastructure, including an entrepreneurship center, a sports center, and better roads, all funded by the EU. After seeing this development, she felt “a weird sense of unreality” when a young man said his reason for voting to leave the EU was that it had failed to do anything for him. Not only this young man but people all over the town gave the same reason for voting to leave. “They said that they wanted to take back control,” adds Cadwalladr.

Another major reason cited for Brexit was immigration. However, Cadwalladr says she barely saw any immigrants and could not relate to the immigration problem the town’s residents described. So she checked her observation against the actual records and was surprised to find that Ebbw Vale in fact has one of the lowest immigration rates.
“So I was just a bit baffled because I couldn’t really understand where people were getting their information from,” she adds. After her story was published, a reader reached out to her about some Facebook posts and ads, which the reader described as “quite scary stuff about immigration, and especially about Turkey.” These posts misinformed people that Turkey was going to join the EU and that its 76 million citizens would promptly emigrate to the current member states.

“What happens on Facebook, stays on Facebook”

When Cadwalladr checked Facebook to see these ads for herself, she could not find a trace of them, because Facebook keeps no archive of the ads it shows to people. She said, “This referendum that will have this profound effect on Britain forever and it already had a profound effect. The Japanese car manufacturers that came to Wales and the North-East people who replaced the mining jobs are already going because of Brexit. And, this entire referendum took place in darkness because it took place on Facebook.”

This is why the British parliament has called on Mark Zuckerberg several times to answer its questions, but each time he has refused. Nobody but Facebook has definitive answers to questions like what ads were shown to people, how those ads influenced them, how much money was spent on them, or what data was analyzed to target these people.

Cadwalladr adds that she and other journalists observed that multiple crimes were committed during the referendum. In Britain, there is a limit on how much you are allowed to spend on election campaigns, to prevent politicians from buying votes. But in the last few days before the Brexit vote, the “biggest electoral fraud in Britain” happened: the official Vote Leave campaign laundered £750,000 through another campaign entity, spending that the electoral commission ruled illegal.
This money was spent, as you can guess, on online disinformation campaigns. She adds, “And you can spend any amount of money on Facebook or on Google or on YouTube ads and nobody will know, because they're black boxes. And this is what happened.”

The law was also broken by a group named “Leave.EU”, led by Nigel Farage, a British politician whose Brexit Party is doing quite well in the European elections. That campaign was funded by Arron Banks, who has been referred to the National Crime Agency because the electoral commission could not establish where his money came from. Going further into the details, she adds, “And I'm not even going to go into the lies that Arron Banks has told about his covert relationship with the Russian government. Or the weird timing of Nigel Farage's meetings with Julian Assange and with Trump's buddy, Roger Stone, now indicted, immediately before two massive WikiLeaks dumps, both of which happened to benefit Donald Trump.”

While looking into Trump’s relationship with Farage, she came across Cambridge Analytica. She tracked down one of its ex-employees, Christopher Wylie, who was brave enough to reveal that the company had worked for both Trump and Brexit. It used the Facebook data of 87 million people to understand their individual fears and better target them with Facebook ads.

With so many big names involved in Cadwalladr’s investigation, threats were to be expected. Robert Mercer, an owner of Cambridge Analytica, threatened to sue them multiple times, and one day before publication they received a legal threat from Facebook. But this did not stop them from publishing their findings in the Observer.

A challenge to the “gods of Silicon Valley”

Addressing the leaders of the tech giants, Cadwalladr said, “Facebook, you were on the wrong side of history in that. And you were on the wrong side of history in this -- in refusing to give us the answers that we need.”
“And that is why I am here. To address you directly, the gods of Silicon Valley: Mark Zuckerberg and Sheryl Sandberg and Larry Page and Sergey Brin and Jack Dorsey, and your employees and your investors, too.”

These tech giants cannot get away with simply saying that they will do better in the future. They first need to give us the long-overdue answers, so that these kinds of crimes can be stopped from happening again. Comparing the technology they created to a crime scene, she now calls for fixing the broken laws. “It's about whether it's actually possible to have a free and fair election ever again. Because as it stands, I don't think it is,” she adds.

To watch her full talk, visit TED.com.

Facebook shareholders back a proposal to oust Mark Zuckerberg as the board’s chairperson
Facebook AI introduces Aroma, a new code recommendation tool for developers
Facebook AI open-sources PyTorch-BigGraph for faster embeddings in large graphs


Building a scalable PostgreSQL solution 

Natasha Mathur
14 Apr 2019
12 min read
Scalability means the ability of a software system to grow as the business using it grows. PostgreSQL provides some features that help you build a scalable solution but, strictly speaking, PostgreSQL itself is not scalable. It can, however, effectively utilize the resources of a single machine:

- It uses multiple CPU cores to execute a single query faster, with the parallel query feature.
- When configured properly, it can use all available memory for caching.
- The size of the database is not limited; PostgreSQL can utilize multiple hard disks when multiple tablespaces are created, and with partitioning the hard disks can be accessed simultaneously, which makes data processing faster.

However, when it comes to spreading a database solution across multiple machines, things become problematic, because a standard PostgreSQL server can only run on a single machine. In this article, we will look at different scaling scenarios and their implementation in PostgreSQL. The requirement for a system to be scalable means that a system that supports a business now should also be able to support the same business, with the same quality of service, as it grows.

This article is an excerpt from the book 'Learning PostgreSQL 11 - Third Edition' written by Andrey Volkov and Salahadin Juba. The book explores the concepts of relational databases and their core principles. You’ll get to grips with using data warehousing in analytical solutions and reports, and scaling the database for high availability and performance.

Let's say a database can store 1 GB of data and effectively process 100 queries per second. What if, with the development of the business, the amount of data being processed grows 100 times? Will it be able to support 10,000 queries per second and process 100 GB of data? Maybe not now, and not in the same installation. However, a scalable solution should be ready to be expanded to handle the load as soon as it is needed.
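To make the single-machine story concrete, here is a hedged sketch of how those resources are typically enabled; all values, paths, and table names are illustrative placeholders, not tuning recommendations:

```sql
-- Illustrative only: enable parallel query and generous caching
ALTER SYSTEM SET max_parallel_workers_per_gather = 4;
ALTER SYSTEM SET shared_buffers = '8GB';   -- takes effect after restart

-- Spread data over several disks with tablespaces (path is hypothetical)
CREATE TABLESPACE fast_disk LOCATION '/mnt/disk2/pgdata';

-- A partitioned table whose partitions can live on different tablespaces,
-- so they can be scanned from different disks simultaneously
CREATE TABLE measurements (
    ts    timestamptz NOT NULL,
    value numeric
) PARTITION BY RANGE (ts);

CREATE TABLE measurements_2019 PARTITION OF measurements
    FOR VALUES FROM ('2019-01-01') TO ('2020-01-01')
    TABLESPACE fast_disk;
```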
In scenarios where better performance is required, it is quite common to set up more servers to handle additional load, copying the same data to them from a master server. In scenarios where high availability is required, a typical solution is likewise to continuously copy the data to a standby server so that it can take over if the master server crashes.

Scalable PostgreSQL solution

Replication can be used in many scaling scenarios. Its primary purpose is to create and maintain a backup database in case of system failure. This is especially true for physical replication. However, replication can also be used to improve the performance of a solution based on PostgreSQL. Sometimes, third-party tools can be used to implement complex scaling scenarios.

Scaling for heavy querying

Imagine there's a system that's supposed to handle a lot of read requests. For example, there could be an application that implements an HTTP API endpoint supporting auto-completion on a website. Each time a user enters a character in a web form, the system searches the database for objects whose names start with the string the user has entered. The number of queries can be very big because of the large number of users, and because several requests are processed for every user session. To handle large numbers of requests, the database should be able to utilize multiple CPU cores. If the number of simultaneous requests is really large, the number of cores required to process them can be greater than a single machine could have.

The same applies to a system that is supposed to handle multiple heavy queries at the same time. There don't have to be many queries, but when the queries themselves are big, using as many CPUs as possible offers a performance benefit, especially when parallel query execution is used.
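One way an application can spread such read traffic over hot-standby replicas is simple round-robin routing in its data-access component. The sketch below is illustrative only: the DSN strings are placeholders, and a real application would hand the chosen DSN to a database driver such as psycopg2 rather than printing it.

```python
# Sketch of an application-side read/write split with round-robin
# load balancing over replicas. Hosts and the routing rule are
# hypothetical placeholders, not a production-ready implementation.
import itertools

class ReplicatedDataAccess:
    def __init__(self, primary_dsn, replica_dsns):
        self.primary_dsn = primary_dsn                   # all writes go here
        self._replicas = itertools.cycle(replica_dsns)   # reads rotate

    def dsn_for(self, query):
        """Route writes to the primary, reads to the next replica."""
        is_write = query.lstrip().split(None, 1)[0].upper() in (
            "INSERT", "UPDATE", "DELETE")
        return self.primary_dsn if is_write else next(self._replicas)

dao = ReplicatedDataAccess(
    "host=primary dbname=app",
    ["host=replica1 dbname=app", "host=replica2 dbname=app"])

print(dao.dsn_for("SELECT * FROM items"))          # host=replica1 dbname=app
print(dao.dsn_for("INSERT INTO items VALUES (1)")) # host=primary dbname=app
print(dao.dsn_for("SELECT 1"))                     # host=replica2 dbname=app
```

A real router would also need to account for replication lag: a read issued immediately after a write may not yet see the new row on a replica.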
In such scenarios, where one database cannot handle the load, it's possible to set up multiple databases, set up replication from one master database to all of them, making each of them work as a hot standby, and then let the application query different databases for different requests. The application itself can be smart and query a different database each time, but that requires a special implementation of the data-access component of the application.

Another option is to use a tool called Pgpool-II, which can work as a load balancer in front of several PostgreSQL databases. The tool exposes a SQL interface, and applications can connect to it as if it were a real PostgreSQL server. Pgpool-II then redirects each query to the database that is executing the fewest queries at that moment; in other words, it performs load balancing.

Yet another option is to scale the application together with the databases, so that one instance of the application connects to one instance of the database. In that case, the users of the application should connect to one of the many instances. This can be achieved with HTTP load balancing.

Data sharding

When the problem is not the number of concurrent queries but the size of the database and the speed of a single query, a different approach can be implemented. The data can be separated onto several servers, which are queried in parallel, and the results of the queries are then consolidated outside of those databases. This is called data sharding. PostgreSQL provides a way to implement sharding based on table partitioning, where partitions are located on different servers and another server, the master, uses them as foreign tables.
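A minimal sketch of that arrangement, assuming the postgres_fdw extension; the host, database, user, and table names are hypothetical:

```sql
-- On the master server: declare a remote shard (host is hypothetical)
CREATE EXTENSION postgres_fdw;
CREATE SERVER shard1 FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'shard1.internal', dbname 'app');
CREATE USER MAPPING FOR app_user SERVER shard1
    OPTIONS (user 'app_user', password 'secret');

-- Parent table, partitioned by range
CREATE TABLE events (
    id         bigint NOT NULL,
    created_at timestamptz NOT NULL,
    payload    text
) PARTITION BY RANGE (created_at);

-- One partition is a foreign table whose data lives on the remote server
CREATE FOREIGN TABLE events_2019 PARTITION OF events
    FOR VALUES FROM ('2019-01-01') TO ('2020-01-01')
    SERVER shard1;

-- The WHERE clause lets the planner touch only the relevant shard
SELECT count(*) FROM events
WHERE created_at >= '2019-03-01' AND created_at < '2019-04-01';
```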
When a query is performed on a parent table defined on the master server, depending on the WHERE clause and the definitions of the partitions, PostgreSQL can recognize which partitions contain the requested data and query only those partitions. Depending on the query, joins, grouping, and aggregation can sometimes be performed on the remote servers. PostgreSQL can query different partitions in parallel, which effectively utilizes the resources of several machines. With all this, it's possible to build a solution in which applications connect to a single database that physically executes their queries on different database servers, depending on the data being queried.

It's also possible to build sharding algorithms into the applications that use PostgreSQL. In short, the applications would be expected to know which data is located in which database, write it only there, and read it only from there. This adds a lot of complexity to the applications.

Another option is to use one of the PostgreSQL-based sharding solutions available on the market, or one of the open source solutions. They have their own pros and cons, but a common problem is that they are based on older releases of PostgreSQL and don't use the most recent features (sometimes providing their own features instead).

One of the most popular sharding solutions is Postgres-XL, which implements a shared-nothing architecture using multiple servers running PostgreSQL. The system has several components:

- Multiple data nodes: store the data
- A single global transaction monitor (GTM): manages the cluster and provides global transaction consistency
- Multiple coordinator nodes: support user connections, build query-execution plans, and interact with the GTM and the data nodes

Postgres-XL implements the same API as PostgreSQL, so applications don't need to treat the server in any special way. It is ACID-compliant, meaning it supports transactions and integrity constraints.
The COPY command is also supported. The main benefits of using Postgres-XL are as follows:

- It can scale to support more reading operations by adding more data nodes
- It can scale to support more writing operations by adding more coordinator nodes
- The current release of Postgres-XL (at the time of writing) is based on PostgreSQL 10, which is relatively new

The main downside of Postgres-XL is that it does not provide any high-availability features out of the box. When more servers are added to a cluster, the probability of any one of them failing increases. That's why you should take care of backups or implement replication of the data nodes themselves. Postgres-XL is open source, but commercial support is available.

Another solution worth mentioning is Greenplum. It's positioned as an implementation of a massively parallel-processing database, specifically designed for data warehouses. It has the following components:

- Master node: manages user connections, builds query-execution plans, and manages transactions
- Data nodes: store the data and perform queries

Greenplum also implements the PostgreSQL API, and applications can connect to a Greenplum database without any changes. It supports transactions, but support for integrity constraints is limited. The COPY command is supported. The main benefits of Greenplum are as follows:

- It can scale to support more reading operations by adding more data nodes.
- It supports column-oriented table organization, which can be useful for data-warehousing solutions.
- Data compression is supported.
- High-availability features are supported out of the box. It's possible (and recommended) to add a secondary master that takes over if the primary master crashes. It's also possible to add mirrors to the data nodes to prevent data loss.

The drawbacks are as follows:

- It doesn't scale to support more writing operations. Everything goes through the single master node, and adding more data nodes does not make writing faster.
However, it's possible to import data from files directly on the data nodes. It uses PostgreSQL 8.4 at its core; Greenplum adds a lot of improvements and new features to the base PostgreSQL code, but it is still based on a very old release (the system is, however, being actively developed). Greenplum also doesn't support foreign keys, and its support for unique constraints is limited. There are commercial and open source editions of Greenplum.

Scaling for large numbers of connections

Yet another use case related to scalability is when the number of database connections is very large. When a single database is used in an environment with many microservices, each with its own connection pool, even if they don't perform too many queries, hundreds or even thousands of connections may be open in the database. Each connection consumes server resources, so the mere requirement to handle a great number of connections can already be a problem, even without performing any queries.

If applications don't use connection pooling and instead open connections only when they need to query the database, closing them afterwards, another problem can occur. Establishing a database connection takes time. Not too much, but when the number of operations is great, the total overhead becomes significant.

There is a tool named PgBouncer that implements connection pooling. It can accept connections from many applications as if it were a PostgreSQL server, and then open a limited number of connections towards the database. It reuses the same database connections for multiple applications' connections. The process of establishing a connection from an application to PgBouncer is much faster than connecting to a real database, because PgBouncer doesn't need to initialize a database backend process for the session.
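Concretely, such a setup can be sketched in a minimal pgbouncer.ini; the host names, file paths, and limits below are hypothetical placeholders:

```ini
; pgbouncer.ini -- illustrative values only
[databases]
; clients connect to "app" on PgBouncer; it maps to the real server
app = host=db1.internal port=5432 dbname=app

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction   ; or session / statement, as described below
max_client_conn = 2000    ; connections accepted from applications
default_pool_size = 20    ; connections opened towards PostgreSQL
```

With a configuration like this, thousands of client connections are funneled into a few dozen real database connections.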
PgBouncer can create multiple connection pools, each of which works in one of three modes:

- Session mode: a connection to a PostgreSQL server is used for the lifetime of a client connection to PgBouncer. Such a setup can be used to speed up the connection process on the application side. This is the default mode.
- Transaction mode: a connection to PostgreSQL is used for a single transaction that a client performs. This can be used to reduce the number of connections on the PostgreSQL side when only a few transactions are performed simultaneously.
- Statement mode: a database connection is used for a single statement, then returned to the pool, and a different connection is used for the next statement. This mode is similar to transaction mode, though more aggressive. Note that multi-statement transactions are not possible when statement mode is used.

Different pools can be set up to work in different modes. It's also possible to let PgBouncer connect to multiple PostgreSQL servers, thus working as a reverse proxy.

In a typical setup, PgBouncer establishes several connections to the database. When an application connects to PgBouncer and starts a transaction, PgBouncer assigns an existing database connection to that application, forwards all SQL commands to the database, and delivers the results back. When the transaction is finished, PgBouncer dissociates the connection but does not close it. If another application starts a transaction, the same database connection can be used. Such a setup requires configuring PgBouncer to work in transaction mode.

PostgreSQL provides several ways to implement replication that maintains a copy of the data from a database on another server or servers. This can be used as a backup, or as a standby solution that takes over if the main server crashes.
Replication can also be used to improve the performance of a software system by making it possible to distribute the load over several database servers.

In this article, we discussed the problem of building scalable solutions based on PostgreSQL that utilize the resources of several servers. We looked at scaling for heavy querying, data sharding, and scaling for large numbers of connections. If you enjoyed reading this article and want to explore other topics, be sure to check out the book 'Learning PostgreSQL 11 - Third Edition'.

Handling backup and recovery in PostgreSQL 10 [Tutorial]
Understanding SQL Server recovery models to effectively backup and restore your database
Saving backups on cloud services with ElasticSearch plugins


Open Data Institute: Jacob Ohrvik on digital regulation, internet regulators, and office for responsible technology

Natasha Mathur
02 Apr 2019
6 min read
The Open Data Institute posted a video titled “Regulating for responsible technology – is the UK getting it right?” as part of its ODI Fridays series last week. In the video, Jacob Ohrvik Scott, a researcher at Doteveryone, a UK-based think tank that promotes ideas on responsible technology, talks about the state of digital regulation, the systemic challenges faced by independent regulators, and the need for an Office for Responsible Technology, an independent regulatory body, in the UK. Let’s look at the key takeaways from the video.

Ohrvik started off by talking about responsible tech and the three main factors that fall under it:

- the unintended consequences of its applications
- the kind of value that flows to and from the technology
- the kind of societal context in which it operates

Ohrvik states that many people in the UK have been calling for an internet regulator to carry out various digital-safety-related responsibilities. For instance, the NSPCC (National Society for the Prevention of Cruelty to Children) called for an internet regulator to make sure that children are safe online. Similarly, the Digital, Culture, Media and Sport Committee called for an ethical code of practice for social media platforms and big search engines.

Given that so many people were talking about an independent internet regulatory body, Doteveryone decided to come out with its own set of proposals. It had previously carried out a survey of public attitudes to, and understanding of, digital technologies. In the survey results, one of the main things people emphasized was greater accountability from tech companies. People were also supportive of the idea of an independent internet regulator.

“We spoke to lots of people, we did some of our own thinking and we were trying to imagine what this independent internet regulator might look like.
But we uncovered some more deep-rooted systemic challenges that a single internet regulator couldn't really tackle,” said Ohrvik.

Systemic challenges faced by an independent internet regulator

The systemic challenges presented by Ohrvik are the need for better digital capabilities, society's need for agency, and the need for evidence.

Better digital capabilities

Ohrvik cites the example of Christopher Wylie, the whistleblower in the Cambridge Analytica scandal. As per Wylie, one of the weak points of the system is its lack of tech knowledge. The fact that he was asked a lot of basic questions by the Information Commissioner’s Office (the UK’s data regulator) that would not normally be asked of a database engineer is indicative of the overall challenges faced by the regulatory system. Tech awareness among the public is important.

Society needs agency

The second challenge is that society needs agency, which can help bring back its trust in tech. Ohrvik states that in the survey Doteveryone conducted, when people were asked for their views on reading terms and conditions, 58 percent said that they don't read them, 47 percent said they feel they have no choice but to accept terms and conditions on the internet, and 43 percent said there's no point in reading terms and conditions because tech companies will do what they want anyway. This last response especially signals a wider trend today, in which the public feels disempowered and cynical towards tech. This is also one of the main reasons why Ohrvik believes a regulatory system is needed to “re-energize” the public and give them “more power”.

Everybody needs evidence

Ohrvik states that it’s hard to get evidence about online harms and about some of the opportunities that arise from digital technologies. This is because:

a) you need a rigorous and longitudinal evidence base;
b) getting access to the data for the evidence is quite difficult (especially
from a large private multinational company that does not want to engage with government); and
c) it is hard to look under the bonnet of digital technologies: dealing with thousands of algorithms and their complexities makes it hard to make sense of what’s really happening.

Ohrvik then discussed the importance of having a separate office for responsible technology to counteract the systemic challenges listed above.

Having an Office for Responsible Technology

Ohrvik states that the Office for Responsible Technology would do three broad things: empowering regulators, informing policymakers and the public, and supporting people to seek redress.

Empowering regulators

This would include analyzing the processes that regulators have in place to ensure they are up to date, and recommending to the government the changes needed to put the right plan into action. Another main requirement is building up the digital capabilities of regulators. This would be done in a way that lets regulators pay for tech talent across the whole regulatory system, which in turn would help them understand the challenges related to digital technologies.

ODI: Regulating for responsible technology

Empowering regulators would also help shift regulators from being reactive and slow towards being proactive and fast-moving.

Informing policymakers and the public

This would involve communicating with the public and policymakers about developments related to tech regulation. It would further offer guidance and make longer-term engagements to promote positive long-term change in the public's relationship with digital technologies.

ODI: Regulating for responsible technology

For instance, a long-term campaign centered around media literacy could be conducted to tackle misinformation.
Similarly, a long-term campaign around helping people better understand their data rights could also be implemented.

Supporting people to seek redress

This is aimed at addressing the power imbalance between the public and tech companies. It can be done by auditing the processes, procedures, and technologies that tech companies have in place to protect the public from harm.

ODI: Regulating for responsible technology

For instance, spot checks can be carried out on algorithms or artificial intelligence to detect harmful content. While spot-checking, handling and moderation processes can also be checked to make sure they're working well, so that if certain processes don't work for the public, this can easily be redressed. Spotting harms at an early stage can further help people and make the regulatory system stronger.

In all, an Office for Responsible Technology is quite indispensable for promoting the responsible design of technologies and predicting their digital impact on society. By working with regulators on approaches that support responsible innovation, such an office can foster a healthy digital space for everyone.

Microsoft, Adobe, and SAP share new details about the Open Data Initiative
Congress passes ‘OPEN Government Data Act’ to make open data part of the US Code
Open Government Data Act makes non-sensitive public data publicly available in open and machine readable formats


Alteryx vs. Tableau: Choosing the right data analytics tool for your business

Guest Contributor
04 Mar 2019
6 min read
Data visualization is commonly used in the modern world, where most business decisions are made by analyzing data. One of its most significant benefits is that it lets us visually access huge amounts of data through easily understandable visuals. Data visualization is used in many areas, and popular tools include Tableau, Alteryx, Infogram, ChartBlocks, Datawrapper, Plotly, and Visual.ly. Tableau and Alteryx are industry-standard tools that have dominated the data analytics market for a few years now, and they are still going strong without any serious competition. In this article, we will look at the core differences between Alteryx and Tableau, which will help us decide which tool to use for which purposes.

Tableau is one of the top-rated tools helping analysts carry out business intelligence and data visualization activities. Using Tableau, users can generate compelling dashboards and stunning data visualizations. Tableau’s interactive user interface helps users quickly generate reports in which they can drill down into the information to a granular level.

Alteryx is a powerful tool widely used in data analytics that also provides meaningful insights to executive-level personnel. With its user-friendly interface, the user can extract, transform, and load data within the Alteryx tool.

Why use Alteryx with Tableau?

Using Alteryx with Tableau is a powerful combination when it comes to making value-added, data-driven decisions. With Alteryx, businesses can manipulate their data and feed it to the Tableau platform, which in turn can produce strong data visualizations. This helps businesses take appropriate actions that are backed up by data analysis.
Alteryx and Tableau are widely used within organizations where decisions are made based on the insights obtained from data analysis. When it comes to data handling, Alteryx is a powerful ETL platform in which data can be analyzed in different formats. When it comes to data representation, Tableau is a perfect match; further, Tableau reports can be shared across team members.

Nowadays, most businesses want to see real-time data and understand business trends. The combination of Alteryx and Tableau allows data analysts to analyze data and generate meaningful insights for users on the fly. Data analysis is executed within the Alteryx tool, where the raw data is handled, and the data representation or visualization is done in Tableau, so the two tools go hand in hand.

Tableau vs Alteryx

The list below contrasts the two tools point by point.

1. Alteryx is known as a smart data analytics platform; Tableau is known for its data visualization capabilities.
2. Alteryx can connect to different data sources and synthesize the raw data, supporting a standard ETL process; Tableau can connect to different data sources and provide data visualization within minutes of gathering the data.
3. Alteryx helps with data analysis; Tableau helps with building appealing graphs.
4. Alteryx's GUI is okay and widely accepted; Tableau's GUI is one of its best features, letting graphs be built easily using drag and drop.
5. Alteryx requires technical knowledge, because it involves data source integration and data blending; Tableau does not, because the data arrives already polished and the user only has to build graphs and visualizations.
6. Once the data blending activity is completed in Alteryx, users can share a file that can be consumed by Tableau; in Tableau,
Once the graphs are prepared, the reports can be easily shared among team members without any hassle. 7. A lot of flexibility while using this tool for data blending activity. 7. Flexibility while using the tool for data visualization. 8. Using this tool, the users will be able to do spatial and predictive analysis 8. Possible by representing the data in an appropriate format. 9.  One of the best tools when it comes to data preparations. 9. Not feasible to prepare the data in Tableau when it is compared to Alteryx. 10. Data representation cannot be done accurately. 10. It is a wonderful tool for data representation. 11. Has one time feeds- Annual fees 11. Has an option to pay monthly as well. 12. Has a drag and drop interface where the user can develop a workflow easily. 12. Has a drag and drop interface where the user will be able to build a visualization in no time. Alteryx and Tableau Integration As discussed earlier, these two tools have their own advantages and disadvantages, but when integrated together, they can do wonders with the data. This integration between Tableau and Alteryx makes the task of visualizing the Alteryx generated answers quite simple. The data is first loaded into the Alteryx tool and is then extracted in the form of .tde files (i.e. Tableau Data Extracted Files). These .tde files will be consumed by Tableau tool to do the data visualization part. On a regular basis, the data extracted file from Alteryx tool (i.e. .tde files) will be generated and will replace the old .tde files. Thus, by integrating Alteryx and Tableau, we can: Cleanse, combine, as well as collect all the data sources that are relevant and enrich them with the help of third-party data - everything in one workflow. Give analytical context to your data by providing predictive, location-based, and deep spatial analytics. Publish your analytic workflows’ results to Tableau for intuitive, rich visualizations that help you in making decisions more quickly. 
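As a rough illustration of this extract-and-refresh handoff, here is a stand-in sketch in plain Python: pandas plays the role of the Alteryx blending step, and a CSV file stands in for the .tde extract that Tableau would consume. The data and file name are hypothetical, and this is not Alteryx's or Tableau's own API.

```python
import pandas as pd

# "Alteryx" step: gather and blend raw sources (data and file names are hypothetical)
orders = pd.DataFrame(
    {"order_id": [1, 2, 3], "customer_id": [10, 10, 20], "amount": [99.5, 15.0, 42.0]}
)
customers = pd.DataFrame({"customer_id": [10, 20], "region": ["East", "West"]})

blended = orders.merge(customers, on="customer_id", how="left")

# Cleanse and enrich: drop rows with missing keys, add a derived column
blended = blended.dropna(subset=["customer_id"])
blended["amount_band"] = pd.cut(blended["amount"], bins=[0, 25, 100], labels=["low", "high"])

# "Tableau" step: write a fresh extract that replaces the previous one,
# mirroring the periodic .tde refresh cycle described above (CSV used for portability)
blended.to_csv("sales_extract.csv", index=False)
print(blended.shape)  # (3, 5)
```

The point of the sketch is the division of labor: all blending and cleansing happens before the extract is written, so the visualization layer only ever sees analysis-ready data.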
Tableau and Alteryx do not require any advanced skill set, as both tools have simple drag-and-drop interfaces. You can create a workflow in Alteryx that processes data in a sequential manner; in a similar way, Tableau enables you to build charts by dragging the various fields to be utilized to specified areas. Companies that have a lot of data to analyze, and can spend large amounts of money on analytics, can use these two tools. There are no significant challenges in integrating Tableau and Alteryx.

Conclusion

When Tableau and Alteryx are used together, businesses benefit because senior management can make decisions based on the data insights provided by these tools. The two tools complement each other and provide high-quality service to businesses.

Author Bio

Savaram Ravindra is a Senior Content Contributor at Mindmajix.com. His passion lies in writing articles on different niches, which include some of the most innovative and emerging software technologies, digital marketing, businesses, and so on. As a guest blogger, he helps his company acquire quality traffic to its website and build its domain name and search engine authority. Before devoting his work full time to the writing profession, he was a programmer analyst at Cognizant Technology Solutions. Follow him on LinkedIn and Twitter.

How to share insights using Alteryx Server
How to do data storytelling well with Tableau [Video]
A tale of two tools: Tableau and Power BI
NeurIPS Invited Talk: Reproducible, Reusable, and Robust Reinforcement Learning

Prasad Ramesh
25 Feb 2019
6 min read
On the second day of the NeurIPS conference held in Montreal, Canada last year, Dr. Joelle Pineau presented a talk titled 'Reproducible, Reusable, and Robust Reinforcement Learning'. She is an Associate Professor at McGill University and a Research Scientist at Facebook, Montreal.

Reproducibility and crisis

Dr. Pineau starts by quoting Bollen et al. in a National Science Foundation report: "Reproducibility refers to the ability of a researcher to duplicate the results of a prior study, using the same materials as were used by the original investigator. Reproducibility is a minimum necessary condition for a finding to be believable and informative."

Reproducibility is not a new concept and has appeared across various fields. In a 2016 Nature survey of 1,576 scientists, 52% said that there is a significant reproducibility crisis, and 38% agreed there is a slight crisis.

Reinforcement learning is a very general framework for decision making. About 20,000 papers were published in this area in 2018 alone, with the year not even over, compared to just about 2,000 papers in the year 2000. The focus of the talk is the class of reinforcement learning methods that has received the most attention and shown a lot of promise for practical applications: policy gradients. In this method, the idea is that the policy/strategy is learned as a function, and this function can be represented by a neural network.

Pineau picks four research papers on policy gradients that come up in the literature most often. Her team used the MuJoCo simulator to compare the four algorithms; it is not important which algorithm is which, as the point is the approach of comparing them empirically. The results differed across environments (Hopper, Swimmer), and the variance was also drastically different for a given algorithm.
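The variance point can be made concrete with a minimal sketch of reporting a mean together with a defined 95% confidence interval over n=5 random seeds; the reward numbers below are invented for illustration, not taken from the talk.

```python
import numpy as np

# Hypothetical final returns of two RL algorithms over five random seeds
algo_a = np.array([3120.0, 2480.0, 3590.0, 2210.0, 3400.0])
algo_b = np.array([2950.0, 3010.0, 2870.0, 3100.0, 2990.0])

T_CRIT_DF4 = 2.776  # two-sided 95% Student's t critical value for df = n - 1 = 4

for name, runs in [("A", algo_a), ("B", algo_b)]:
    sem = runs.std(ddof=1) / np.sqrt(len(runs))  # standard error of the mean
    print(f"algo {name}: mean = {runs.mean():.0f}, 95% CI half-width = {T_CRIT_DF4 * sem:.0f}")
```

With these made-up numbers, A's interval is roughly seven times wider than B's, so a single-seed comparison could declare either algorithm the winner; stating what a shaded region shows (confidence interval versus standard deviation) is exactly the point Pineau makes.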
Even when using different code and policies, the results for a given algorithm were very different across environments. It was observed that people writing papers are not always motivated to find the best possible hyperparameters and very often use the defaults. When the best possible hyperparameters were used and two algorithms were compared fairly, the results were clean and distinguishable. Here, n=5 means five different random seeds. Picking n influences the size of the confidence interval (CI); n=5 was used because most papers ran at most 5 trials. Some would also run "n" runs, where n was not specified, and report only the top 5 results. That is a good way to show good results, but it introduces a strong positive bias, and the variance appears smaller than it is.

Source: NeurIPS website

Some people argue that the field of reinforcement learning is broken. Pineau stresses that this is not her message, and notes that fair comparisons don't always give the cleanest results. Different methods may have very distinct sets of hyperparameters in number, value, and variable sensitivity. Most importantly, the best method to choose heavily depends on the data and the computation budget you can spare; this is an important point for achieving the reproducibility discussed when applying algorithms to your own problem.

Pineau and her team surveyed 50 RL papers from 2018 and found that significance testing was applied in only 5% of them. Graphs with shading appear in many papers, but without a statement of what the shaded area represents, the reader cannot know whether it is a confidence interval or a standard deviation. Pineau says: "Shading is good but shading is not knowledge unless you define it properly."

A reproducibility checklist

For people publishing papers, Pineau presents a checklist created in consultation with her colleagues. It says that for algorithms, papers should include a clear description, an analysis of complexity, and a link to source code and dependencies.
For theoretical claims, a statement of the result, a clear explanation of any assumptions, and a complete proof of the claim should be included. There are also items in the checklist for figures and tables. Here is the complete checklist:

Source: NeurIPS website

Role of infrastructure on reproducibility

One might think that since the experiments are run on computers, the results would be more predictable than in other sciences. But even in hardware there is room for variability, so specifying it, for example the properties of CUDA operations, can be useful.

On some myths

"Reinforcement Learning is the only case of ML where it is acceptable to test on your training set." Do you really have to train and test on the same task? Pineau says you don't, and presents three examples. In the first, an agent moves around an image in four directions and then identifies what the image is; with higher n, the variance is greatly reduced. The second is an Atari game where the black background is replaced with videos, which act as a source of noise; this is a better representation of the real world than a limited simulated environment where external real-world factors are absent. She then talks about multi-task RL in photorealistic simulators as a way to incorporate noise. The simulator is an emulator built from images and videos taken from real homes. The environments created are completely photorealistic and have properties of the real world, for example, mirror reflections.

Working in the real world is very different from a limited simulation; for one, a lot more data is required to represent the real world. The talk ends with the message that science is not a competitive sport but a collective institution that aims to understand and explain. There is also an ICLR reproducibility challenge that you can join.
The goal is to get community members to try to reproduce the empirical results presented in a paper, on an open-review basis. Last year, 80% of authors changed their paper based on the feedback given by contributors who tested it. Head over to the NeurIPS Facebook page for the entire lecture and other sessions from the conference.

How NeurIPS 2018 is taking on its diversity and inclusion challenges
NeurIPS 2018: Rethinking transparency and accountability in machine learning
Researchers unveil a new algorithm that allows analyzing high-dimensional data sets more effectively, at NeurIPS conference
What can happen when artificial intelligence decides on your loan request

Guest Contributor
23 Feb 2019
5 min read
As the number of potential borrowers continues to grow rapidly, loan companies and banks are having a hard time figuring out how likely their customers are to pay them back. Getting information on clients' creditworthiness is probably the greatest challenge for most financial companies, and it especially concerns those clients who don't have any credit history yet.

There is no denying that the alternative lending business has become one of the most influential financial branches both in the USA and in Europe. Debt is a huge business these days, and it needs a lot of resources. In such a challenging situation, any means that can improve productivity and reduce the risk of mistakes in financial activities is warmly welcomed. This is how Artificial Intelligence became the redemption for loan providers.

Fortunately for lenders, AI deals with this task by following borrowers' digital footprints. For example, some digital lending applications collect and analyze an individual's web browsing history (upon receiving their personal agreement on the use of this information). In some markets, such as China and parts of Africa, lenders may also look through social network profiles, geolocation data, and messages sent to friends and family, even counting the number of punctuation mistakes. The collected information helps loan providers make the right decision on their clients' creditworthiness and avoid long loan processes.

When AI Overfits

Unfortunately, there is another side to the coin. There is a theory which states that people who pay for their gas inside the petrol station, rather than at the pump, are usually smokers, and that is a group whose creditworthiness is estimated to be low. But what if this poor guy simply wanted to buy a Snickers? This example shows that if a lender takes the information gathered by AI software at face value, without checking it carefully, they may easily end up making bad mistakes and misinterpretations.
Artificial Intelligence in the financial sector may significantly reduce costs, efforts, and further financial complications, but there are hidden social costs such as the one above. A robust analysis, design, implementation, and feedback framework is necessary to meaningfully counter AI bias.

Other Use Cases for AI in Finance

Of course, there are also plenty of examples of how AI helps to improve customer experience in the financial sector. Some startups use AI software to help clients find the company that is best at providing them with the required service: they juxtapose the clients' requirements with the companies' services to find perfect matches. Even though this technology is reminiscent of how dating apps work, such applications can drastically save time for both parties and help borrowers pay faster.

AI can also be used for streamlining finances. It helps banks and alternative lending companies automate some of their working processes, such as basic customer service, contract management, or transaction monitoring. A good example is Upstart, the pet project of two former Google employees. The startup originally aimed to help young people lacking a credit history to get a loan or some other kind of financial support. For this purpose, the company uses the clients' educational background and experience, taking into account things such as their attained degrees and school/university attendance. However, such an approach to lending may end up being a little snobbish: it can simply overlook large groups of the population who can't afford higher education, depriving them of the opportunity to get a loan. Nonetheless, one of the main goals of the company was to automate as many of its operating procedures as possible, and by 2018 more than 60% of all its loans had been fully automated, with more to come.
We cannot automate fairness and opportunity, yet

The implementation of machine learning in providing loans based on people's digital footprints may lead to ethical and legal disputes. Even today, some people state that the use of AI in the financial sector has encouraged inequality in the number of loans provided to the black and white populations of the USA. They believe that AI perpetuates the bias against minorities and leaves black people "underbanked."

Both lending companies and banks should remember that the quality of work done with machine learning methods highly depends on people: both the employees who use the software and the AI developers who create and fine-tune it. So we should see AI in loan management as a useful tool, but not as a replacement for humans.

Author Bio

Darya Shmat is a business development representative at Iflexion, where Darya expertly applies 10+ years of practical experience to help banking and financial industry clients find the right development or QA solution.

Blockchain governance and uses beyond finance – Carnegie Mellon university podcast
Why Retailers need to prioritize eCommerce Automation in 2019
Glancing at the Fintech growth story – Powered by ML, AI & APIs
Artificial General Intelligence, did it gain traction in research in 2018?

Prasad Ramesh
21 Feb 2019
4 min read
In 2017, we predicted that artificial general intelligence would gain traction in research and that certain areas would aid progress towards AGI systems. The prediction was made among a set of other AI predictions in an article titled 18 striking AI Trends to watch in 2018. Let's see how 2018 went for AGI research.

Artificial general intelligence, or AGI, is an area of AI in which efforts are made to give machines intelligence closer to the complex nature of human intelligence. Such a system could, in theory, perform any task that a human can, learning as it progresses and collects data/sensory input. Human intelligence also involves learning a skill and applying it to other areas. For example, if a human learns Dota 2, they can apply that experience to other similar strategy games; only the UI and the characters in the game will be different. A machine cannot do this: AI systems are trained for a specific area, and the learned skills cannot be transferred to another task with complete efficiency, or without the fear of incurring technical debt. That is, a machine cannot generalize skills as a human can.

Come 2018, we saw DeepMind's AlphaZero, something that at least begins to show what an idea of AGI could look like. But even this is not really AGI: an AlphaZero-like system may excel at playing a variety of games, or even understand the rules of novel games, but it cannot deal with the real world and its challenges.

Some groundwork and basic ideas for AGI were set out in a paper by the US Air Force. In the paper, Dr. Paul Yaworsky describes artificial general intelligence as an effort to cover the gap between lower- and higher-level work in AI: an attempt, so to speak, to make sense of the abstract nature of intelligence. The paper also presents an organized hierarchical model for intelligence that takes the external world into account.
One of Packt's authors, Sudharsan Ravichandiran, thinks that: "Great things are happening around RL research each and every day. Deep Meta reinforcement learning will be the future of AI where we will be so close to achieving artificial general intelligence (AGI). Instead of creating different models to perform different tasks, with AGI, a single model can master a wide variety of tasks and mimics the human intelligence."

Honda came up with a program called Curious Minded Machine in association with MIT, the University of Pennsylvania, and the University of Washington. The idea sounds simple at first: build a model of how children "learn to learn". But something that children do instinctively is a very complex task for a machine. The teams will showcase their work in the various fields they are working on at the end of the program's three years.

There was another effort, by SingularityNET and Mindfire, to explore AI and "cracking the brain code", aimed at better understanding the functioning of the human brain. Together these two companies will focus on three key areas: talent, AI services, and AI education. Mindfire Mission 2 will take place in early 2019 in Switzerland.

These were the areas of work we saw on AGI in 2018. Only small steps were taken in this research direction, and nothing noteworthy gained mainstream traction. On average, experts think AGI would take at least another 100 years to become a reality, as per Martin Ford's interviews with machine learning experts for his bestselling book, 'Architects of Intelligence'.

OpenAI released a new language model called GPT-2 in February 2019. Given just a line of text as a prompt, the model can generate whole articles, with results good enough to pass as something written by a human. This does not mean that the machine actually understands human language; it is merely generating sentences by associating words.
This development has triggered passionate discussions within the community, not just on the technical merits of the findings, but also on the dangers and implications of applying such research to the larger society. Get ready to see more tangible research in AGI in the next few decades.

The US Air Force lays groundwork towards artificial general intelligence based on hierarchical model of intelligence
Facebook's artificial intelligence research team, FAIR, turns five. But what are its biggest accomplishments?
Unity and Deepmind partner to develop Virtual worlds for advancing Artificial Intelligence
The rise of machine learning in the investment industry

Natasha Mathur
15 Feb 2019
13 min read
The investment industry has evolved dramatically over the last several decades and continues to do so amid increased competition, technological advances, and a challenging economic environment. In this article, we will review several key trends that have shaped the investment environment in general, and the context for algorithmic trading more specifically. This article is an excerpt taken from the book 'Hands-On Machine Learning for Algorithmic Trading' written by Stefan Jansen. The book explores the strategic perspective, conceptual understanding, and practical tools to add value from applying ML to the trading and investment process.

The trends that have propelled algorithmic trading and ML to their current prominence include:

- Changes in the market microstructure, such as the spread of electronic trading and the integration of markets across asset classes and geographies
- The development of investment strategies framed in terms of risk-factor exposure, as opposed to asset classes
- The revolutions in computing power, data generation and management, and analytic methods
- The outperformance of the pioneers in algorithmic trading relative to human, discretionary investors

In addition, the financial crises of 2001 and 2008 have affected how investors approach diversification and risk management, and have given rise to low-cost passive investment vehicles in the form of exchange-traded funds (ETFs). Amid low yields and low volatility after the 2008 crisis, cost-conscious investors shifted $2 trillion from actively managed mutual funds into passively managed ETFs. Competitive pressure is also reflected in lower hedge fund fees, which dropped from the traditional 2% annual management fee and 20% take of profits to an average of 1.48% and 17.4%, respectively, in 2017. Let's have a look at how ML has come to play a strategic role in algorithmic trading.
Factor investing and smart beta funds

The return provided by an asset is a function of the uncertainty, or risk, associated with the financial investment. An equity investment implies, for example, assuming a company's business risk, and a bond investment implies assuming default risk. To the extent that specific risk characteristics predict returns, identifying and forecasting the behavior of these risk factors becomes a primary focus when designing an investment strategy. It yields valuable trading signals and is the key to superior active-management results. The industry's understanding of risk factors has evolved very substantially over time and has impacted how ML is used for algorithmic trading.

Modern Portfolio Theory (MPT) introduced the distinction between idiosyncratic and systematic sources of risk for a given asset. Idiosyncratic risk can be eliminated through diversification, but systematic risk cannot. In the early 1960s, the Capital Asset Pricing Model (CAPM) identified a single factor driving all asset returns: the return on the market portfolio in excess of T-bills. The market portfolio consists of all tradable securities, weighted by their market value. The systematic exposure of an asset to the market is measured by beta, the covariance between the returns of the asset and the market portfolio, divided by the variance of the market portfolio's returns.

The recognition that the risk of an asset does not depend on the asset in isolation, but rather on how it moves relative to other assets and to the market as a whole, was a major conceptual breakthrough. In other words, assets do not earn a risk premium because of their specific, idiosyncratic characteristics, but because of their exposure to underlying factor risks. However, a large body of academic literature and long investing experience have disproved the CAPM prediction that asset risk premiums depend only on their exposure to a single factor measured by the asset's beta; instead, numerous additional risk factors have since been discovered.
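Beta in this sense can be estimated directly from return series. The sketch below uses simulated data, not real market returns: an asset is constructed to move 1.5x with the market plus its own idiosyncratic noise, and the estimator recovers a value near 1.5.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated monthly excess returns: the market, and an asset that moves
# 1.5x with the market on top of its own idiosyncratic noise
market = rng.normal(0.01, 0.04, size=120)
asset = 1.5 * market + rng.normal(0.0, 0.02, size=120)

# Beta = Cov(asset, market) / Var(market)
beta = np.cov(asset, market, ddof=1)[0, 1] / np.var(market, ddof=1)
print(f"estimated beta: {beta:.2f}")  # close to 1.5
```

This is the same quantity a regression of asset returns on market returns would produce as its slope coefficient.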
A factor is a quantifiable signal, attribute, or any variable that has historically correlated with future stock returns and is expected to remain correlated in the future. These risk factors were labeled anomalies, since they contradicted the Efficient Market Hypothesis (EMH), which held that market equilibrium would always price securities according to the CAPM, so that no other factors should have predictive power. The economic theory behind factors can be either rational, where factor risk premiums compensate for low returns during bad times, or behavioral, where agents fail to arbitrage away excess returns.

Well-known anomalies include the value, size, and momentum effects, which help predict returns while controlling for the CAPM market factor. The size effect, discovered by Banz (1981) and Reinganum (1981), rests on small firms systematically outperforming large firms. The value effect (Basu 1982) states that firms with low valuation metrics outperform: firms with low price multiples, such as the price-to-earnings or price-to-book ratios, perform better than their more expensive peers (as suggested by the inventors of value investing, Benjamin Graham and David Dodd, and popularized by Warren Buffett). The momentum effect, discovered in the late 1980s by, among others, Clifford Asness, the founding partner of AQR, states that stocks with good momentum, in terms of recent 6-12 month returns, have higher returns going forward than poor-momentum stocks with similar market risk.

Researchers also found that the value and momentum factors explain returns for stocks outside the US, as well as for other asset classes, such as bonds, currencies, and commodities, along with additional risk factors. In fixed income, the value strategy is called riding the yield curve and is a form of the duration premium. In commodities, it is called the roll return, with a positive return for an upward-sloping futures curve and a negative return otherwise.
In foreign exchange, the value strategy is called carry. There is also an illiquidity premium: securities that are more illiquid trade at low prices and have high average excess returns relative to their more liquid counterparts. Bonds with higher default risk tend to have higher returns on average, reflecting a credit risk premium. Since investors are willing to pay for insurance against high volatility when returns tend to crash, sellers of volatility protection in options markets tend to earn high returns.

Multifactor models define risks in broader and more diverse terms than just the market portfolio. In 1976, Stephen Ross proposed the arbitrage pricing theory, which asserted that investors are compensated for multiple systematic sources of risk that cannot be diversified away. The three most important macro factors are growth, inflation, and volatility, in addition to productivity, demographic, and political risk. In 1992, Eugene Fama and Kenneth French combined the equity risk factors size and value with a market factor into a single model that better explained cross-sectional stock returns. They later added a model that also included bond risk factors to simultaneously explain returns for both asset classes.

A particularly attractive aspect of risk factors is their low or negative correlation. Value and momentum risk factors, for instance, are negatively correlated, reducing risk and increasing risk-adjusted returns above and beyond the benefit implied by the individual factors. Furthermore, using leverage and long-short strategies, factor strategies can be combined into market-neutral approaches. The combination of long positions in securities exposed to positive risks with underweight or short positions in securities exposed to negative risks allows for the collection of dynamic risk premiums.
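The long-short construction can be sketched in a few lines using the momentum factor as an example. The tickers and trailing returns below are made up purely for illustration, and the top/bottom split is a simplification of how factor portfolios are actually built.

```python
import pandas as pd

# Hypothetical trailing 12-month returns for a toy universe of six stocks
trailing = pd.Series(
    {"AAA": 0.32, "BBB": -0.05, "CCC": 0.11, "DDD": 0.27, "EEE": -0.12, "FFF": 0.04}
)

# Rank by momentum: long the top two, short the bottom two
ranked = trailing.sort_values(ascending=False)
longs, shorts = list(ranked.index[:2]), list(ranked.index[-2:])

# Dollar-neutral weights: +0.5 on each long leg, -0.5 on each short leg
weights = pd.Series(0.0, index=trailing.index)
weights[longs], weights[shorts] = 0.5, -0.5

print("long:", longs)                  # ['AAA', 'DDD']
print("short:", shorts)                # ['BBB', 'EEE']
print("net exposure:", weights.sum())  # 0.0
```

Because the long and short legs offset, the portfolio's net market exposure is zero: the position collects the momentum premium itself rather than the market's direction.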
As a result, the factors that explained returns above and beyond the CAPM were incorporated into investment styles that tilt portfolios in favor of one or more factors, and assets began to migrate into factor-based portfolios. The 2008 financial crisis underlined how asset-class labels could be highly misleading and create a false sense of diversification when investors do not look at the underlying factor risks, as asset classes came crashing down together.

Over the past several decades, quantitative factor investing has evolved from a simple approach based on two or three styles to multifactor smart or exotic beta products. Smart beta funds crossed $1 trillion AUM in 2017, testifying to the popularity of the hybrid investment strategy that combines active and passive management. Smart beta funds take a passive strategy but modify it according to one or more factors, such as favoring cheaper stocks or screening them according to dividend payouts, to generate better returns. This growth has coincided with increasing criticism of the high fees charged by traditional active managers, as well as heightened scrutiny of their performance.

The ongoing discovery and successful forecasting of risk factors that, either individually or in combination with other risk factors, significantly impact future asset returns across asset classes is a key driver of the surge in ML in the investment industry.

Algorithmic pioneers outperform humans at scale

The track record and growth of Assets Under Management (AUM) of firms that spearheaded algorithmic trading has played a key role in generating investor interest and subsequent industry efforts to replicate their success. Systematic funds differ from HFT in that trades may be held significantly longer, seeking to exploit arbitrage opportunities rather than advantages from sheer speed.
Systematic strategies that mostly or exclusively rely on algorithmic decision-making were most famously introduced by the mathematician James Simons, who founded Renaissance Technologies in 1982 and built it into the premier quant firm. Its secretive Medallion Fund, which is closed to outsiders, has earned an estimated annualized return of 35% since 1982.

DE Shaw, Citadel, and Two Sigma, three of the most prominent quantitative hedge funds that use systematic strategies based on algorithms, rose into the all-time top 20 performers for the first time in 2017, in terms of total dollars earned for investors, after fees and since inception. DE Shaw, founded in 1988 and with $47 billion AUM in 2018, joined the list at number 3. Citadel, started in 1990 by Kenneth Griffin, manages $29 billion and ranks 5th, and Two Sigma, started only in 2001 by DE Shaw alumni John Overdeck and David Siegel, has grown from $8 billion AUM in 2011 to $52 billion in 2018. Bridgewater, started in 1975 and with over $150 billion AUM, continues to lead, due in part to its Pure Alpha Fund, which also incorporates systematic strategies.

Similarly, on the Institutional Investor 2017 Hedge Fund 100 list, five of the top six firms rely largely or completely on computers and trading algorithms to make investment decisions, and all of them have been growing their assets in an otherwise challenging environment. Several quantitatively focused firms climbed several ranks and in some cases grew their assets by double-digit percentages. Number-2-ranked Applied Quantitative Research (AQR) grew its hedge fund assets 48% in 2017, to $69.7 billion, and managed $187.6 billion firm-wide. Among all hedge funds, ranked by compounded performance over the last three years, the quant-based funds run by Renaissance Technologies achieved ranks 6 and 24, Two Sigma rank 11, DE Shaw ranks 18 and 32, and Citadel ranks 30 and 37. Beyond the top performers, algorithmic strategies have worked well in the last several years.
In the past five years, quant-focused hedge funds gained about 5.1% per year, while the average hedge fund rose 4.3% per year in the same period.

ML-driven funds attract $1 trillion AUM

The familiar three revolutions in computing power, data, and ML methods have made the adoption of systematic, data-driven strategies not only more compelling and cost-effective but also a key source of competitive advantage. As a result, algorithmic approaches are not only finding wider application in the hedge-fund industry that pioneered these strategies but also across a broader range of asset managers and even passively managed vehicles such as ETFs. In particular, predictive analytics using machine learning and algorithmic automation play an increasingly prominent role in all steps of the investment process across asset classes, from idea generation and research to strategy formulation and portfolio construction, trade execution, and risk management.

Estimates of industry size vary because there is no objective definition of a quantitative or algorithmic fund, and many traditional hedge funds or even mutual funds and ETFs are introducing computer-driven strategies or integrating them into a discretionary environment in a human-plus-machine approach. Morgan Stanley estimated in 2017 that algorithmic strategies had grown at 15% per year over the past six years and controlled about $1.5 trillion between hedge funds, mutual funds, and smart beta ETFs. Other reports suggest the quantitative hedge fund industry was about to exceed $1 trillion AUM, nearly doubling its size since 2010 amid outflows from traditional hedge funds. In contrast, total hedge fund industry capital hit $3.21 trillion according to the latest global Hedge Fund Research report. The market research firm Preqin estimates that almost 1,500 hedge funds make a majority of their trades with help from computer models. Quantitative hedge funds are now responsible for 27% of all US stock trades by investors, up from 14% in 2013.
But many use data scientists, or quants, who, in turn, use machines to build large statistical models (WSJ). In recent years, however, funds have moved toward true ML, where artificially intelligent systems can analyze large amounts of data at speed and improve themselves through such analyses. Recent examples include Rebellion Research, Sentient, and Aidyia, which rely on evolutionary algorithms and deep learning to devise fully automatic Artificial Intelligence (AI)-driven investment platforms.

From the core hedge fund industry, the adoption of algorithmic strategies has spread to mutual funds and even passively managed exchange-traded funds in the form of smart beta funds, and to discretionary funds in the form of quantamental approaches.

The emergence of quantamental funds

Two distinct approaches have evolved in active investment management: systematic (or quant) and discretionary investing. Systematic approaches rely on algorithms for a repeatable and data-driven approach to identify investment opportunities across many securities; in contrast, a discretionary approach involves an in-depth analysis of a smaller number of securities. These two approaches are becoming more similar as fundamental managers take more data-science-driven approaches. Even fundamental traders now arm themselves with quantitative techniques, accounting for $55 billion of systematic assets, according to Barclays. Agnostic to specific companies, quantitative funds trade patterns and dynamics across a wide swath of securities. Quants now account for about 17% of total hedge fund assets, data compiled by Barclays shows.

Point72 Asset Management, with $12 billion in assets, has been shifting about half of its portfolio managers to a man-plus-machine approach. Point72 is also investing tens of millions of dollars into a group that analyzes large amounts of alternative data and passes the results on to traders.
Investments in strategic capabilities

Rising investments in related capabilities, namely technology, data, and, most importantly, skilled humans, highlight how significant algorithmic trading using ML has become for competitive advantage, especially in light of the rising popularity of passive, indexed investment vehicles, such as ETFs, since the 2008 financial crisis.

Morgan Stanley noted that only 23% of its quant clients say they are not considering or not already using ML, down from 44% in 2016. Guggenheim Partners LLC built what it calls a supercomputing cluster for $1 million at the Lawrence Berkeley National Laboratory in California to help crunch numbers for Guggenheim's quant investment funds. Electricity for the computers costs another $1 million a year.

AQR is a quantitative investment group that relies on academic research to identify and systematically trade factors that have, over time, proven to beat the broader market. The firm used to eschew the purely computer-powered strategies of quant peers such as Renaissance Technologies or DE Shaw. More recently, however, AQR has begun to seek profitable patterns in markets using ML to parse through novel datasets, such as satellite pictures of shadows cast by oil wells and tankers.

The leading firm BlackRock, with over $5 trillion AUM, also bets on algorithms to beat discretionary fund managers by heavily investing in SAE, a systematic trading firm it acquired during the financial crisis. Franklin Templeton bought Random Forest Capital, a debt-focused, data-led investment company, for an undisclosed amount, hoping that its technology can support the wider asset manager.

We looked at how ML plays a role in different industry trends around algorithmic trading. If you want to learn more about the design and execution of algorithmic trading strategies, and use cases of ML in algorithmic trading, be sure to check out the book 'Hands-On Machine Learning for Algorithmic Trading'.
Using machine learning for phishing domain detection [Tutorial]

Anatomy of an automated machine learning algorithm (AutoML)

10 machine learning algorithms every engineer needs to know

Natasha Mathur
14 Feb 2019
6 min read

A Quick look at ML in algorithmic trading strategies

Algorithmic trading relies on computer programs that execute algorithms to automate some, or all, elements of a trading strategy. Algorithms are a sequence of steps or rules to achieve a goal and can take many forms. In the case of machine learning (ML), algorithms pursue the objective of learning other algorithms, namely rules, to achieve a target based on data, such as minimizing a prediction error. In this article, we have a look at use cases of ML and how it is used in algorithmic trading strategies.

These algorithms encode various activities of a portfolio manager who observes market transactions and analyzes relevant data to decide on placing buy or sell orders. The sequence of orders defines the portfolio holdings that, over time, aim to produce returns that are attractive to the providers of capital, taking into account their appetite for risk.

This article is an excerpt taken from the book 'Hands-On Machine Learning for Algorithmic Trading' written by Stefan Jansen. The book explores effective trading strategies in real-world markets using NumPy, spaCy, pandas, scikit-learn, and Keras.

Ultimately, the goal of active investment management consists of achieving alpha, that is, returns in excess of the benchmark used for evaluation. The fundamental law of active management applies the information ratio (IR) to express the value of active management as the ratio of portfolio returns above the returns of a benchmark, usually an index, to the volatility of those returns. It approximates the information ratio as the product of the information coefficient (IC), which measures the quality of forecasts as their correlation with outcomes, and the breadth of a strategy, expressed as the square root of the number of bets. The use of ML for algorithmic trading, in particular, aims for more efficient use of conventional and alternative data, with the goal of producing both better and more actionable forecasts, hence improving the value of active management.
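The fundamental law described above can be illustrated with a small Python sketch. The forecasts and outcomes below are simulated placeholders, not market data; the point is only the relationship IR ≈ IC × √breadth:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical forecasts and subsequent outcomes for 1,000 independent bets
n_bets = 1000
outcomes = rng.normal(size=n_bets)
forecasts = 0.05 * outcomes + rng.normal(size=n_bets)  # weakly predictive signal

# Information coefficient: correlation between forecasts and outcomes
ic = np.corrcoef(forecasts, outcomes)[0, 1]

# Fundamental law of active management: IR ~ IC * sqrt(breadth)
ir = ic * np.sqrt(n_bets)
print(f"IC = {ic:.3f}, approximate IR = {ir:.2f}")
```

Even a very weak per-bet signal (a small IC) can produce an attractive IR if the strategy has enough breadth, which is one reason ML's ability to generate many modest forecasts across securities is so valuable.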
Quantitative strategies have evolved and become more sophisticated in three waves:

In the 1980s and 1990s, signals often emerged from academic research and used a single or very few inputs derived from market and fundamental data. These signals are now largely commoditized and available as ETFs, such as basic mean-reversion strategies.

In the 2000s, factor-based investing proliferated. Funds used algorithms to identify assets exposed to risk factors like value or momentum to seek arbitrage opportunities. Redemptions during the early days of the financial crisis triggered the quant quake of August 2007 that cascaded through the factor-based fund industry. These strategies are now also available as long-only smart beta funds that tilt portfolios according to a given set of risk factors.

The third era is driven by investments in ML capabilities and alternative data to generate profitable signals for repeatable trading strategies. Factor decay is a major challenge: the excess returns from new anomalies have been shown to drop by a quarter from discovery to publication, and by over 50% after publication, due to competition and crowding.

There are several categories of trading strategies that use algorithms to execute trading rules:

Short-term trades that aim to profit from small price movements, for example, due to arbitrage

Behavioral strategies that aim to capitalize on anticipating the behavior of other market participants

Programs that aim to optimize trade execution, and

A large group of trading strategies based on predicted pricing

The HFT funds discussed above most prominently rely on short holding periods to benefit from minor price movements based on bid-ask arbitrage or statistical arbitrage. Behavioral algorithms usually operate in lower liquidity environments and aim to anticipate moves by a larger player likely to significantly impact the price.
The expectation of the price impact is based on sniffing algorithms that generate insights into other market participants' strategies, or market patterns such as forced trades by ETFs.

Trade-execution programs aim to limit the market impact of trades and range from the simple slicing of trades to match time-weighted average pricing (TWAP) or volume-weighted average pricing (VWAP) to more sophisticated schemes. Simple algorithms leverage historical patterns, whereas more sophisticated algorithms take into account transaction costs, implementation shortfall, or predicted price movements. These algorithms can operate at the security or portfolio level, for example, to implement multileg derivative or cross-asset trades.

Let's now have a look at different applications in trading where ML is of key importance.

Use Cases of ML for Trading

ML extracts signals from a wide range of market, fundamental, and alternative data, and can be applied at all steps of the algorithmic trading-strategy process. Key applications include:

Data mining to identify patterns and extract features

Supervised learning to generate risk factors or alphas and create trade ideas

Aggregation of individual signals into a strategy

Allocation of assets according to risk profiles learned by an algorithm

The testing and evaluation of strategies, including through the use of synthetic data

The interactive, automated refinement of a strategy using reinforcement learning

Supervised learning for alpha factor creation and aggregation

The main rationale for applying ML to trading is to obtain predictions of asset fundamentals, price movements, or market conditions. A strategy can leverage multiple ML algorithms that build on each other. Downstream models can generate signals at the portfolio level by integrating predictions about the prospects of individual assets, capital market expectations, and the correlation among securities. Alternatively, ML predictions can inform discretionary trades as in the quantamental approach outlined above.
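A minimal, self-contained sketch of the supervised-learning step described above, using ordinary least squares in place of a more sophisticated model and synthetic factor data in place of real market data (the factor names, coefficients, and decile rule are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic factor exposures (e.g. value, momentum, low-vol) for 500 stocks
X = rng.normal(size=(500, 3))
# Next-period returns: a weak linear relation to the factors plus noise
true_beta = np.array([0.02, 0.03, -0.01])
y = X @ true_beta + rng.normal(scale=0.05, size=500)

# Fit on 400 stocks, evaluate the signal on the remaining 100
X_train, y_train = X[:400], y[:400]
X_test, y_test = X[400:], y[400:]

# Ordinary least squares as the simplest possible supervised alpha model
beta, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
signal = X_test @ beta

# Turn predictions into a long/short book: buy top decile, sell bottom decile
longs = signal >= np.quantile(signal, 0.9)
shorts = signal <= np.quantile(signal, 0.1)
print(f"long {int(longs.sum())} stocks, short {int(shorts.sum())} stocks")
```

In practice, the regression would be replaced by the kinds of models the book covers (gradient boosting, neural networks), and the decile rule by a proper portfolio-construction step, but the pipeline shape, factors in, predictions out, positions derived from predictions, is the same.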
ML predictions can also target specific risk factors, such as value or volatility, or implement technical approaches, such as trend following or mean reversion.

Asset allocation

ML has been used to allocate portfolios based on decision-tree models that compute a hierarchical form of risk parity. As a result, risk characteristics are driven by patterns in asset prices rather than by asset classes, and achieve superior risk-return characteristics.

Testing trade ideas

Backtesting is a critical step in selecting successful algorithmic trading strategies. Cross-validation is a key ML technique for generating reliable out-of-sample results when combined with appropriate methods to correct for multiple testing. The time series nature of financial data requires modifications to the standard approach to avoid look-ahead bias or otherwise contaminating the data used for training, validation, and testing. In addition, the limited availability of historical data has given rise to alternative approaches that use synthetic data.

Reinforcement learning

Trading takes place in a competitive, interactive marketplace. Reinforcement learning aims to train agents to learn a policy function based on rewards.

In this article, we briefly discussed how ML has become a key ingredient for different stages of algorithmic trading strategies. If you want to learn more about trading strategies that use ML, be sure to check out the book 'Hands-On Machine Learning for Algorithmic Trading'.

Using machine learning for phishing domain detection [Tutorial]

Anatomy of an automated machine learning algorithm (AutoML)

10 machine learning algorithms every engineer needs to know

Bhagyashree R
28 Jan 2019
8 min read

The new tech worker movement: How did we get here? And what comes next?

Earlier this month, Logic Magazine, a print magazine about technology, hosted a discussion about the past, present, and future of the tech worker movement. This event was co-sponsored by solidarity groups like the Tech Workers Coalition, Coworker.org, the NYC-DSA Tech Action Working Group, and Science for the People.

Among the panelists were Joan Greenbaum, who was involved in organizing tech workers in the mainframe era and was part of Computer People for Peace, and Meredith Whittaker, a research scientist at New York University, co-founder of the AI Now Institute and the Google Open Research group, and one of the organizers of the Google Walkout. Liz Fong-Jones, a Developer Advocate at Google Cloud Platform, was also present; she recently tweeted that she will be leaving the company in February because of Google's lack of leadership in response to the demands made by employees during the Google Walkout in November 2018. Also in attendance were Emma Quail, representing Unite Here, and Patricia Rosa, a Facebook food service worker who was inspired to fight for a union after watching a pregnant friend lose her job because she took one day off for a doctor's appointment.

The discussion was held in New York, hosted by Ben Tarnoff, the co-founder of Logic Magazine. It lasted for almost an hour, after which the Q&A session started. You can see the full discussion at Logic's Facebook page.

The rise of tech workers organizing

In recent years, we have seen tech workers coming together to stand against unjust decisions taken by their companies. We saw tech workers at companies like Google, Amazon, and Microsoft raising their voices against contracts with government agencies like ICE and the Pentagon, which are just "profit-oriented" and can prove harmful to humanity. For instance, there was a huge controversy around Google's Project Maven, which was focused on analyzing drone footage and could have been eventually used to improve drone strikes on the battlefield.
More than 3,000 Google employees signed a petition against this project, which led to Google deciding not to renew its contract with the U.S. Department of Defense in 2019. In December 2018, Google workers launched an industry-wide effort focused on ending forced arbitration, which affects at least 60 million workers in the US alone. In June, Amazon employees demanded that Jeff Bezos stop selling Rekognition, Amazon's facial recognition technology, to law enforcement agencies and discontinue partnerships with companies that work with U.S. Immigration and Customs Enforcement (ICE).

We also saw workers organizing campaigns demanding safer workplaces, free from sexual harassment and gender discrimination, better working conditions, retirement plans, professionalism standards, and fairness in equity compensation. In November, there was a massive Google Walkout with 20,000 Google employees from all over the world protesting against how Google handled sexual harassment cases. This backlash was triggered when it came to light that Google had paid millions of dollars in exit packages to male executives accused of sexual misconduct.

Let's look at some of the highlights from this discussion:

What do these issues, ranging from controversial contracts, workplace issues, and better benefits to a safe, equitable workplace, have to do with one another?

Most companies today are motivated by the profits they make, which also shows in the technology they produce. These technologies benefit a small fraction of users while affecting a larger predictable demographic of people, for instance, black and brown people. Meredith Whittaker remarks, "These companies are acting like parallel states right now." The technologies that they produce have a significant impact over a number of domains that we are not even aware of. Liz Fong-Jones feels that it is also about us as tech workers taking responsibility for what we build.
We are feeding into the profit motive these companies have if we keep participating in building systems that can have bad implications for users, or keep not speaking up for the workers working alongside us. To hold these companies accountable, and to ensure that what workers build is used for good and people are treated fairly, we all need to come together no matter what part of the company we work in. Joan Greenbaum also believes that these types of movements cannot be successful without forming alliances.

Any alliance work between tech workers in different roles?

Emma Quail shared that there have been many collaborations between engineers, tech employees, cafeteria workers, and other service workers in the fights against companies treating their employees differently. These collaborations are important, as tech workers and engineers are much more privileged in these companies. "They have more voice, their job is taken more seriously," said Emma Quail. Patricia Rosa, sharing her experience, said, "When some of the tech workers came to one of our negotiations and spoke on our behalf, the company got nervous, and they finally gave them the contract."

Liz Fong-Jones mentions that the main obstacle to eliminating this discrimination is that employers want to keep their workers separate. As an example, she added, "Google prohibits its cafeteria workers from being on campus when they are not on shift, it prohibits them from holding leadership positions and employee resource groups." These companies resort to these policies because they do not want their "valuable employees" to find out about the working conditions of other workers.

In the last few years, the tech worker movement saw a huge boost in catching the attention of society, but this did not happen overnight. How did we get to this moment?

Liz Fong-Jones attributes the Me Too movement as one of the turning points.
This movement made workers realize that they are not alone and that there are people who share the same concerns. Another thing that Liz Fong-Jones thinks led us to this movement was management coming up with proposals that could have negative implications for people and asking employees to keep secrets. But now tech workers are more informed about what exactly they are building.

In the last few years, tech companies have come under the attention and scrutiny of the public because of the many tech scandals, whether related to data, software, or workplace rights. One of the root causes of this was an endless growth requirement. Meredith Whittaker shares, "Over the last few years, we saw series of relentless embarrassing answers to substantially serious questions. They cannot keep going like this."

What's in the future?

Joan Greenbaum rightly mentions that tech companies should actually "look to work with people, what the industry calls users." They should adopt participatory design instead of user-centered design. Participatory design is basically an approach in which all stakeholders, from employees and partners to local business owners and customers, are involved in the design process.

Meredith Whittaker remarks, "The people who are getting harmed by these technologies are not the people who are going to get a paycheck from these companies. They are not going to check tech power or tech culture unless we learn how to know each other and form alliances that also connect corporate." Once we all come together and form alliances, we will be able to question these companies about the updates and products they are building and learn about their implications. So, the future basically lies in doing our homework, knowing how these companies work, building relationships, and coming together against any unjust decisions by these companies.

Liz Fong-Jones adds, "The Google Walkout was just the beginning.
The labor movement will spread into other companies and also have more visible effects beyond a walkout." Emma Quail believes that companies will need to address issues related to housing, immigration, and rights for people. Patricia Rosa shared that, for the future, we need to work towards spreading awareness among other workers that there are people who care about their rights and how they are being treated at the workplace. If they are aware that there are people to support them, they will not be scared to speak up, as Patricia was when she started her journey.

Some of the questions asked in the Q&A session were:

What's different politically about tech than any other industry?

How was the Google Walkout organized? I was a tech contractor and didn't hear about it until it happened.

Are there any possibilities of creating a single union of all tech workers no matter what their roles are? Is that a desirable far goal?

How can tech workers working in one state relate to workers working internationally?

Watch the full discussion at Logic's Facebook page.

Tech Workers Coalition volunteers talk unionization and solidarity in Silicon Valley

Sally Hubbard on why tech monopolies are bad for everyone: Amazon, Google, and Facebook in focus

How far will Facebook go to fix what it broke: Democracy, Trust, Reality
Guest Contributor
22 Jan 2019
7 min read

Do you need artificial intelligence and machine learning expertise in house?

Developing artificial intelligence expertise is a challenge. There's a huge global demand for practitioners with the right skills and knowledge and a lack of people who can actually deliver what's needed. It's difficult because many of the most talented engineers are being hired by the planet's leading tech companies on salaries that simply aren't realistic for many organizations. Ultimately, you have two options: form an in-house artificial intelligence development team or choose an external software development team or consultant with proven artificial intelligence expertise. Let's take a closer look at each strategy.

Building an in-house AI development team

If you want to develop your own AI capabilities, you will need to bring in strong technical skills in machine learning. Since recruiting experts in this area isn't an easy task, upskilling your current in-house development team may be an option. However, you will need to be confident that your team has the knowledge and attitude to develop those skills. Of course, it's also important to remember that a team building artificial intelligence comprises a range of skills and areas of expertise. If you can see how your team could evolve in that way, you're halfway to solving your problem.

AI experts you need for building a project

Big Data engineers: Before analyzing data, you need to collect, organize, and process it. AI is usually based on big data, so you need engineers who have experience working with structured and unstructured data and can build a secure data platform. They should have sound knowledge of Hadoop, Spark, R, Hive, Pig, and other Big Data technologies.

Data scientists: Data scientists are a vital part of your AI team. They work their magic with data, building the models, investigating, analyzing, and interpreting it. They leverage data mining and other techniques to surface hidden insights and solve business problems.
NLP specialists: A lot of AI projects involve Natural Language Processing, so you will probably need NLP specialists. NLP allows computers to understand and translate human language, serving as a bridge between human communication and machine interpretation.

Machine learning engineers: These specialists utilize machine learning libraries, deploying ML solutions into production. They take care of the maintainability and scalability of data science code.

Computer vision engineers: They specialize in imagery recognition, correlating an image to a particular metric instead of correlating metrics to metrics. For example, computer vision is used for modeling objects or environments (medical image analysis), identification tasks (a species identification system), and process control (industrial robots).

Speech recognition engineers: You will need these experts if you want to build your speech recognition system. Speech recognition can be very useful in telecommunication services, in-car systems, medical documentation, and education. For instance, it is used in language learning for practicing pronunciation.

Partnering with an AI solution provider

If you realize that recruiting and building your own in-house AI team is too difficult and expensive, you can engage with an external AI provider. Such an approach helps companies keep the focus on their core expertise and avoid the headache of recruiting engineers and setting up the team. Also, it allows them to kick off the project much faster and thus gain a competitive advantage.

Factors to consider when choosing an artificial intelligence solution provider

AI engineering experience

Due to the huge popularity of AI these days, many companies claim to be professional AI development providers without practical experience. Hence it's extremely important to do extensive research. Firstly, you should study the portfolio and case studies of the company.
Find out which AI, machine learning, or data science projects your potential vendor worked on and what kind of artificial intelligence solutions the company has delivered. For instance, you may check out these European AI development companies and the products they developed. Also, make sure a provider has experience in the types of machine learning algorithms (supervised, unsupervised, and reinforcement), data structures and algorithms, computer vision, NLP, etc. that are relevant to your project needs.

Expertise in AI technologies

Artificial Intelligence covers a multitude of different technologies, frameworks, and tools. Make sure your external engineering team consists of professional data scientists and data engineers who can solve your business problems. Building the AI team and selecting the necessary skill set might be challenging for businesses that have no internal AI expertise. Therefore, ask a vendor to provide tech experts or delivery managers who will advise you on the team composition and help you hire the right people.

Capacity to scale a team

When choosing a team, you should consider not only your primary needs but also the potential growth of your business. If you expect your company to scale up, you'll need more engineering capacity. Therefore, take into account your partner's ability to ramp up the team in the future. Also, consider factors such as the vendor's employer image and retention rate, since your ability to attract top AI talent and keep them on your project will largely depend on it.

Suitable cooperation model

It is essential to choose an AI company with a cooperation model that fits your business requirements. The most popular cooperation models are Fixed Price, Time and Material, and Dedicated Development Team.
Within the fixed price model, all the requirements and the scope of work are set from the start, and you as a customer need to have them described to the smallest detail, as it will be extremely difficult to make change requests during the project. However, it is not the best option for AI projects, since they involve a lot of R&D and it is difficult to define everything at the initial stage.

The time and material model is best for small projects when you don't need the specialists to be fully dedicated to your project. This is not the best choice for AI development either, as the hourly rates of AI engineers are extremely high and the whole project would cost you a fortune with this type of contract.

In order to add more flexibility yet keep control over the project budget, it is better to choose a dedicated development team model or staff augmentation. It will allow you to change the requirements when needed and have control over your team. With this type of engagement, you will be able to keep the knowledge within your team and develop your AI expertise, as developers will work exclusively for you.

Conclusion

If you have to deal with the challenge of building AI expertise in your company, there are two possible ways to go. First off, you can attract local AI talent and build the expertise in-house. Then you have to assemble a team of data scientists, data engineers, and other specialists depending on your needs. However, developing AI expertise in-house is always time- and cost-consuming, taking into account the shortage of well-qualified machine learning specialists and their superlative salary expectations. The other option is to partner with an AI development vendor and hire an extended team of engineers. In this case, you have to consider a number of factors such as the company's experience in delivering AI solutions, its ability to allocate the necessary resources, its technological expertise, and its capability to satisfy your business requirements.
Author Bio

Romana Gnatyk is Content Marketing Manager at N-IX, passionate about software development. She writes insightful content on various IT topics, including software product development, mobile app development, artificial intelligence, the blockchain, and different technologies.

Researchers introduce a machine learning model where the learning cannot be proved

"All of my engineering teams have a machine learning feature on their roadmap" – Will Ballard talks artificial intelligence in 2019 [Interview]

Apple ups its AI game; promotes John Giannandrea as SVP of machine learning

Guest Contributor
15 Jan 2019
5 min read

How are Mobile apps transforming the healthcare industry?

Mobile App Development has taken over and completely re-written the healthcare industry. According to Healthcare Mobility Solutions reports, the mobile healthcare application market is expected to be worth more than $84 million by the year 2020. These mobile applications are not just limited to use by patients but are also massively used by doctors and nurses. As technology evolves, it simultaneously opens up the possibility of being used in multiple ways. Similar has been the journey of healthcare mobile app development, which has originated from the latest trends in technology and has made its way to being an industry in itself.

The technological trends that have helped build mobile apps for the healthcare industry are:

Blockchain

You probably know blockchain technology, thanks to all the cryptocurrency rage in recent years. The blockchain is basically a peer-to-peer database that keeps a verified record of all transactions, or any other information that one needs to track and have accessible to a large community. The healthcare industry can use a technology that allows it to record the medical history of patients and store it electronically, in an encrypted form that cannot be altered or hacked into. Blockchain succeeds where a lot of health applications fail: in the secure retention of patient data.

The Internet of Things

The Internet of Things (IoT) is all about connectivity. It is a way of interconnecting electronic devices, software, applications, etc., to ensure easy access and management across platforms. The IoT will assist medical professionals in gaining access to valuable patient information so that doctors can monitor the progress of their patients. This makes treatment of the patient easier, and more closely monitored, as doctors can access the patient's current profile anywhere and suggest treatment, medicine, and dosages.

Augmented Reality

From the video gaming industry, Augmented Reality has made its way to the medical sector.
AR refers to the creation of an interactive experience of a real-world environment through the superimposition of computer-generated perceptual information. AR is increasingly used to develop mobile applications that doctors and surgeons can use as a training experience: it simulates a real-world experience of diagnosis and surgery, and by doing so enhances the knowledge, and its practical application, that all doctors must possess. This form of training is not limited by physical resources and can therefore train a large number of medical practitioners simultaneously.

Big Data Analytics

Big Data has the potential to provide comprehensive statistical information that can only be accessed and processed through sophisticated software. Big Data Analytics becomes extremely useful when it comes to managing a hospital's resources and records efficiently. Aside from this, it is used in the development of mobile applications that store all patient data, again eliminating the need for excessive paperwork. This allows medical professionals to focus more on attending to and treating patients, rather than managing databases.

These technological trends have led to the development of a diverse variety of mobile applications used for multiple purposes in the healthcare industry. Listed below are the benefits that mobile apps deploying these trends bring to professionals and patients alike.

Telemedicine

Mobile applications can potentially play a crucial role in making medical services available to the masses. Consider an on-call physician on telemedicine duty: a mobile application allows the physician to be available for a patient consult without having to operate via a PC. This makes doctors more accessible and brings quality treatment to patients quickly.
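Returning to the Blockchain section above: the tamper-evident record keeping it describes can be sketched as a minimal hash chain, where each entry stores the hash of the previous one. This is an illustrative toy with hypothetical field names, not any real EHR product or a full blockchain (there is no networking or consensus layer):

```python
import hashlib
import json

def record_hash(payload: dict) -> str:
    """Deterministically hash a record's contents (canonical JSON)."""
    return hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode("utf-8")
    ).hexdigest()

class RecordChain:
    """Append-only chain of patient records; each entry stores the hash
    of the previous entry, so any later tampering is detectable."""

    def __init__(self):
        self.entries = []

    def append(self, data: dict) -> None:
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {"data": data, "prev": prev}
        entry["hash"] = record_hash({"data": data, "prev": prev})
        self.entries.append(entry)

    def verify(self) -> bool:
        prev = "0" * 64
        for e in self.entries:
            # Both the link to the previous entry and the entry's own
            # hash must still match for the chain to be intact.
            if e["prev"] != prev:
                return False
            if e["hash"] != record_hash({"data": e["data"], "prev": e["prev"]}):
                return False
            prev = e["hash"]
        return True

chain = RecordChain()
chain.append({"patient": "p-001", "note": "blood pressure 120/80"})
chain.append({"patient": "p-001", "note": "prescribed amlodipine"})
assert chain.verify()

# Any edit to an earlier record breaks verification:
chain.entries[0]["data"]["note"] = "blood pressure 150/95"
assert not chain.verify()
```

The point of the sketch is the "cannot be altered" property: changing a stored record invalidates its hash and every link after it, which is what makes the history auditable.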
Enhanced Patient Engagement

There are mobile applications that place all patient data, from past medical history to performance metrics, patient feedback, and changes in treatment patterns and schedules, at the push of a button for the medical professional to consider and act on on the go. Since all data is recorded in real time, doctors can change shifts without having to brief the next doctor on the patient's condition in person; the application has all the data the supervisors or nurses need.

Easy Access to Medical Facilities

A number of mobile applications allow patients to search for medical professionals in their area, read reviews and feedback from other patients, and then make an online appointment if they are satisfied with what they find. Patients can also download and store their medical lab reports, and order medicines online at affordable prices.

Easy Payment of Bills

As in every other sector, mobile applications in healthcare have made monetary transactions extremely easy. Patients and their family members no longer need to spend hours waiting in line to pay bills. They can instantly pick a payment plan and pay immediately, or add reminders to be notified when a bill is due.

It can therefore safely be said that the revolution the healthcare industry is undergoing has worked in favor of all the parties involved: medical professionals, patients, hospital management, and mobile app developers.

Author's Bio

Ritesh Patil is the co-founder of Mobisoft Infotech, which helps startups and enterprises with mobile technology. He's an avid blogger and writes on mobile application development. He has developed innovative mobile applications across fields such as finance, insurance, health, entertainment, productivity, social causes, and education, and has bagged numerous awards for the same.
Social Media – Twitter, LinkedIn

Healthcare Analytics: Logistic Regression to Reduce Patient Readmissions
How IBM Watson is paving the road for Healthcare 3.0
7 Popular Applications of Artificial Intelligence in Healthcare

Guest Contributor
14 Jan 2019
6 min read

Why Retailers need to prioritize eCommerce Automation in 2019

The retail giant Amazon plans to reinvent in-store shopping in much the same way it revolutionized online shopping. Amazon's cashierless stores (3,000 of which you can expect by 2021) give a glimpse into what the future of eCommerce could look like with automation. eCommerce automation involves combining the right eCommerce software with the right processes and the right people to streamline and automate the order lifecycle. This reduces the complexity and redundancy of many tasks that an eCommerce retailer typically faces from the time a customer places an order until the time it is delivered. Let's look at why eCommerce retailers should prioritize automation in 2019.

1. Augmented Customer Experience + Personalization

A PwC study titled "Experience is Everything" suggests that 42% of consumers would pay more for a friendly, welcoming experience. Way back in 2015, Gartner predicted that nearly 50% of companies would be implementing changes in their business model in order to augment customer experience. This is especially true for eCommerce, and automation certainly represents one of these changes. Customization and personalization of services are a huge boost for customer experience: a BCG report revealed that retailers who implemented personalization strategies saw a 6-10% boost in their sales. How can automation help? To start with, you can automate your email marketing campaigns and make them more personalized by adding recommendations, discount codes, and more.

2. Fraud Prevention

The scope for fraud on the Internet is huge. According to the October 2017 Global Fraud Index, Account Takeover fraud cost online retailers a whopping $3.3 billion in Q2 2017 alone. Additionally, the average transaction rate for eCommerce fraud has also been on the rise.
eCommerce retailers have been using a number of fraud prevention tools, such as address verification service, CVN (card verification number) checks, credit history checks, and more, to verify the buyer's identity. An eCommerce tool equipped with machine learning capabilities, such as Shuup, can detect fraudulent activity and effectively run through thousands of checks in the blink of an eye. Automating order handling ensures that these preliminary checks are carried out without fail, in addition to specific checks that assess the riskiness of a particular order. Depending on the nature and scale of the enterprise, different retailers will want to set different thresholds for fraud detection, and eCommerce automation makes that possible. If a medium-risk transaction is detected, the system can be automated to notify the finance department for immediate review, whereas high-risk transactions can be canceled immediately. Automating your eCommerce software processes lets you break the mold of one-size-fits-all coding and make the solution specific to the needs of your organization.

3. Better Customer Service

Customer service and support are an essential part of the buying process. Automated customer support does not necessarily mean your customers will get canned responses to all their queries; it means that common queries can be dealt with more efficiently. Live chats and chatbots have become incredibly popular with eCommerce retailers because these features offer convenience to both the customer and the retailer. The retailer's support staff are not held up with routine inquiries, and the customer gets queries resolved quickly. Timely responses are a huge part of what constitutes a positive customer experience. Live chat can even be used for shopping carts in order to decrease cart abandonment rates.
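The threshold-based routing described under Fraud Prevention above can be sketched roughly as follows. The checks, weights, and cut-offs here are all made-up placeholders for illustration; a real system would use a trained fraud model and retailer-specific thresholds:

```python
def risk_score(order: dict) -> float:
    """Toy fraud-risk score combining a few common checks.
    Weights are illustrative placeholders, not a real fraud model."""
    score = 0.0
    if not order.get("avs_match", True):   # address verification failed
        score += 0.4
    if not order.get("cvn_match", True):   # card verification number mismatch
        score += 0.4
    if order.get("amount", 0) > 1000:      # unusually large order value
        score += 0.2
    return min(score, 1.0)

def route_order(score: float, low: float = 0.3, high: float = 0.8) -> str:
    """Route an order by risk; thresholds are configurable per retailer."""
    if score >= high:
        return "cancel"          # high risk: cancel immediately
    if score >= low:
        return "manual_review"   # medium risk: notify finance for review
    return "fulfill"             # low risk: process automatically

# Failed AVS and CVN checks together push an order over the cancel threshold:
assert route_order(risk_score({"avs_match": False, "cvn_match": False})) == "cancel"
assert route_order(risk_score({"cvn_match": False})) == "manual_review"
assert route_order(risk_score({"amount": 250})) == "fulfill"
```

Exposing `low` and `high` as parameters is what lets different retailers tune how aggressively orders are held for review, as the article describes.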
Automating priority tickets, as well as the follow-up on resolved and unresolved tickets, is another way to automate customer service. With new-generation automated CRM systems and help desk software you can automate the tags used to filter tickets. This saves the customer service rep's time and ensures that priority tickets are resolved quickly. Customer support can thus be made a strategic asset in your overall strategy. An Oracle study suggests that back in 2011, 50% of customers would give a brand at most one week to respond to a query before they stopped doing business with them. You can imagine what those same numbers for 2018 would be.

4. Order Fulfillment

Physical fulfillment of orders is prone to errors, as it requires humans to oversee the warehouse selection. With an automated solution, you can set up order fulfillment to match the warehouse requirements as closely as possible. This ensures that the closest warehouse that has the required item in stock is selected, which helps guarantee timely delivery of the order to the customer. It can also be set up to integrate with your billing software so as to calculate accurate billing/shipping charges, taxes (for out-of-state or overseas shipments), and so on. Automation can also help manage your inventory effectively: product availability is updated automatically with each transaction, be it a return or the addition of a new product. If the stock of a particular in-demand item nears danger levels, your eCommerce software can send an automated email to the supplier asking to replenish its stock ASAP. Automation ensures that the prerequisites to order fulfillment are met for successful processing and delivery of an order.

Challenges with eCommerce automation

The aforementioned benefits are not without some risks, as tends to be true of any evolving concept. One of the most important challenges in making automation work smoothly for eCommerce is data accuracy.
As eCommerce platforms gear up for greater multichannel/omnichannel retailing strategies, automation is certainly going to help them bolster and enhance their workflows, but only if the right checks are in place. Automation still has a long way to go, so for now it might be best to focus on automating tasks that take up a lot of time, such as updating inventory, reserving products for certain customers, and updating customers on their orders. Advanced applications such as fraud detection might still take a few years to be truly 'automated' and free from any need for human review. For now, eCommerce retailers still have a whole lot to look forward to. All things considered, automating the key tasks and functions of your eCommerce platform will impart flexibility, agility, and scope for improved customer experience. Invest wisely in automation solutions for your eCommerce software to stay competitive in the dynamic, unpredictable eCommerce retail scenario.

Author Bio

Fretty Francis works as a Content Marketing Specialist at SoftwareSuggest, an online platform that recommends software solutions to businesses. Her areas of expertise include eCommerce platforms, hotel software, and project management software. In her spare time, she likes to travel and catch up on the latest technologies.

Software developer tops the 100 Best Jobs of 2019 list by U.S. News and World Report
IBM Q System One, IBM's standalone quantum computer unveiled at CES 2019
Announcing 'TypeScript Roadmap' for January 2019 - June 2019
Packt Editorial Staff
02 Jan 2019
4 min read

Is Blockchain a failing trend or can it build a better world? Harish Garg provides his insight [Interview]

In 2018, Blockchain and cryptocurrency exploded across tech. We spoke to Packt author Harish Garg on what he sees as the future of Blockchain in 2019 and beyond. Harish Garg, founder of BignumWorks Software LLP, is a data scientist and lead software developer with 17 years' software industry experience. BignumWorks is an India-based software consultancy that provides consultancy services in software development and technical training. Harish worked for McAfee\Intel for 11+ years. He is an expert in creating data visualizations using R, Python, and web-based visualization libraries. Find all of Harish Garg's books for Packt here.

From early adopters to the enterprise

What do you think was the biggest development in blockchain during 2018?

The biggest development in Blockchain during 2018 was the explosion of Blockchain-based digital currencies. We now have thousands of different coins and projects supported by these coins. 2018 was also the year when Blockchain really captured the imagination of the public at large, beyond just technically savvy early adopters. 2018 also saw first a dramatic rise in the price of digital currencies, especially Bitcoin, and then a similarly dramatic fall in the last half of the year.

Do you think 2019 is the year that enterprise embraces blockchain? Why?

Absolutely. Early adoption of enterprise Blockchain was already underway in 2018. Companies like IBM have already released and matured their Blockchain offerings for enterprises. 2018 also saw the big behemoth of cloud services, Amazon Web Services, launch its own Blockchain solutions. We are on the cusp of wider adoption of Blockchain in enterprises in 2019.

Key Blockchain challenges in 2019

What do you think the principal challenges in deploying blockchain technology are, and how might developers address them in 2019?

Two schools of thought have been emerging about the way blockchain is perceived.
On one side, there are people who pitch Blockchain as some kind of ultimate utopia, the last solution to solve all of humanity's problems. At the other end of the spectrum are people who dismiss Blockchain as another fading trend with nothing substantial to offer. These two schools pose the biggest challenge to the success of Blockchain technology. The truth lies somewhere in between. Developers need to take the job of Blockchain evangelism into their own hands and make sure the right kind of expectations are set for policymakers and customers.

Have the Bitcoin bubble and greater scrutiny from regulators made blockchain projects less feasible, or do they provide a more solid market footing for the technology? Why?

Bitcoin would have invited a lot of scrutiny from regulators and governments even without the bubble. Bitcoin upends the notion of a nation state controlling the supply of money, so different governments are reacting to it with a wide range of actions, ranging from outright bans on using the existing banking system to buy and sell Bitcoin and other digital currencies, to putting legal frameworks in place to let citizens trade in them securely. The biggest fear they have is black money being pumped into digital currencies. With proper KYC procedures, these fears can be addressed. However, governments and financial institutions are also realizing the advantages Blockchain offers in streamlining their banking and financial markets, and are launching pilot projects to adopt Blockchain.

Blockchain and disruption in 2019

Will Ethereum continue to dominate the industry or are there new platforms that you think present a serious challenge? Why?

Ethereum does have an early-mover advantage. However, we know that an early-mover advantage is not such a big moat for new competitors to cross.
Competing and bigger platforms are likely to emerge from the likes of Facebook, Amazon, and IBM that will solve the scalability issues Ethereum faces.

What industries do you think blockchain technology is most likely to disrupt in 2019, and why?

Finance and banking are still the biggest industries that will see an explosion of creative products coming out of the adoption of Blockchain technology. Products for government use are going to be big, especially wherever there is a need for an immutable source of truth, as in the case of land records.

Do you have any other thoughts on the future of blockchain you'd like to share?

We are at a very early stage of Blockchain adoption. It's very hard to predict right now what kind of killer apps will emerge a few years down the line; nobody predicted in 2007 that smartphones would give rise to apps like Uber. The important thing is to have the right mix of optimism and skepticism.

Natasha Mathur
03 Dec 2018
9 min read

Tech Workers Coalition volunteers talk unionization and solidarity in Silicon Valley

In the latest podcast episode of Delete Your Account, Roqayah Chamseddine and Kumars Salehi talked to Ares and Kristen, volunteers with the Tech Workers Coalition (TWC), about how they function and organize to bring social justice and solidarity to the tech industry.

What is the Tech Workers Coalition?

The Tech Workers Coalition is a democratically structured, all-volunteer, worker-led organization of tech and tech-adjacent workers across the US who organize and offer support for activist, civic engagement, and education projects. They primarily work in the Bay Area and Seattle, but they also support and work on initiatives across the United States. While they work largely to defend the rights of tech workers, the organization argues for wider solidarity with existing social and economic justice movements.

Key Takeaways

The podcast discusses the evolution of TWC (from facilitating Google employees in their protest against Google's Pentagon contract to helping Google employees in the "walkout for real change"), the pushback received, TWC's unionizing goal, and their journey going forward.

A brief history of the Tech Workers Coalition

Tech Workers Coalition started with a friendship between Rachel Melendes, a former cafeteria worker, and Matt Schaefer, an engineer. The first meetings, in 2014 and 2015, comprised a few full-time employees at tech companies. These meetings were occasions for discussing and sharing experiences of working in the tech industry in Silicon Valley. It's worth noting that those involved didn't just include engineers: subcontracted workers, cafeteria workers, security guards, and janitors were all involved too. So TWC began life as a forum for discussing workplace issues, such as pay disparity, harassment, and discrimination. However, this forum evolved, with those attending becoming more and more aware that formal worker organization could be a way of achieving a more tangible defense of worker rights in the tech industry.
Kristen points out in the podcast how the 2016 presidential election in the US was "mobilizing" and laid a foundation for TWC in terms of determining where their interests lay. She also described how the ideological optimism of Silicon Valley companies, evidenced in brand values like "connecting people" and "don't be evil", encourages many people to join the tech industry for "naive but well-intentioned reasons." One example presented by Kristen is the 14 December 2016 Trump Tower meeting, where Donald Trump invited top tech leaders, including Tim Cook (CEO, Apple), Jeff Bezos (CEO, Amazon), Larry Page (CEO, Alphabet), and Sheryl Sandberg (COO, Facebook), for a "technology roundup". Kristen highlights that the meeting, seen by some as an opportunity to put forward the Silicon Valley ethos of openness and freedom, didn't actually do so. The acquiescence of these tech leaders to a President widely viewed negatively by many tech workers forced employees to look critically at their treatment in the workplace. For many workers, it was the moment they realized that those at the top of the tech industry weren't on their side. From this point, the TWC has gone from strength to strength. There are now more than 500 people in the Tech Workers Coalition group on Slack who discuss and organize activities to bring more solidarity to the tech industry.

Ideological splits within the tech left

Ares also talks about ideological splits within the community of left-wing activists in the tech industry. For example, when Kristen joined TWC in 2016, many of the conversations focused on questions like "are tech workers actually workers?" and "aren't they at fault for gentrification?" The fact that the debate has largely moved on from these issues says much about how thinking has changed in activist communities.
While in the past activists may have taken a fairly self-flagellating view of, say, gentrification (a view that is arguably unproductive and offers little opportunity for practical action), today activists focus on what tech workers have in common with those doing traditional working-class jobs. Kristen explains: "tech workers aren't the ones benefiting from spending 3 grand a month on a 1 bedroom apartment, even if that's possible for them in a way that is not for many other working people. You can really easily see the people that are really profiting from that are landlords and real estate developers". As Salehi also points out in the episode, solidarity should ultimately move beyond distinctions and qualifiers like income.

TWC's recent efforts in unionizing tech

Google's Walkout for Real Change

A recent example of TWC's efforts to encourage solidarity across the tech industry is its support of Google's Walkout for Real Change. Earlier this month, 20,000 Google employees, along with vendors and contractors, walked out of their respective Google offices to protest discrimination and sexual harassment in the workplace. As part of the walkout, Google employees laid out five demands urging Google to bring about structural changes within the workplace. To facilitate the walkout, TWC organized a retaliation hotline that allowed employees to call in if they faced any retribution for participating. If an employee contacted the hotline, TWC would then support them in taking their complaints to the labor bureau. TWC also provided resources based on their existing networks and contacts with the National Labor Relations Board (NLRB). Read also: Recode Decode #GoogleWalkout interview shows why data and evidence don't always lead to right decisions in even the world's most data-driven company. Ares called the walkout "an escalation in tactic" that would force tech execs to concede to employee demands.
He also described how the walkout caused a "ripple effect": after seeing Google end its forced arbitration policy, Facebook soon followed suit.

Protest against AI drones

It was back in October that Google announced it will not be competing for the Pentagon's cloud-computing contract worth $10 billion, saying the project may conflict with its principles for the ethical use of AI. Google employees had learned earlier this year about Google's decision to provide and develop artificial intelligence for a controversial military pilot program known as Project Maven, which aimed to speed up analysis of drone footage by automatically labeling images of objects and people. Many employees protested against this move by resigning from the company. TWC supported Google employees by launching a petition in April, in addition to the one already in circulation, demanding that Google abandon its work on Maven. The petition also demanded that other major tech companies, such as IBM and Amazon, refuse to work with the U.S. Defense Department.

TWC's unionizing goal and major obstacles faced in the tech industry

On the podcast, Kristen highlights that union density across the tech industry is quite low. While unionization across the industry is one of TWC's goals, it's not their immediate goal. "It depends on the workplace, and what the workers there want to do. We're starting at a place that is comparable to a lot of industries in the 19th century in terms of what shape it could take, it's very nascent. It will take a lot of experimentation", she says. The larger goal of TWC is to challenge established tech power structures and practices in order to better serve the communities that have been impacted negatively by them. "We are stronger when we act together, and there's more power when we come together," says Kristen. "We're the people who keep the system going. Without us, companies won't be able to function".
TWC encourages people to think about their role within a workplace, and how they can develop themselves as leaders there. She adds that unionizing is about working together to change things within the workplace, and if it's done on a large enough scale, "we can see some amount of change".

Issues within the tech industry

Kristen also discusses how issues such as meritocracy, racism, and sexism are still major obstacles for the tech industry. Meritocracy is particularly damaging as it prevents change: while in principle it might make sense, it has become an insidious way of maintaining exclusivity for those with access and experience. Kristen argues that people have been told all their lives that if you try hard you'll succeed, and if you don't, that's because you didn't try hard enough. "People are taught to be okay with their alienation in society," she says. If meritocracy is the system through which exclusivity is maintained, then sexism, sexual harassment, misogyny, and racism are all symptoms of an industry that, for all its optimism and language of change, is actually deeply conservative. Depressingly, there are too many examples to list in full, but one particularly shocking report by The New York Times highlighted sexual misconduct perpetrated by those in senior management. While racism may, at the moment, be slightly less visible in the tech industry, not least because of an astonishing lack of diversity, the internal memo by Mark Luckie, formerly of Facebook, highlighted the ways in which Facebook was "failing its black employees and its black users". What's important from a TWC perspective is that none of these issues can be treated in isolation as individual problems. By organizing workers and providing people with a space in which to share their experiences, the organization can encourage forms of solidarity that break down the barriers that exist across the industry.

What's next for TWC?
Kristen mentions how the future of TWC depends on what happens next, as there are lots of things that could change rather quickly, and points to projects the coalition is already working on in the immediate term. Ares also mentions how he is blown away by how things have panned out in the past couple of years and is optimistic about pushing the tendency of rebellion within the tech industry with TWC. "I've been very positively surprised with how things are going, but it hasn't been without lots of hard work from lots of folks within the coalition and beyond. In that sense it is rewarding to see the coalition grow to where it is now", says Kristen.

Sally Hubbard on why tech monopolies are bad for everyone: Amazon, Google, and Facebook in focus
OK Google, why are you ok with mut(at)ing your ethos for Project DragonFly?