
How-To Tutorials

4 important business intelligence considerations for the rest of 2019

Richard Gall
16 Sep 2019
7 min read
Business intelligence occupies a strange position, often overshadowed by fields like data science and machine learning. But it remains a critical aspect of modern business - indeed, the less attention the world appears to pay to it, the more embedded it becomes in modern businesses. Where analytics and dashboards once felt like a shiny and exciting interruption in our professional lives, today they are merely the norm. But with business intelligence almost baked into the day-to-day routines and activities of many individuals, teams, and organizations, what does this actually mean in practice? For as much as we'd like to think that we're all data-driven now, the reality is that there's much we can do to use data more effectively. Research confirms that data-driven initiatives often fail - so, with that in mind, here's what's important when it comes to business intelligence in 2019.

Popular business intelligence eBooks and videos

Oracle Business Intelligence Enterprise Edition 12c - Second Edition
Microsoft Power BI Quick Start Guide
Implementing Business Intelligence with SQL Server 2019 [Video]
Hands-On Business Intelligence with Qlik Sense
Hands-On Dashboard Development with QlikView

Getting the balance between self-service business intelligence and centralization

Self-service business intelligence is one of the biggest trends to emerge in the last two years. In practice, this means that a diverse range of stakeholders (marketers and product managers, for example) have access to analytics tools. They're no longer purely the preserve of data scientists and analysts. Self-service BI makes a lot of sense in the context of today's data-rich and data-driven environment. The best way to empower team members to actually use data is to remove any bottlenecks (like a centralized data team) and allow them to go directly to the data and tools they need to make decisions. In essence, self-service business intelligence solutions are a step towards the democratization of data.

However, while the notion of democratizing data sounds like a noble cause, the reality is a little more complex. There are a number of different issues that make self-service BI a challenging thing to get right. One of the biggest pain points, for example, is the skill gap of the teams using these tools. Although self-service BI should make using data easy for team members, even the most user-friendly dashboards need a level of data literacy to be useful.

Read next: What are the limits of self-service BI?

Many analytics products are being developed with this problem in mind. But it's still hard to get around - you don't, after all, want to sacrifice the richness of data for simplicity and accessibility. Another problem is the messiness of data itself - and this ultimately points to one of the paradoxes of self-service BI: you need strong alignment - centralization, even - if you're to ensure true democratization.

The answer to all this isn't to get tied up in decentralization or centralization. Instead, what's important is striking a balance between the two. Decentralization needs centralization - there needs to be strong governance and clarity over what data exists, how it's used, and how it's accessed, and someone needs to be accountable for that if decentralized, self-service BI is to actually work.
Read next: How Qlik Sense is driving self-service Business Intelligence

Self-service business intelligence: recommended viewing

Power BI Masterclass - Beginners to Advanced [Video]

Data storytelling that makes an impact

Data storytelling is a phrase that's used too much without real consideration of what it means or how it can be done. Indeed, all too often it's used to refer to stylish graphs and visualizations. And yes, stylish graphs and data visualizations are part of data storytelling, but you can't just expect some nice graphics to communicate in-depth data insights to your colleagues and senior management. To do data storytelling well, you need to establish a clear sense of objectives and goals. By that I'm not referring only to your goals, but also to those of the people around you. It goes without saying that data and insight need context, but what that context should be, exactly, is often the hard part - objectives and aims are perhaps the most straightforward way of establishing that context and ensuring your insights are able to establish the scope of a problem and propose a way forward. Data storytelling can only really make an impact if you are able to strike a balance between centralization and self-service. Stakeholders that use self-service need confidence that everything they need is both available and accurate - and this can only really be ensured by a centralized team of data scientists, architects, and analysts.

Data storytelling: recommended viewing

Data Storytelling with Qlik Sense [Video]
Data Storytelling with Power BI [Video]

The impact of cloud

It's hard to overstate the extent to which cloud is changing the data landscape. Not only is it easier than ever to store and process data, it's also easier to do different things with it. This means it's now possible to run machine learning or artificial intelligence projects with relative ease (the word relative being important, of course). For business intelligence, this means there needs to be a clear strategy that joins together every piece of the puzzle, from data collection to analysis. There needs to be buy-in and input from stakeholders before a solution is purchased - or built - and then the solution needs to be developed with every individual use case properly understood and supported. Indeed, this requires a combination of business acumen, soft skills, and technical expertise. A large amount of this will rest on the shoulders of an organization's technical leadership team, but it's also worth pointing out that those in other departments still have a part to play. If stakeholders are unable to present a clear vision of what their needs and goals are, it's highly likely that the advantages of cloud will pass them by when it comes to business intelligence.

Cloud and business intelligence: recommended viewing

Going beyond Dashboards with IBM Cognos Analytics [Video]

Business intelligence ethics

Ethics has become a huge issue for organizations over the last couple of years. With the Cambridge Analytica scandal placing the spotlight on how companies use customer data, and GDPR forcing organizations to take a new approach to (European) user data, it's undoubtedly the case that ethical considerations have added a new dimension to business intelligence. But what does this actually mean in practice? Ethics manifests itself in numerous ways in business intelligence. Perhaps the most obvious is data collection - do you have the right to use someone's data in a certain way?
Sometimes the law will make it clear. But at other times it will require individuals to exercise judgment and be sensitive to the issues that could arise. There are other ways, too, in which individuals and organizations need to think about ethics. Being data-driven is great, especially if you can approach insight in a way that is actionable and proactive. But at the same time it's vital that business intelligence isn't just seen as a replacement for human intelligence. Indeed, this is true not just in an ethical sense, but also in terms of sound strategic thinking. Business intelligence without human insight and judgment is really just the opposite of intelligence.

Conclusion: business intelligence needs organizational alignment and buy-in

There are many issues that have been slowly emerging in the business intelligence world over the last half-decade. This might make things feel confusing, but in actual fact it underlines the very nature of the challenges organizations, leadership teams, and engineers face when it comes to business intelligence. Essentially, doing business intelligence well requires you - and those around you - to tie all these different elements together. It's certainly not straightforward, but with focus and clarity of thought, it's possible to build a really effective BI program that can fulfil organizational needs well into the future.

How artificial intelligence and machine learning can help us tackle the climate change emergency

Vincy Davis
16 Sep 2019
14 min read
“I don’t want you to be hopeful. I want you to panic. I want you to feel the fear I feel every day. And then I want you to act on changing the climate” - Greta Thunberg

Greta Thunberg is a 16-year-old Swedish schoolgirl who is famously known as a climate change warrior. She has started an international youth movement against climate change and has been nominated as a candidate for the Nobel Peace Prize 2019 for her climate activism. According to a recent report by the Intergovernmental Panel on Climate Change (IPCC), climate change is seen as the top global threat by many countries. The effects of climate change are expected to drive 1 million species to extinction, warns a UN report. The Earth's rising temperatures are fueling longer and hotter heat waves, more frequent droughts, heavier rainfall, and more powerful hurricanes. Antarctica is breaking apart. Indonesia, the world's fourth most populous country, has just decided to shift its capital away from Jakarta because the city is sinking, and Singapore is worried that investments are moving away. Last year, Europe experienced an 'extreme year' for unusual weather events: after a couple of months of extremely cold weather, heat and drought plagued spring and summer, with temperatures well above average in most northern and western areas. The UK Parliament has declared a 'climate change emergency' after a series of intense protests earlier this month. More than 1,200 people were killed across South Asia due to heavy monsoon rains and intense flooding (in some places the worst in nearly 30 years). The Camp Fire, in November 2018, was the deadliest and most destructive wildfire in California's history, causing the deaths of at least 85 people and destroying about 14,000 homes. Australia's most populous state, New South Wales, suffered from an intense drought in 2018. According to a report released by the UN last year, there are "Only 11 Years Left to Prevent Irreversible Damage from Climate Change".

Addressing climate change: how can artificial intelligence (AI) help?

As seen above, the environmental impacts of climate change are clear, and the list is vast and depressing. It is important to address these issues because they affect the workings of natural ecosystems - the pattern of global rainfall, diminishing ice sheets, and other factors on which the human economy and civilization depend. With the help of artificial intelligence (AI), we can become more efficient, or at least slow down the damage caused by climate change. At the recently held ICLR 2019 (International Conference on Learning Representations), Emily Shuckburgh, a climate scientist and deputy head of the Polar Oceans team at the British Antarctic Survey, highlighted the need for actionable information on climate risk. Her talk elaborated on how we can monitor, treat, and find solutions to climate change using machine learning, and on how AI can synthesize and interpolate different datasets within a framework that allows easy interrogation by users and near-real-time ingestion of new data.

According to MIT Technology Review's coverage of climate change, there are three approaches to addressing it: mitigation, navigation (adaptation), and suffering. Technologies generally concentrate on mitigation, but it's high time we gave more focus to the other two approaches. In a catastrophically altered world, it would be necessary to concentrate on adaptation and suffering. The review states that mitigation steps have so far done little to keep fossil fuels in the ground.
Thus it is important for us to learn to adapt to these changes. Building predictive models on masses of data will also give us a better idea of how bad the effects of a disaster can be and help us to visualize the suffering. By implementing artificial intelligence in these approaches, we can not only reduce the causes of climate change but also adapt to it. Using AI, we can make more accurate predictions about the state of the climate, which will help create better climate models. These predictions can be used to identify our biggest vulnerabilities and risk zones, which will help us respond better to the impacts of climate change such as hurricanes, rising sea levels, and higher temperatures. Let's see how artificial intelligence is being used in all three approaches.

Mitigation: Reducing the severity of climate change

Looking at the extreme climatic changes, many researchers have started exploring how AI can step in to reduce the effects of climate change. These include ways to reduce greenhouse gas emissions or enhance the removal of these gases from the atmosphere. With a view to consuming less energy, there has been an active increase in technologies that use energy smartly. One such startup is Verv, an intelligent IoT hub which uses patented AI technology to give users control over their energy usage. This home energy system provides information about your home appliances and other electricity data directly from the mains, which helps to reduce your electricity bills and lower your carbon footprint. Igloo Energy is another system which helps customers use energy efficiently and save money; it uses smart meters to analyse behavioural, property occupancy, and surrounding environmental data in order to lower users' energy consumption. Nnergix is a weather analytics startup focused on the renewable energy industry. It collects weather and energy data from multiple sources across the industry to feed machine learning algorithms that run several analytic solutions, with the main goal of helping any system become more efficient during operations and reduce costs.

Recently, Google announced that, by using artificial intelligence, it has boosted the value of its wind energy by about 20 percent. A neural network is trained on widely available weather forecasts and historical turbine data. The DeepMind system is configured to predict wind power output 36 hours ahead of actual generation. Based on these predictions, the model recommends hourly delivery commitments to the power grid a full day in advance.

Large industrial systems account for 54% of global energy consumption, and this high level of energy consumption is a primary contributor to greenhouse gas emissions. In 2016, Google's DeepMind was able to reduce the energy required to cool Google's data centers by 30%. Initially, the team built a general-purpose learning algorithm which was later developed into a full-fledged AI system with features including continuous monitoring and human override. Just last year, Google put an AI system in charge of keeping its data centers cool. Every five minutes, the AI pulls a snapshot of the data center cooling system from thousands of sensors. This data is fed into deep neural networks, which predict how different choices will affect future energy consumption.
The neural networks are trained to maintain the future PUE (Power Usage Effectiveness) and to predict the future temperature and pressure of the data centre over the next hour, to ensure that any tweaks do not take the data center beyond its operating limits. Google has found that the machine learning systems were able to consistently achieve a 30 percent reduction in the amount of energy used for cooling, the equivalent of a 15 percent reduction in overall PUE. As seen, there are many companies trying to reduce the severity of climate change.

Navigation: Adapting to current conditions

Though there have been brave initiatives to reduce the causes of climate change, they have failed to show any major results. This could be due to the increasing demand for energy resources, which is expected to grow immensely worldwide. It is now necessary to concentrate more on adapting to climate change, as we are in a state where it is almost impossible to undo its effects. Thus, it is better to learn to navigate through this changing climate. A startup in Berlin called GreenAdapt has created AI-based software which can tackle local impacts induced both by gradual changes and by extreme weather events such as storms. It identifies the effects of climate change and proposes adequate adaptation measures. Another startup, Zuli, has a smart plug that reduces energy use. It contains sensors that can estimate energy usage, wirelessly communicate with your smartphone, and accurately sense your location. A firm called Gridcure provides real-time analytics and insights for energy and utilities. It helps power companies recover losses and boost revenue by operating more efficiently, and it also helps them provide better delivery to consumers, big reductions in energy waste, and increased adoption of clean technologies. With mitigation and navigation covered, let's see how firms are working towards more futuristic goals.

Visualization: Predicting the future

It is equally important to build accurate climate models, which will help humans cope with the after-effects of climate change. Climate models are mathematical representations of the Earth's climate system which take into account humidity, temperature, air pressure, wind speed and direction, as well as cloud cover, in order to predict future conditions. This can help in tackling disasters. It's also imperative to keep expanding our information on global climate change, which will help create more accurate models. A modeling startup called Jupiter is trying to improve the accuracy of predictions about climate change. It makes physics-based and AI-powered decisions using data from millions of ground-based and orbital sensors. Another firm, BioCarbon Engineering, plans to use drones which will fly over potentially suitable areas and compile 3D maps, then scatter small containers holding fertilized seeds, nutrients, and moisture gel over the best areas. In this way, 36,000 trees can be planted every day, more cheaply than with other methods. After planting, the drones will continue to monitor the germinating seeds and deliver further nutrients when necessary to ensure their healthy growth. This could help to absorb carbon dioxide from the atmosphere. Another initiative comes from an ETH doctoral student at the Functional Materials Laboratory, who has developed a cooling curtain made of a porous triple-layer membrane as an alternative to electrically powered air conditioning.
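Many of the systems described above follow the same basic pattern: learn a relationship from historical sensor and weather data, then forecast ahead and act on the forecast. The following is a deliberately tiny, self-contained Scala sketch of that pattern - an ordinary least-squares fit on made-up (wind speed, power output) pairs. Real systems such as the ones above use far richer features and deep networks; this is only an illustration of the idea.

```scala
// Toy illustration only: fits a straight line to invented (wind speed, power output)
// pairs and uses it to forecast output at an unseen wind speed.
object ToyWindForecast extends App {
  val history: Seq[(Double, Double)] = Seq(   // (wind speed m/s, power output MW) - made up
    (4.0, 0.4), (6.0, 1.1), (8.0, 2.0), (10.0, 3.2), (12.0, 4.1)
  )
  val (xs, ys) = history.unzip
  val n = history.size.toDouble
  val meanX = xs.sum / n
  val meanY = ys.sum / n

  // Ordinary least squares: slope = cov(x, y) / var(x)
  val slope = xs.zip(ys).map { case (x, y) => (x - meanX) * (y - meanY) }.sum /
              xs.map(x => (x - meanX) * (x - meanX)).sum
  val intercept = meanY - slope * meanX

  def predict(windSpeed: Double): Double = intercept + slope * windSpeed

  println(f"Forecast output at 9 m/s: ${predict(9.0)}%.2f MW")
}
```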
In 2017, Microsoft came up with its 'AI for Earth' initiative, which primarily focuses on climate conservation, biodiversity, and related areas. AI for Earth awards grants to projects that use artificial intelligence to address critical areas that are vital for building a sustainable future. Microsoft is also using its cloud computing service, Azure, to give computing resources to scientists working on environmental sustainability programs. Intel has deployed AI-equipped drones in Costa Rica to construct models of the forest terrain and calculate the amount of carbon being stored based on tree height, health, biomass, and other factors. The collected data about carbon capture can enhance management and conservation efforts, support scientific research projects on forest health and sustainability, and enable many other kinds of applications. IBM's 'Green Horizon' project analyzes environmental data and predicts pollution, as well as testing scenarios that involve pollution-reducing tactics, while IBM's 'Deep Thunder' group works with research centers in Brazil and India to accurately predict flooding and potential mudslides due to severe storms.

As seen above, there are many organizations and companies, ranging from startups to big tech, that have understood the adverse effects of climate change and are taking steps to address them. However, there are certain challenges and limitations acting as a barrier to these systems' success.

What do big tech firms and startups lack?

Though many big tech and influential companies boast of immense contributions to fighting climate change, there have been instances where these firms have entered into lucrative deals with oil companies. Just last year, Amazon, Google and Microsoft struck deals with oil companies to provide cloud, automation, and AI services to them. These deals were reported openly by Gizmodo and yet didn't attract much criticism. This trend of powerful companies venturing into oil businesses, even while knowing the effects of dangerous climate change, is depressing.

Last year, Amazon quietly launched the 'Amazon Sustainability Data Initiative'. It helps researchers store weather observations and forecasts, satellite images, and metrics about oceans and air quality so that they can be used for modeling and analysis, and it encourages organizations to use the data to make decisions that support sustainable development. This year, Amazon expanded its vision by announcing 'Shipment Zero', a pledge to make 50% of Amazon shipments net zero by 2030, with a wider aim of reaching 100% in the future. However, Shipment Zero only commits to net carbon reductions. Recently, Amazon ordered 20,000 diesel vans whose emissions will need to be offset with carbon credits. Offsets can entail forest management policies that displace indigenous communities, and they do nothing to reduce diesel pollution, which disproportionately harms communities of color. Some in the industry expressed disappointment that Amazon's order is for 20,000 diesel vans - not a single electric vehicle. In April, over 4,520 Amazon employees organized against Amazon's continued profiting from climate devastation. They signed an open letter addressed to Jeff Bezos and the Amazon board of directors asking for a company-wide action plan to address climate change and an end to the company's reliance on dirty energy resources.

Recently, Microsoft doubled its internal carbon fee to $15 per metric ton on all carbon emissions.
The funds from this higher fee will maintain Microsoft's carbon neutrality and help meet its sustainability goals. On the other hand, Microsoft is also two years into a seven-year deal, rumored to be worth over a billion dollars, to help Chevron, one of the world's largest oil companies, better extract and distribute oil. Microsoft Azure has also partnered with Equinor, a multinational energy company, to provide data services in a deal worth hundreds of millions of dollars. Instead of profiting from these deals, Microsoft could have taken a stand by ending partnerships with fossil fuel companies that accelerate oil and gas exploration and extraction.

For smaller firms, it is often difficult for a climate-focused conservation startup to survive due to a dearth of finance. Many such organizations are small and relatively weak as they struggle to rise in a sector that attracts little interest and lacks steady financing. Being relatively unknown, startups also find it difficult to market their ideas and convince people to try their systems; they always need a commercial boost to find more takers.

Pitfalls of using artificial intelligence for climate preservation

Though AI has enormous potential to help us create a sustainable future, it is only part of a bigger set of tools and pathways needed to reach the goal. It also comes with its own limitations and side effects. An inability to control malicious AI can cause unexpected outcomes: hackers can use AI to develop smart malware that interferes with early warnings, enables bad actors to control energy, transportation, or other critical systems, and could also give them access to sensitive data. This could result in unexpected outcomes at crucial output points for AI systems. AI bias is another dangerous phenomenon that can produce irrational results in a working system. Bias in an AI system mainly occurs in the data or in the system's algorithmic model, and it may produce incorrect results in its functions and security.

More importantly, we should not rely on artificial intelligence alone to fight the effects of climate change. Our focus should be on working on the causes of climate change and trying to minimize them, starting at an individual level. Governments in every country must also contribute by initiating climate policies that will help their citizens in the long run. One vital task would be to implement quick responses in case of climate emergencies. In the recent case of the Odisha storms, for example, pinpoint forecasting accuracy by India's weather agency helped move millions of people to safe spaces, resulting in minimal casualties.

Next up in climate

Amazon employees plan to walkout for climate change during the Sept 20th Global Climate Strike
Machine learning experts on how we can use machine learning to mitigate and adapt to the changing climate
Now there's a CycleGAN to visualize the effects of climate change. But is this enough to mobilize action?

A new Stuxnet-level vulnerability named Simjacker used to secretly spy over mobile phones in multiple countries for over 2 years: Adaptive Mobile Security reports

Savia Lobo
13 Sep 2019
6 min read
Updated: On September 27, researchers from Security Research Labs (SRLabs) released five key research findings on the extent of Simjacker and on how one can tell whether a SIM is vulnerable to such an exploit.

Yesterday, Adaptive Mobile Security made a breakthrough announcement revealing that a new vulnerability, which the firm calls Simjacker, has been used by attackers to spy on mobile phones. Researchers at Adaptive Mobile Security believe the vulnerability has been exploited for at least the last two years "by a highly sophisticated threat actor in multiple countries, primarily for the purposes of surveillance." They further added that the Simjacker vulnerability "is a huge jump in complexity and sophistication compared to attacks previously seen over mobile core networks. It represents a considerable escalation in the skillset and abilities of attackers seeking to exploit mobile networks."

Also read: 25 million Android devices infected with 'Agent Smith', a new mobile malware

How the Simjacker attack works and why it is a grave threat

In the Simjacker attack, an SMS containing specific spyware-like code is sent to a victim's mobile phone. When received, this SMS instructs the UICC (SIM card) within the phone to 'take over' the mobile phone in order to retrieve and perform sensitive commands. "During the attack, the user is completely unaware that they received the SMS with the Simjacker Attack message, that information was retrieved, and that it was sent outwards in the Data Message SMS - there is no indication in any SMS inbox or outbox," the researchers mention in their official blog post.

(Image source: Adaptive Mobile Security)

The Simjacker attack relies on the S@T (SIMalliance Toolbox, pronounced 'sat') browser software as an execution environment. The S@T browser, an application specified by the SIMalliance, can be installed on different UICCs (SIM cards), including eSIMs. The S@T browser software is quite old and little used; its initial aim was to enable services such as checking your account balance through the SIM card. Its specifications have not been updated since 2009, and it has been superseded by many other technologies since then. The researchers say they have observed the "S@T protocol being used by mobile operators in at least 30 countries whose cumulative population adds up to over a billion people, so a sizable amount of people are potentially affected. It is also highly likely that additional countries have mobile operators that continue to use the technology on specific SIM cards."

Simjacker is a next-generation SMS attack

The Simjacker attack is unique. Previous SMS malware involved sending links to malware; the Simjacker Attack Message, by contrast, carries a complete malware payload - specifically spyware with instructions for the SIM card to execute the attack. The Simjacker attack can also do more than simply track the user's location and personal data. By modifying the attack message, the attacker can instruct the UICC to execute a range of other attacks, because the same method gives an attacker complete access to the STK command set, including commands such as launch browser, send data, set up a call, and much more.

Also read: Using deep learning methods to detect malware in Android Applications

The researchers used these commands in their own tests and were able to make targeted handsets open web browsers, ring other phones, send text messages, and so on.
They further highlighted other purposes this attack could be used for:

- Mis-information (e.g. by sending SMS/MMS messages with attacker-controlled content)
- Fraud (e.g. by dialling premium-rate numbers)
- Espionage (in addition to the location-retrieval attack, an attacked device could function as a listening device by ringing a number)
- Malware spreading (by forcing a browser to open a web page with malware located on it)
- Denial of service (e.g. by disabling the SIM card)
- Information retrieval (retrieving other information such as language, radio type, battery level, etc.)

The researchers highlight another benefit of the Simjacker attack for the attackers: many of its attacks work independently of handset type, because the vulnerability depends on the software on the UICC and not on the device.

Adaptive Mobile says that behind the Simjacker attack is a "specific private company that works with governments to monitor individuals." This company also has extensive access to the SS7 and Diameter core network. The researchers said that in one country roughly 100-150 specific individual phone numbers were being targeted per day via Simjacker attacks. A few phone numbers had also been tracked a hundred times over a 7-day period, suggesting they belonged to high-value targets.

(Image source: Adaptive Mobile Security)

The researchers added that they have been "working with our own mobile operator customers to block these attacks, and we are grateful for their assistance in helping detect this activity." They said they have also communicated the existence of this vulnerability to the GSM Association, the trade body representing the mobile operator community. The vulnerability has been managed through the GSMA CVD program, allowing information to be shared throughout the mobile community. "Information was also shared to the SIM alliance, a trade body representing the main SIM Card/UICC manufacturers and they have made new security recommendations for the S@T Browser technology," the researchers said.

"The Simjacker exploit represents a huge, nearly Stuxnet-like, leap in complexity from previous SMS or SS7/Diameter attacks, and shows us that the range and possibility of attacks on core networks are more complex than we could have imagined in the past," the blog mentions. The Adaptive Mobile Security team will present more details about the Simjacker attack at the Virus Bulletin Conference, London, on 3rd October 2019.

https://twitter.com/drogersuk/status/1172194836985913344
https://twitter.com/campuscodi/status/1172141255322689537

To know more about the Simjacker attack in detail, read Adaptive Mobile's official blog post.

SRLabs researchers release protection tools against Simjacker and other SIM-based attacks

On September 27, researchers from Security Research Labs (SRLabs) released five key findings on the extent of Simjacker and on how one can tell whether a SIM is vulnerable to such an exploit. The researchers have highlighted five key findings in their research report and also provided an FAQ for users to implement the necessary measures.
Following are the five key research findings the SRLabs researchers mention:

- Around 6% of the 800 SIM cards tested in recent years were vulnerable to Simjacker.
- A second, previously unreported, vulnerability affects an additional 3.5% of SIM cards.
- The SIMtester tool provides a simple way to check any SIM card for both vulnerabilities (and for a range of other issues reported in 2013).
- The SnoopSnitch Android app has warned users about binary SMS attacks, including Simjacker, since 2014. (Attack alerting requires a rooted Android phone with a Qualcomm chipset.)
- A few Simjacker attacks have been reported since 2016 by the thousands of SnoopSnitch users who actively contribute data.

To know more about these key findings by the SRLabs researchers in detail, read the official report.

Other interesting news in security

Endpoint protection, hardening, and containment strategies for ransomware attack protection: CISA recommended FireEye report highlights
Intel's DDIO and RDMA enabled microprocessors vulnerable to new NetCAT attack
Wikipedia hit by massive DDoS (Distributed Denial of Service) attack; goes offline in many countries

The CAP Theorem in practice: The consistency vs. availability trade-off in distributed databases

Richard Gall
12 Sep 2019
7 min read
When you choose a database you are making a design decision. One of the best frameworks for understanding what this means in practice is the CAP Theorem.

What is the CAP Theorem?

The CAP Theorem, developed by computer scientist Eric Brewer in the late nineties, states that databases can only ever fulfil two out of three elements:

- Consistency - reads are always up to date, which means any client making a request to the database will get the same view of data.
- Availability - database requests always receive a response (when valid).
- Partition tolerance - a network fault doesn't prevent messaging between nodes.

In the context of distributed (NoSQL) databases, this means there is always going to be a trade-off between consistency and availability. This is because distributed systems are necessarily partition tolerant (i.e. it simply wouldn't be a distributed database if it weren't partition tolerant).

Read next: Different types of NoSQL databases and when to use them

How do you use the CAP Theorem when making database decisions?

Although the CAP Theorem can feel quite abstract, it has practical, real-world consequences. From both a technical and a business perspective the trade-offs will lead you to some very important questions. There are no right answers. Ultimately it will all be about the context in which your database is operating, the needs of the business, and the expectations and needs of users. You will have to consider things like:

- Is it important to avoid throwing up errors in the client? Or are we willing to sacrifice the visible user experience to ensure consistency?
- Is consistency actually an important part of the user's experience?
- Or can we do what we want with a relational database and avoid the need for partition tolerance altogether?

As you can see, these are ultimately user experience questions. To properly understand them, you need to be sensitive to the overall goals of the project and, as said above, the context in which your database solution is operating. (For example, is it powering an internal analytics dashboard, or is it supporting a widely used external-facing website or application?) And, as the final bullet point highlights, it's always worth considering whether the consistency vs. availability trade-off should matter at all. Avoid the temptation to think a complex database solution will always be better when a simple, more traditional solution will do the job. Of course, it's important to note that a system that isn't partition tolerant has a single point of failure, which introduces the potential for unreliability.

Prioritizing consistency in a distributed database

It's possible to get into a lot of technical detail when talking about consistency and availability, but at a really fundamental level the principle is straightforward: you need consistency (or what is called a CP database) if the data in the database must always be up to date and aligned, even in the instance of a network failure (e.g. when the partitioned nodes are unable to communicate with one another for whatever reason). You would prioritize consistency in use cases where multiple clients must have the same view of the data - for example, when dealing with financial or personal information, where you need confidence that the data you are looking at is up to date even when the network is unreliable or fails.
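In practice, the trade-off is often tuned per operation rather than fixed for a whole database. Cassandra (which appears below under availability-first databases), for example, lets every statement carry its own consistency level: quorum reads lean towards consistency, single-replica writes lean towards availability. Here is a rough sketch of that idea in Scala, assuming the DataStax Java driver 4.x API, a locally reachable cluster, and a hypothetical `shop` keyspace:

```scala
import com.datastax.oss.driver.api.core.CqlSession
import com.datastax.oss.driver.api.core.DefaultConsistencyLevel
import com.datastax.oss.driver.api.core.cql.SimpleStatement

object TunableConsistency extends App {
  // Connects to a local Cassandra node using the driver's defaults.
  val session = CqlSession.builder().build()

  // A read that requires a quorum of replicas to agree: leans towards consistency.
  val balanceRead = SimpleStatement
    .newInstance("SELECT balance FROM shop.accounts WHERE id = 42")
    .setConsistencyLevel(DefaultConsistencyLevel.QUORUM)

  // A write acknowledged by a single replica: leans towards availability and low latency.
  val clickWrite = SimpleStatement
    .newInstance("INSERT INTO shop.clicks (id, page) VALUES (uuid(), '/pricing')")
    .setConsistencyLevel(DefaultConsistencyLevel.ONE)

  session.execute(balanceRead)
  session.execute(clickWrite)
  session.close()
}
```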
Examples of CP databases

MongoDB
- Learning MongoDB 4 [Video]
- MongoDB 4 Quick Start Guide
- MongoDB, Express, Angular, and Node.js Fundamentals

Redis
- Build Complex Express Sites with Redis and Socket.io [Video]
- Learning Redis

HBase
- Learn by Example: HBase - The Hadoop Database [Video]
- HBase Design Patterns

Prioritizing availability in a distributed database

Availability is essential when data accumulation is a priority. Think here of things like behavioral data or user preferences. In scenarios like these, you will want to capture as much information as possible about what a user or customer is doing, but it isn't critical that the database is constantly up to date. It simply needs to be accessible and available even when network connections aren't working. The growing demand for offline application use is another reason why you might use a NoSQL database that prioritizes availability over consistency.

Examples of AP databases

Cassandra
- Learn Apache Cassandra in Just 2 Hours [Video]
- Mastering Apache Cassandra 3.x - Third Edition

DynamoDB
- Managed NoSQL Database In The Cloud - Amazon AWS DynamoDB [Video]
- Hands-On Amazon DynamoDB for Developers [Video]

Limitations and criticisms of the CAP Theorem

It's worth noting that the CAP Theorem can pose problems. As with most things, in truth, things are a little more complicated. Even Eric Brewer is circumspect about the theorem, especially with regard to what we now expect from distributed databases. Back in 2012, twelve years after he first put his theorem into the world, he wrote that:

"Although designers still need to choose between consistency and availability when partitions are present, there is an incredible range of flexibility for handling partitions and recovering from them. The modern CAP goal should be to maximize combinations of consistency and availability that make sense for the specific application. Such an approach incorporates plans for operation during a partition and for recovery afterward, thus helping designers think about CAP beyond its historically perceived limitations."

So, this means we must think about the trade-off between consistency and availability as a balancing act, rather than a binary design decision. Elsewhere, there have been more robust criticisms of the CAP Theorem. Software engineer Martin Kleppmann, for example, pleaded "Please stop calling databases CP or AP" in 2015. In a blog post he argues that the CAP Theorem only works if you adhere to specific definitions of consistency, availability, and partition tolerance. "If your use of words matches the precise definitions of the proof, then the CAP theorem applies to you," he writes. "But if you're using some other notion of consistency or availability, you can't expect the CAP theorem to still apply." The consequences of this are much like those described in Brewer's piece from 2012: you need to take a nuanced approach to database trade-offs in which you think them through on your own terms and against your own needs.

The PACELC Theorem

One development of this line of argument is an extension to the CAP Theorem: the PACELC Theorem. This moves beyond thinking only about consistency and availability and instead places an emphasis on the trade-off between consistency and latency. The PACELC Theorem builds on the CAP Theorem (the 'PAC') and adds an else (the 'E').
What this means is that, while you need to choose between availability and consistency if communication between partitions has failed in a distributed system, there is still going to be a trade-off between consistency and latency (the 'LC') even when things are running properly and there are no network issues.

Conclusion: Learn to align context with technical specs

Although the CAP Theorem might seem somewhat outdated, it is valuable in providing a way to think about database architecture design. It not only forces engineers and architects to ask questions about what they want from the technologies they use, but it also forces them to think carefully about the requirements of a given project. What are the business goals? What are user expectations? The PACELC Theorem builds on CAP in an effective way. However, the most important thing about these frameworks is how they help you to think about your problems. Of course the CAP Theorem has limitations. Because it abstracts a problem, it is necessarily going to lack nuance; there are going to be things it simplifies. It's important, as Kleppmann reminds us, to be mindful of these nuances. But at the same time, we shouldn't let an obsession with nuance and detail allow us to miss the bigger picture.

Endpoint protection, hardening, and containment strategies for ransomware attack protection: CISA recommended FireEye report Highlights

Savia Lobo
12 Sep 2019
8 min read
Last week, the Cybersecurity and Infrastructure Security Agency (CISA) shared some strategies to help users and organizations prevent, mitigate, and recover from ransomware attacks. They said, "The Cybersecurity and Infrastructure Security Agency (CISA) has observed an increase in ransomware attacks across the Nation. Helping organizations protect themselves from ransomware is a chief priority for CISA." They have also advised that those attacked by ransomware should report immediately to CISA, a local FBI field office, or a Secret Service field office. Of the three resources shared, the first two provide general awareness of what ransomware is and why it is a major threat, along with mitigations and much more. The third resource is a FireEye report on ransomware protection and containment strategies.

Also read: Vulnerabilities in the Picture Transfer Protocol (PTP) allows researchers to inject ransomware in Canon's DSLR camera

CISA INSIGHTS and best practices to prevent ransomware

As part of its first "CISA INSIGHTS" product, CISA has set out three simple steps organizations can take to manage their cybersecurity risk. CISA advises users to take necessary precautionary steps such as backing up the entire system offline, keeping systems updated and patched, updating security solutions, and more. Users who have been affected by ransomware should contact CISA or the FBI immediately, work with an experienced advisor to help recover from the attack, and isolate the infected systems and phase their return to operations. Further, CISA also tells users to practice good cyber hygiene: back up, update, whitelist apps, limit privileges, and use multi-factor authentication. Users should develop containment strategies that make it difficult for bad actors to extract information, review disaster recovery procedures, validate goals with executives, and much more.

The CISA team has also suggested certain best practices organizations should employ to stay safe from a ransomware attack. Organizations should restrict permissions to install and run software applications, and apply the principle of "least privilege" to all systems and services, thus limiting the ability of ransomware to spread. They should also use application whitelisting to allow only approved programs to run on a network, and configure all firewalls to block access to known malicious IP addresses. Organizations should enable strong spam filters to prevent phishing emails from reaching end users and authenticate inbound email to prevent spoofing, and they should scan all incoming and outgoing email to detect threats and filter executable files from reaching end users.

Read the entire CISA INSIGHTS document to learn more about the various ransomware outbreak strategies in detail.

Also read: 'City Power Johannesburg' hit by a ransomware attack that encrypted all its databases, applications and network

FireEye report on ransomware protection and containment strategies

As a third resource, CISA shared a FireEye report titled "Ransomware Protection and Containment Strategies: Practical Guidance for Endpoint Protection, Hardening, and Containment". In this whitepaper, FireEye discusses different steps organizations can proactively take to harden their environment against the downstream impact of a ransomware event.
These recommendations can also help organizations prioritize the most important steps required to contain and minimize the impact of a ransomware event after it occurs. The FireEye report points out that ransomware is typically deployed across an environment in two ways. The first is manual propagation, by a threat actor who has penetrated an environment and holds administrator-level privileges broadly across it: encryptors are run manually on targeted systems through Windows batch files, Microsoft Group Policy Objects, and existing software deployment tools used by the victim organization. The second is automated propagation, where credentials or Windows tokens are extracted directly from disk or memory and used to build trust relationships between systems through Windows Management Instrumentation, SMB, or PsExec, binding systems together and executing payloads; attackers also automate brute-force attacks and exploit unpatched methods such as BlueKeep and EternalBlue. "While the scope of recommendations contained within this document is not all-encompassing, they represent the most practical controls for endpoint containment and protection from a ransomware outbreak," FireEye researchers wrote. To combat these two deployment techniques, the FireEye researchers suggest enforcement measures which can limit the capability of a ransomware or malware variant to impact a large scope of systems within an environment.

The FireEye report covers several technical recommendations to help organizations mitigate the risk of, and contain, ransomware events. Some of these include:

RDP hardening

Remote Desktop Protocol (RDP) is a common method used by malicious actors to remotely connect to systems and move laterally from the perimeter onto a larger scope of systems for deploying malware. Organizations should proactively scan their public IP address ranges to identify systems with RDP (TCP/3389) and other protocols (e.g. SMB, TCP/445) open to the internet. RDP and SMB should not be directly exposed to ingress and egress access to/from the internet. Other measures that organizations can take include:

Enforcing multi-factor authentication

Organizations can either integrate a third-party multi-factor authentication technology or leverage a Remote Desktop Gateway and Azure Multi-Factor Authentication Server using RADIUS.

Leveraging Network Level Authentication (NLA)

Network Level Authentication (NLA) provides an extra layer of pre-authentication before a connection is established. It is also useful for protecting against brute-force attacks, which mostly target open internet-facing RDP servers.

Reducing the exposure of privileged and service accounts

For ransomware deployment throughout an environment, both privileged and service account credentials are commonly used for lateral movement and mass propagation. Without a thorough investigation, it may be difficult to determine the specific credentials being used by a ransomware variant for connectivity within an environment.

Privileged account and service account logon restrictions

Accounts with privileged access throughout an environment should not be used on standard workstations and laptops, but rather from designated systems (e.g. Privileged Access Workstations (PAWs)) that reside in restricted and protected VLANs and tiers. Explicit privileged accounts should be defined for each tier and only used within the designated tier.
The recommendations for restricting the scope of access for privileged accounts are based on Microsoft's guidance for securing privileged access. As a quick containment measure, consider blocking any accounts with privileged access from being able to log in (remotely or locally) to standard workstations, laptops, and common access servers (e.g. virtualized desktop infrastructure). If a service account is only required on a single endpoint to run a specific service, it can be further restricted so that it can only be used on a predefined list of endpoints.

Protected Users security group

With the "Protected Users" security group for privileged accounts, an organization can minimize various risk factors and common exploitation methods that expose privileged accounts on endpoints. Starting with Microsoft Windows 8.1 and Microsoft Windows Server 2012 R2 (and above), the "Protected Users" security group was introduced to manage credential exposure within an environment. Members of this group automatically have specific protections applied to their accounts, including:

- The Kerberos ticket granting ticket (TGT) expires after 4 hours, rather than the normal 10-hour default setting.
- No NTLM hash for the account is stored in LSASS, since only Kerberos authentication is used (NTLM authentication is disabled for the account).
- Cached credentials are blocked; a Domain Controller must be available to authenticate the account.
- WDigest authentication is disabled for the account, regardless of an endpoint's applied policy settings.
- DES and RC4 can't be used for Kerberos pre-authentication (Server 2012 R2 or higher); Kerberos with AES encryption will be enforced instead.
- Accounts cannot be used for either constrained or unconstrained delegation (equivalent to enforcing the "Account is sensitive and cannot be delegated" setting in Active Directory Users and Computers).

Cleartext password protections

Organizations should also try to minimize the exposure of credentials and tokens in memory on endpoints. On older Windows operating systems, cleartext passwords are stored in memory (LSASS), primarily to support WDigest authentication. WDigest should be explicitly disabled on all Windows endpoints where it is not disabled by default. WDigest authentication is disabled by default in Windows 8.1+ and Windows Server 2012 R2+. Starting with Windows 7 and Windows Server 2008 R2, after installing Microsoft Security Advisory KB2871997, WDigest authentication can be configured either by modifying the registry or by using the "Microsoft Security Guide" Group Policy template from the Microsoft Security Compliance Toolkit.
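As a hedged illustration of the registry route mentioned above: the UseLogonCredential value described in the KB2871997 guidance can be set to 0 so LSASS no longer caches cleartext WDigest credentials. In practice you would more likely apply this through Group Policy or a one-line reg/PowerShell command; for consistency with the other examples in this roundup, the sketch below shells out to reg.exe from Scala. Verify the key against your own Windows security baseline and test on non-production machines first.

```scala
import scala.sys.process._

object DisableWDigest extends App {
  // Sets UseLogonCredential = 0 under the WDigest provider key, per the widely
  // documented KB2871997 hardening guidance. Requires an elevated (administrator)
  // prompt on Windows; confirm against your own baseline before rolling out.
  val cmd = Seq(
    "reg", "add",
    "HKLM\\SYSTEM\\CurrentControlSet\\Control\\SecurityProviders\\WDigest",
    "/v", "UseLogonCredential",
    "/t", "REG_DWORD",
    "/d", "0",
    "/f"
  )
  val exitCode = cmd.!   // runs reg.exe and returns its exit code
  println(s"reg add exited with code $exitCode")
}
```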
To implement these and other ransomware protection and containment strategies, read the FireEye report.

Other interesting news in cybersecurity

Wikipedia hit by massive DDoS (Distributed Denial of Service) attack; goes offline in many countries
Exim patches a major security bug found in all versions that left millions of Exim servers vulnerable to security attacks
CircleCI reports of a security breach and malicious database in a third-party vendor account

Different types of NoSQL databases and when to use them

Richard Gall
10 Sep 2019
8 min read
Why NoSQL databases?

The popularity of NoSQL databases over the last decade or so has been driven by an explosion of data. Before what's commonly described as 'the big data revolution', relational databases were the norm - these are databases that contain structured data. Structured data can only be structured if it is based on an existing schema that defines the relationships (hence 'relational') between the data inside the database. However, with the vast quantities of data that are now available to just about every business with an internet connection, relational databases simply aren't equipped to handle the complexity and scale of large datasets.

Why not SQL databases?

There are a couple of reasons for this. Not only can the defined schemas that every relational database requires undermine the richness and integrity of the data you're working with; relational databases are also hard to scale. Relational databases can only scale vertically, not horizontally. That's fine to a certain extent, but when you start getting high volumes of data - such as when millions of people use a web application, for example - things get really slow and you need more processing power. You can do this by upgrading your hardware, but that isn't really sustainable. By scaling out, as you can with NoSQL databases, you can use a distributed network of computers to handle data. That gives you more speed and more flexibility. This isn't to say that relational and SQL databases have had their day. They still fulfil many use cases. The difference is that NoSQL offers far greater power and control for data-intensive use cases. Indeed, using a NoSQL database when SQL will do is only going to add more complexity to something that just doesn't need it.

Seven NoSQL Databases in a Week

Different types of NoSQL databases and when to use them

So, now we've looked at why NoSQL databases have grown in popularity in recent years, let's dig into some of the different options available. There are a huge number of NoSQL databases out there - some of them open source, some premium products - many of them built for very different purposes. Broadly speaking, there are four different models of NoSQL database:

- Key-value pair-based databases
- Column-based databases
- Document-oriented databases
- Graph databases

Let's take a look at these four models, how they're different from one another, and some examples of the product options for each.

Key-value pair-based NoSQL database management systems

Key/value pair-based NoSQL databases store data in, as you might expect, pairs of keys and values. Data is stored with a matching key - keys have no relation or structure (so keys could be height, age, or hair color, for example).

When should you use a key/value pair-based NoSQL DBMS?

Key/value pair-based NoSQL databases are the most basic type of NoSQL database. They're useful for storing fairly basic information, like details about a customer.

Which key/value pair-based DBMS should you use?

There are a number of different key/value pair databases. The most popular is Redis. Redis is incredibly fast and very flexible in terms of the languages and tools it can be used with. It can be used for a wide variety of purposes - one of the reasons high-profile organizations, including Verizon, Atlassian, and Samsung, use it. It's also open source, with enterprise options available for users with significant requirements.

Redis 4.x Cookbook

Other options include Memcached and Ehcache.
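To make the key/value model concrete, here is a minimal sketch using the Jedis client for Redis from Scala. It assumes a Redis server on localhost:6379 and the Jedis library on the classpath; the key names are invented for illustration.

```scala
import redis.clients.jedis.Jedis

object CustomerCache extends App {
  val jedis = new Jedis("localhost", 6379)  // assumes a local Redis server

  // Values are looked up purely by key; there is no schema relating keys to each other.
  jedis.set("customer:42:name", "Ada Lovelace")
  jedis.set("customer:42:plan", "premium")

  println(jedis.get("customer:42:name"))    // -> Ada Lovelace
  jedis.close()
}
```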
As well as those, there are a number of other multi-model options (which will crop up later, no doubt) such as Amazon DynamoDB, Microsoft's Cosmos DB, and OrientDB.

Hands-On Amazon DynamoDB for Developers [Video]
RDS PostgreSQL and DynamoDB CRUD: AWS with Python and Boto3 [Video]

Column-based NoSQL database management systems

Column-based databases separate data into discrete columns. Instead of using rows - whereby the row ID is the main key - column-based database systems flip things around to make the data the main key. By using columns you can gain much greater speed when querying data. Although it's true that querying a whole row of data would take longer in a column-based DBMS, the use cases for column-based databases mean you probably won't be doing this. Instead, you'll be querying a specific part of the data rather than the whole row.

When should you use a column-based NoSQL DBMS?

Column-based systems are most appropriate for big data and for instances where data is relatively simple and consistent (they don't handle volatility particularly well).

Which column-based NoSQL DBMS should you use?

The most popular column-based DBMS is Cassandra. The software prides itself on its performance, boasting 100% availability thanks to having no single point of failure, and offering impressive scalability at a good price. Cassandra's popularity speaks for itself - Cassandra is used by 40% of the Fortune 100.

Mastering Apache Cassandra 3.x - Third Edition
Learn Apache Cassandra in Just 2 Hours [Video]

There are other options available, such as HBase and Cosmos DB.

HBase High Performance Cookbook

Document-oriented NoSQL database management systems

Document-oriented NoSQL systems are very similar to key/value pair database management systems. The only difference is that the value paired with a key is stored as a document. Each document is self-contained, which means no schema is required - giving you a significant degree of flexibility over the data you have. For software developers this is essential - it's for this reason that document-oriented databases such as MongoDB and CouchDB are useful components of the full-stack development tool chain. Some search platforms, such as Elasticsearch, use mechanisms similar to standard document-oriented systems - so they could be considered part of the same family of database management systems.

When should you use a document-oriented DBMS?

Document-oriented databases can help power many different types of websites and applications - from stores to content systems. However, the flexibility of document-oriented systems means they are not built for complex queries.

Which document-oriented DBMS should you use?

The leader in this space is MongoDB. With an amazing 40 million downloads (and apparently 30,000 more every single day), it's clear that MongoDB is a cornerstone of the NoSQL database revolution.

MongoDB 4 Quick Start Guide
MongoDB Administrator's Guide
MongoDB Cookbook - Second Edition

There are other options as well as MongoDB - these include CouchDB, CouchBase, DynamoDB and Cosmos DB.

Learning Azure Cosmos DB
Guide to NoSQL with Azure Cosmos DB
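To illustrate the schema-free document model, here is a small sketch using MongoDB's Java sync driver from Scala. It assumes a MongoDB instance on localhost:27017 and the driver on the classpath; the database, collection, and field names are invented for the example.

```scala
import com.mongodb.client.MongoClients
import org.bson.Document
import java.util.Arrays

object ProductCatalogue extends App {
  val client = MongoClients.create("mongodb://localhost:27017")
  val products = client.getDatabase("store").getCollection("products")

  // Each document is self-contained: no schema has to be declared up front, and
  // different documents in the same collection can carry different fields.
  products.insertOne(
    new Document("name", "Mechanical keyboard")
      .append("price", 89.99)
      .append("tags", Arrays.asList("peripherals", "office"))
  )

  val found = products.find(new Document("name", "Mechanical keyboard")).first()
  println(found.toJson)
  client.close()
}
```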
Graph-based NoSQL database management systems

The final type of NoSQL database is graph-based. The notable distinction about graph-based NoSQL databases is that they contain the relationships between different data. As a result, graph databases look quite different to any of the other databases above - they store data as nodes, with the 'edges' of the nodes describing their relationship to other nodes.

Graph databases, compared to relational databases, are multidimensional in nature. They display not just basic relationships between tables and data, but more complex and multifaceted ones.

When should you use a graph database?

Because graph databases contain the relationships between a set of data (customers, products, price, etc.) they can be used to build and model networks. This makes graph databases extremely useful for applications ranging from fraud detection to smart homes to search.

Which graph database should you use?

The world's most popular graph database is Neo4j. It's purpose-built for data sets that contain strong relationships and connections. Widely used in industry by companies such as eBay and Walmart, it has established its reputation as one of the world's best NoSQL database products. Back in 2015, Packt's Data Scientist demonstrated how he used Neo4j to build a graph application. Read more.

Learning Neo4j 3.x [Video]

Exploring Graph Algorithms with Neo4j [Video]

NoSQL databases are the future - but know when to use the right one for the job

Although NoSQL databases will remain a fixture in the engineering world, SQL databases will always be around. This is an important point - when it comes to databases, using the right tool for the job is essential. It's a valuable exercise to explore a range of options and get to know how they work - sometimes the difference might just be a personal preference about usability. And that's fine - you need to be productive, after all. But what's ultimately most essential is having a clear sense of what you're trying to accomplish, and choosing the database based on your fundamental needs.

Is Scala 3.0 a new language altogether? Martin Odersky, its designer, says “yes and no”

Bhagyashree R
10 Sep 2019
6 min read
At Scala Days Lausanne 2019 in July, Martin Odersky, the lead designer of Scala, gave a tour of the upcoming major version, Scala 3.0. He talked about the roadmap to Scala 3.0, its new features, how its situation is different from Python 2 vs 3, and much more. Roadmap to Scala 3.0 Odersky announced that “Scala 3.0 has almost arrived” since all the features are fleshed out, with implementations in the latest releases of Dotty, the next generation compiler for Scala. The team plans to go into feature freeze and release Scala 3.0 M1 in fall this year. Following that the team will focus on stabilization, complete SIP process, and write specs and user docs. They will also work on community build, compatibility, and migration tasks. All these tasks will take about a year, so we can expect Scala 3.0 release in fall 2020. Scala 2.13 was released in June this year. It was shipped with redesigned collections, updated futures implementation, language changes including literal types, partial unification on by default, by-name implicits, macro annotations, among others. The team is also working simultaneously on its next release, Scala 2.14. Its main focus will be to ease out the migration process from Scala 2 to 3 by defining migration tools, shim libraries, targeted deprecations, and more. What’s new in this major release There is a whole lot of improvements coming in Scala 3.0, some of which Odersky discussed in his talk: Scala will drop the ‘new’ keyword: Starting with Scala 3.0, you will be able to omit ‘new’ from almost all instance creations. With this change, developers will no longer have to define a case class just to get nice constructor calls. Also, this will prevent accidental infinite loops in the cases when ‘apply’ has the same arguments as the constructor. Top-level definitions: In Scala 3.0, top-level definitions will be added as a replacement for package objects. This is because only one package object definition is allowed per package. Also, a trait or class defined in the package object is different from the one defined in the package, which can lead to unexpected behavior. Redesigned enumeration support: Previously, Scala did not provide a very straightforward way to define enums. With this update, developers will have a simple way to define new types with a finite number of values or constructions. They will also be able to add parameters and define fields and methods. Union types: In previous versions, union types can be defined with the help of constructs such as Either or subtyping hierarchies, but these constructs are bulkier. Adding union types to the language will fix Scala’s least upper bounds problem and provide added modelling power. Extension methods: With extension methods, you can define methods that can be used infix without any boilerplate. These will essentially replace implicit classes. Delegates: Implicit is a “bedrock of programming” in Scala. However, they suffer from several limitations. Odersky calls implicit conversions “recipe for disaster” because they tend to interact very badly with each other and add too much implicitness. Delegates will be their simpler and safer alternative. Functions everywhere: In Scala, functions and methods are two different things. While methods are members of classes and objects, functions are objects themselves. Until now, methods were quite powerful as compared to functions. They are defined by properties like dependent, polymorphic, and implicit. With Scala 3.0, these properties will be associated with functions as well. 
Recent discussions regarding the updates in Scala 3.0 A minimal alternative for scala-reflect and TypeTag Scala 3.0 will drop support for ‘scala-reflect’ and ‘TypeTag’. Also, there hasn't been much discussion about its alternative. However, some developers believe that it is an important feature and is currently in use by many projects including Spark and doobie. Explaining the reason behind dropping the support, a SIP committee member, wrote on the discussion forum, “The goal in Scala 3 is what we had before scala-reflect. For use-cases where you only need an 80% solution, you should be able to accomplish that with straight-up Java reflection. If you need more, TASTY can provide you the basics. However, we don’t think a 100% solution is something folks need, and it’s unclear if there should be a “core” implementation that is not 100%.” Odersky shared that Scala 3.0 has quoted.Type as an alternative to TypeTag. He commented, “Scala 3 has the quoted package, with quoted.Expr as a representation of expressions and quoted.Type as a representation of types. quoted.Type essentially replaces TypeTag. It does not have the same API but has similar functionality. It should be easier to use since it integrates well with quoted terms and pattern matching.” Follow the discussion on Scala Contributors. Python-style indentation (for now experimental) Last month, Odersky proposed to bring indentation based syntax in Scala while also supporting the brace-based. This is because when it was first created, most of the languages used braces. However, with time indentation-based syntax has actually become the conventional syntax. Listing the reasons behind this change, Odersky wrote, The most widely taught language is now (or will be soon, in any case) Python, which is indentation based. Other popular functional languages are also indentation based (e..g Haskell, F#, Elm, Agda, Idris). Documentation and configuration files have shifted from HTML and XML to markdown and yaml, which are both indentation based. So by now indentation is very natural, even obvious, to developers. There's a chance that anything else will increasingly be considered "crufty" Odersky on whether Scala 3.0 is a new language Odersky answers this with both yes and no. Yes, because Scala 3.0 will include several language changes including feature removals. The introduced new constructs will improve user experience and on-boarding dramatically. There will also be a need to rewrite current Scala books to reflect the recent developments. No, because it will still be Scala and all core constructs will still be the same. He concludes, "Between yes and no, I think the fairest answer is to say it is really a process. Scala 3 keeps most constructs of Scala 2.13, alongside the new ones. Some constructs like old implicits and so on will be phased out in the 3.x release train. So, that requires some temporary duplication in the language, but the end result should be a more compact and regular language.” Comparing with Python 2 and 3, Odersky believes that Scala's situation is better because of static typing and binary compatibility. The current version of Dotty can be linked with Scala 2.12 or 2.13 files. He shared that in the future, it will be possible to have a Dotty library module that can then be used by both Scala 2 and 3 modules. Read also: Core Python team confirms sunsetting Python 2 on January 1, 2020 Watch Odersky’s talk to know more in detail. 
https://www.youtube.com/watch?v=_Rnrx2lo9cw&list=PLLMLOC3WM2r460iOm_Hx1lk6NkZb8Pj6A Other news in programming Implementing memory management with Golang’s garbage collector Golang 1.13 module mirror, index, and Checksum database are now production-ready Why Perl 6 is considering a name change?  

Unreal Engine 4.23 releases with major new features like Chaos, Virtual Production, improvement in real-time ray tracing and more

Vincy Davis
09 Sep 2019
5 min read
Last week, Epic released the stable version of Unreal Engine 4.23 with a whopping 192 improvements. The major features include beta varieties like Chaos - Destruction, Multi-Bounce Reflection fallback in Real-Time Ray Tracing, Virtual Texturing, Unreal Insights, HoloLens 2 native support, Niagara improvements and many more. Unreal Engine 4.23 will no longer support iOS 10, as iOS 11 is now the minimum required version. What’s new in Unreal Engine 4.23? Chaos - Destruction Labelled as “Unreal Engine's new high-performance physics and destruction system” Chaos is available in beta for users to attain cinematic-quality visuals in real-time scenes. It also supports high level artist control over content creation and destruction. https://youtu.be/fnuWG2I2QCY Chaos supports many distinct characteristics like- Geometry Collections: It is a new type of asset in Unreal for short-lived objects. The Geometry assets can be built using one or more Static Meshes. It offers flexibility to the artist on choosing what to simulate, how to organize and author the destruction. Fracturing: A Geometry Collection can be broken into pieces either individually, or by applying one pattern across multiple pieces using the Fracturing tools. Clustering: Sub-fracturing is used by artists to increase optimization. Every sub-fracture is an extra level added to the Geometry Collection. The Chaos system keeps track of the extra levels and stores the information in a Cluster, to be controlled by the artist. Fields: It can be used to control simulation and other attributes of the Geometry Collection. Fields enable users to vary the mass, make something static, to make the corner more breakable than the middle, and others. Unreal Insights Currently in beta, Unreal Insights enable developers to collect and analyze data about Unreal Engine's behavior in a fixed way. The Trace System API system is one of its components and is used to collect information from runtime systems consistently. Another component of Unreal Insights is called the Unreal Insights Tool. It supplies interactive visualization of data through the Analysis API. For in-depth details about Unreal Insights and other features, you can also check out the first preview release of Unreal Engine 4.23. Virtual Production Pipeline Improvements Unreal Engine 4.23 explores advancements in virtual production pipeline by improving virtually scout environments and compose shots by connecting live broadcast elements with digital representations and more. In-Camera VFX: With improvements in-Camera VFX, users can achieve final shots live on set by combining real-world actors and props with Unreal Engine environment backgrounds. VR Scouting for Filmmakers: The new VR Scouting tools can be used by filmmakers to navigate and interact with the virtual world in VR. Controllers and settings can also be customized in Blueprints,rather than rebuilding the engine in C++. Live Link Datatypes and UX Improvements: The Live Link Plugin be used to drive character animation, camera, lights, and basic 3D transforms dynamically from other applications and data sources in the production pipeline. Other improvements include save and load presets for Live Link setups, better status indicators to show the current Live Link sources, and more. Remote Control over HTTP: Unreal Engine 4.23 users can send commands to Unreal Engine and Unreal Editor remotely over HTTP. This makes it possible for users to create customized web user interfaces to trigger changes in the project's content. 
Read Also: Epic releases Unreal Engine 4.22, focuses on adding “photorealism in real-time environments” Real-Time Ray tracing Improvements Performance and Stability Expanded DirectX 12 Support Improved Denoiser quality Increased Ray Traced Global Illumination (RTGI) quality Additional Geometry and Material Support Landscape Terrain Hierarchical Instanced Static Meshes (HISM) and Instanced Static Meshes (ISM) Procedural Meshes Transmission with SubSurface Materials World Position Offset (WPO) support for Landscape and Skeletal Mesh geometries Multi-Bounce Reflection Fallback Unreal Engine 4.23 provides improved support for multi-bounce Ray Traced Reflections (RTR) by using Reflection Captures. This will increase the performance of all types of intra-reflections. Virtual Texturing The beta version of Virtual Texturing in Unreal Engine 4.23 enables users to create and use large textures for a lower and more constant memory footprint at runtime. Streaming Virtual Texturing: The Streaming Virtual Texturing uses the Virtual Texture assets to present an option to stream textures from disk rather than the existing Mip-based streaming. It minimizes the texture memory overhead and increases performance when using very large textures. Runtime Virtual Texturing: The Runtime Virtual Texturing avails a Runtime Virtual Texture asset. It can be used to supply shading data over large areas, thus making it suitable for Landscape shading. Unreal Engine 4.23 also presents new features like Skin Weight Profiles, Animation Streaming, Dynamic Animation Graphs, Open Sound Control, Sequencer Curve Editor Improvements, and more. As expected, users love the new features in Unreal Engine 4.23, especially Chaos. https://twitter.com/rista__m/status/1170608746692673537 https://twitter.com/jayakri59101140/status/1169553133518782464 https://twitter.com/NoisestormMusic/status/1169303013149806595 To know about the full updates in Unreal Engine 4.23, users can head over to the Unreal Engine blog. Other news in Game Development Japanese Anime studio Khara is switching its primary 3D CG tools to Blender Following Epic Games, Ubisoft joins Blender Development fund; adopts Blender as its main DCC tool Epic Games grants Blender $1.2 million in cash to improve the quality of their software development projects

5 pitfalls of React Hooks you should avoid - Kent C. Dodds

Sugandha Lahoti
09 Sep 2019
7 min read
The React community first introduced Hooks, back in October 2018 as a JavaScript function to allow using React without classes. The idea was simple - With the help of Hooks, you will be able to “hook into” or use React state and other React features from function components. In February, React 16.8 released with the stable implementation of Hooks. As much as Hooks are popular, there are certain pitfalls which developers should avoid when they are learning and adopting React Hooks. In his talk, “React Hook Pitfalls” at React Rally 2019 (August 22-23 2019), Kent C. Dodds talks about 5 common pitfalls of React Hooks and how to avoid/fix them. Kent is a world renowned speaker, maintainer and contributor of hundreds of popular npm packages. He's actively involved in the open source community of React and general JavaScript ecosystem. He’s also the creator of react-testing-library which provides simple and complete React DOM testing utilities that encourage good testing practices. Tl;dr Problem: Starting without a good foundation Solution: Read the React Hooks docs and the FAQ Problem: Not using (or ignoring) the ESLint plugin Solution: Install, use, and follow the ESLint plugin Problem: Thinking in Lifecycles Solution: Don't think about Lifecycles, think about synchronizing side effects to state Problem: Overthinking performance Solution: React is fast by default and so research before applying performance optimizations pre-maturely Problem: Overthinking the testing of React hooks Solution: Avoid testing ‘implementation details’ of the component. Pitfall #1 Starting without a good foundation Often React developers begin coding without reading the documentation and that leads to a number of issues and small problems. Kent recommends developers to start by reading the React Hooks documentation and the FAQ section thoroughly. He jokingly adds, “Once you read the frequently asked questions, you can ask the infrequently asked questions. And then maybe those will get in the docs, too. In fact, you can make a pull request and put it in yourself.” Pitfall #2: Not using or (ignoring) the ESLint plugin The ESLint plugin is the official plugin built by the React team. It has two rules: "rules of hooks" and "exhaustive deps." The default recommended configuration of these rules is to set "rules of hooks" to an error, and the "exhaustive deps" to a warning. The linter plugin enforces these rules automatically. The two “Rules of Hooks” are: Don’t call Hooks inside loops, conditions, or nested functions Instead, always use Hooks at the top level of your React function. By following this rule, you ensure that Hooks are called in the same order each time a component renders. Only Call Hooks from React Functions Don’t call Hooks from regular JavaScript functions. Instead, you can either call Hooks from React function components or call them from custom Hooks. Kent agrees that sometimes the rule is incapable of performing static analysis on your code properly due to limitations of ESLint. “I believe”, he says, “ this is why it's recommended to set the exhaustive deps rule to "warn" instead of "error." When this happens, the plugin will tell you so in the warning. He recommends  developers should restructure their code to avoid that warning. The solution Kent offers for this pitfall is to Install, follow, and use the ESLint plugin. The ESLint plugin, he says will not only catch easily missable bugs, but it will also teach you things about your code and hooks in the process. 
Pitfall #3: Thinking in Lifecycles In Hooks the components are declarative. Kent says that this feature allows you to stop thinking about "when things should happen in the lifecycle of the component" (which doesn't matter that much) and more about "when things should happen in relation to state changes" (which matters much more.) With React Hooks, he adds, you're not thinking about component Lifecycles, instead you're thinking about synchronizing the state of the side-effects with the state of the application. This idea is difficult for React developers to grasp initially, however once you do it, he adds, you will naturally experience fewer bugs in your apps thanks to the design of the API. https://twitter.com/ryanflorence/status/1125041041063665666 Solution: Think about synchronizing side effects to state, rather than lifecycle methods. Pitfall #4: Overthinking performance Kent says that even though it's really important to be considerate of performance, you should also think about your code complexity. If your code is complex, you can't give people the great features they're looking for, as you will be spending all your time, dealing with the complexity of your code. He adds, "unnecessary re-renders" are not necessarily bad for performance. Just because a component re-renders, doesn't mean the DOM will get updated (updating the DOM can be slow). React does a great job at optimizing itself; it’s fast by default. For this, he mentions. “If your app's unnecessary re-renders are causing your app to be slow, first investigate why renders are slow. If rendering your app is so slow that a few extra re-renders produces a noticeable slow-down, then you'll likely still have performance problems when you hit "necessary re-renders." Once you fix what's making the render slow, you may find that unnecessary re-renders aren't causing problems for you anymore.” If still unnecessary re-renders are causing you performance problems, then you can unpack the built-in performance optimization APIs like React.memo, React.useMemo, and React.useCallback. More information on this on Kent’s blogpost on useMemo and useCallback. Solution: React is fast by default and so research before applying performance optimizations pre-maturely; profile your app and then optimize it. Pitfall #5: Overthinking the testing of React Hooks Kent says, that people are often concerned that they need to rewrite their tests along with all of their components when they refactor to hooks from class components. He explains, “Whether your component is implemented via Hooks or as a class, it is an implementation detail of the component. Therefore, if your test is written in such a way that reveals that, then refactoring your component to hooks will naturally cause your test to break.” He adds, “But the end-user doesn't care about whether your components are written with hooks or classes. They just care about being able to interact with what those components render to the screen. So if your tests interact with what's being rendered, then it doesn't matter how that stuff gets rendered to the screen, it'll all work whether you're using classes or hooks.” So, to avoid this pitfall, Kent’s recommendation is that you write tests that will work irrespective of whether you're using classes or hook. Before you upgrade to Hooks, start writing your tests free of implementation detail and your refactored hooks can be validated by the tests that you've written for your classes. 
The more your tests resemble the way your software is used, the more confidence they can give you. In review: Read the docs and the FAQ. Install, use and follow the ESLint plugin. Think about synchronizing side effects to state. Profile your app and then optimize it. Avoid testing implementation details. Watch the full talk on YouTube. https://www.youtube.com/watch?v=VIRcX2X7EUk Read more about React #Reactgate forces React leaders to confront community’s toxic culture head on React.js: why you should learn the front end JavaScript library and how to get started Ionic React RC is now out!

Developers from the Swift for TensorFlow project propose adding first-class differentiable programming to Swift

Bhagyashree R
09 Sep 2019
5 min read
After working for over 1.5 years on the Differentiable Programming Mega-Proposal, Richard Wei, a developer at Google Brain, and his team submitted the proposal on the Swift Evolution forum on Thursday last week. This proposal aims to “push Swift's capabilities to the next level in numerics and machine learning” by introducing differentiable programming as a new language feature in Swift. It is a part of the Swift for TensorFlow project under which the team is integrating TensorFlow directly into the language to offer developers a next-generation platform for machine learning. What is differentiable programming With the increasing sophistication in deep learning models and the introduction of modern deep learning frameworks, many researchers have started to realize that building neural networks is very similar to programming. Yann LeCun, VP and Chief AI Scientist at Facebook, calls differentiable programming “a little more than a rebranding of the modern collection Deep Learning techniques, the same way Deep Learning was a rebranding of the modern incarnations of neural nets with more than two layers.” He compares it with regular programming, with the only difference that the resulting programs are “parameterized, automatically differentiated, and trainable/optimizable.” Many also say that differentiable programming is a different name for automatic differentiation, a collection of techniques to numerically evaluate the derivative of a function. It can be seen as a new programming paradigm in which programs can be differentiated throughout. Check out the paper “Demystifying Differentiable Programming: Shift/Reset the Penultimate Backpropagation” to get a better understanding of differentiable programming. Why differential programming is proposed in Swift Swift is an expressive, high-performance language, which makes it a perfect candidate for numerical applications. According to the proposal authors, first-class support for differentiable programming in Swift will allow safe and powerful machine learning development. The authors also believe that this is a “big step towards high-level numerical computing support.” With this proposal, they aim to make Swift a “real contender in the numerical computing and machine learning landscape.” Here are some of the advantages of adding first-class support for differentiable programming in Swift: Better language coverage: First-class differentiable programming support will enable differentiation to work smoothly with other Swift features. This will allow developers to code normally without being restricted to a subset of Swift. Enable extensibility: This will provide developers an “extensible differentiable programming system.” They will be able to create custom differentiation APIs by leveraging primitive operators defined in the standard library and supported by the type system. Static warnings and errors: This will enable the compiler to statically identify the functions that cannot be differentiated or will give a zero derivative. It will then be able to give a non-differentiability error or warning. This will improve productivity by making common runtime errors in machine learning directly debuggable without library boundaries. Some of the components that will be added in Swift under this proposal are: The Differentiable protocol: This is a standard library protocol that will generalize all data structures that can be a parameter or result of a differentiable function. 
The @differentiable declaration attribute: This will be used to mark all the function-like declarations as differentiable. The @differentiable function types: This is a subtype of normal function types with a different runtime representation and calling convention. Differentiable function types will have differentiable parameters and results. Differential operators: These are the core differentiation APIs that take ‘@differentiable’ functions as inputs and return derivative functions or compute derivative values. @differentiating and @transposing attributes: These attributes are for declaring custom derivative function for some other function declaration. This proposal sparked a discussion on Hacker News. Many developers were excited about bringing differentiable programming support in the Swift core. A user commented, “This is actually huge. I saw a proof of concept of something like this in Haskell a few years back, but it's amazing it see it (probably) making it into the core of a mainstream language. This may let them capture a large chunk of the ML market from Python - and hopefully, greatly improve ML APIs while they're at it.” Some felt that a library could have served the purpose. “I don't see why a well-written library could not serve the same purpose. It seems like a lot of cruft. I doubt, for example, Python would ever consider adding this and it's the de facto language that would benefit the most from something like this - due to the existing tools and communities. It just seems so narrow and not at the same level of abstraction that languages typically sit at. I could see the language supporting higher-level functionality so a library could do this without a bunch of extra work (such as by some reflection),” a user added. Users also discussed another effort that goes along the lines of this project: Julia Zygote, which is a working prototype for source-to-source automatic differentiation. A user commented, “Yup, work is continuing apace with Julia’s next-gen Zygote project. Also, from the GP’s thought about applications beyond DL, my favorite examples so far are for model-based RL and Neural ODEs.” To know more in detail, check out the proposal: Differentiable Programming Mega-Proposal. Other news in programming Why Perl 6 is considering a name change? The Julia team shares its finalized release process with the community TypeScript 3.6 releases with stricter generators, new functions in TypeScript playground, better Unicode support for identifiers, and more

What can you expect at NeurIPS 2019?

Sugandha Lahoti
06 Sep 2019
5 min read
Popular machine learning conference NeurIPS 2019 (Conference on Neural Information Processing Systems) will be held from Sunday, December 8 through Saturday, December 14 at the Vancouver Convention Center. The conference invites papers, tutorials, and submissions on cross-disciplinary research where machine learning methods are being used in other fields, as well as on methods and ideas from other fields being applied to ML.

NeurIPS 2019 accepted papers

Yesterday, the conference published the list of accepted papers. A total of 1,429 papers have been selected. Submissions opened on May 1 across a variety of topics such as Algorithms, Applications, Data Implementations, Neuroscience and Cognitive Science, Optimization, Probabilistic Methods, Reinforcement Learning and Planning, and Theory. (The full list of Subject Areas is available here.) This year at NeurIPS 2019, authors of accepted submissions were required to prepare either a 3-minute video, a PDF of slides summarizing the paper, or a PDF of the poster used at the conference. This was done to make NeurIPS content accessible to those unable to attend the conference. NeurIPS 2019 also introduced a mandatory abstract submission deadline, a week before final submissions are due. Only a submission with a full abstract was allowed to have the full paper uploaded. Authors were also asked to answer questions from the Reproducibility Checklist during the submission process.

NeurIPS 2019 tutorial program

NeurIPS also invites experts to present tutorials on topics that are of interest to a sizable portion of the NeurIPS community and that are different from the ones already presented at other ML conferences like ICML or ICLR. The conference looked for tutorial speakers who cover topics beyond their own research in a comprehensive manner that encompasses multiple perspectives.

The tutorial chairs for NeurIPS 2019 are Danielle Belgrave and Alice Oh. They initially compiled a list based on the last few years' publications, workshops, and tutorials presented at NeurIPS and at related venues. They asked colleagues for recommendations and conducted independent research. In reviewing the potential candidates, the chairs read papers to understand their expertise and watched their videos to appreciate their style of delivery. The list of candidates was emailed to the General Chair, Diversity & Inclusion Chairs, and the rest of the Organizing Committee for their comments on the shortlist. Following a few adjustments based on their input, the speakers were selected. A total of nine tutorials have been selected for NeurIPS 2019:

Deep Learning with Bayesian Principles - Emtiyaz Khan
Efficient Processing of Deep Neural Network: from Algorithms to Hardware Architectures - Vivienne Sze
Human Behavior Modeling with Machine Learning: Opportunities and Challenges - Nuria Oliver, Albert Ali Salah
Interpretable Comparison of Distributions and Models - Wittawat Jitkrittum, Dougal Sutherland, Arthur Gretton
Language Generation: Neural Modeling and Imitation Learning - Kyunghyun Cho, Hal Daume III
Machine Learning for Computational Biology and Health - Anna Goldenberg, Barbara Engelhardt
Reinforcement Learning: Past, Present, and Future Perspectives - Katja Hofmann
Representation Learning and Fairness - Moustapha Cisse, Sanmi Koyejo
Synthetic Control - Alberto Abadie, Vishal Misra, Devavrat Shah

NeurIPS 2019 Workshops

NeurIPS Workshops are primarily used for discussion of work in progress and future directions.
This time the number of Workshop Chairs doubled, from two to four; selected chairs are Jenn Wortman Vaughan, Marzyeh Ghassemi, Shakir Mohamed, and Bob Williamson. However, the number of workshop submissions went down from 140 in 2018 to 111 in 2019. Of these 111 submissions, 51 workshops were selected. The full list of selected Workshops is available here.  The NeurIPS 2019 chair committee introduced new guidelines, expectations, and selection criteria for the Workshops. This time workshops had an important focus on the nature of the problem, intellectual excitement of the topic, diversity, and inclusion, quality of proposed invited speakers, organizational experience and ability of the team and more.  The Workshop Program Committee consisted of 37 reviewers with each workshop proposal assigned to two reviewers. The reviewer committee included more senior researchers who have been involved with the NeurIPS community. Reviewers were asked to provide a summary and overall rating for each workshop, a detailed list of pros and cons, and specific ratings for each of the new criteria. After all reviews were submitted, each proposal was assigned to two of the four chair committee members. The chair members looked through assigned proposals and their reviews to form an educated assessment of the pros and cons of each. Finally, the entire chair held a meeting to discuss every submitted proposal to make decisions.  You can check more details about the conference on the NeurIPS website. As always keep checking this space for more content about the conference. In the meanwhile, you can read our previous year coverage: NeurIPS Invited Talk: Reproducible, Reusable, and Robust Reinforcement Learning NeurIPS 2018: How machine learning experts can work with policymakers to make good tech decisions [Invited Talk] NeurIPS 2018: Rethinking transparency and accountability in machine learning NeurIPS 2018: Developments in machine learning through the lens of Counterfactual Inference [Tutorial] Accountability and algorithmic bias: Why diversity and inclusion matters [NeurIPS Invited Talk]

Google is circumventing GDPR, reveals Brave's investigation for the Authorized Buyers ad business case

Bhagyashree R
06 Sep 2019
6 min read
Last year, Dr. Johnny Ryan, the Chief Policy & Industry Relations Officer at Brave, filed a complaint against Google’s DoubleClick/Authorized Buyers ad business with the Irish Data Protection Commission (DPC). New evidence produced by Brave reveals that Google is circumventing GDPR and also undermining its own data protection measures. Brave calls Google’s Push Pages a GDPR workaround Brave’s new evidence rebuts some of Google’s claims regarding its DoubleClick/Authorized Buyers system, the world’s largest real-time advertising auction house. Google says that it prohibits companies that use its real-time bidding (RTB) ad system “from joining data they receive from the Cookie Matching Service.” In September last year, Google announced that it has removed encrypted cookie IDs and list names from bid requests with buyers in its Authorized Buyers marketplace. Brave’s research, however, found otherwise, “Brave’s new evidence reveals that Google allowed not only one additional party, but many, to match with Google identifiers. The evidence further reveals that Google allowed multiple parties to match their identifiers for the data subject with each other.” When you visit a website that has Google ads embedded on its web pages, Google will run a real-time bidding ad auction to determine which advertiser will get to display its ads. For this, it uses Push Pages, which is the mechanism in question here. Brave hired Zach Edwards, the co-founder of digital analytics startup Victory Medium, and MetaX, a company that audits data supply chains, to investigate and analyze a log of Dr. Ryan’s web browsing. The research revealed that Google's Push Pages can essentially be used as a workaround for user IDs. Google shares a ‘google_push’ identifier with the participating companies to identify a user. Brave says that the problem here is that the identifier that was shared was common to multiple companies. This means that these companies could have cross-referenced what they learned about the user from Google with each other. Used by more than 8.4 million websites, Google's DoubleClick/Authorized Buyers broadcasts personal data of users to 2000+ companies. This data includes the category of what a user is reading, which can reveal their political views, sexual orientation, religious beliefs, as well as their locations. There are also unique ID codes that are specific to a user that can let companies uniquely identify a user. All this information can give these companies a way to keep tabs on what users are “reading, watching, and listening to online.” Brave calls Google’s RTB data protection policies “weak” as they ask these companies to self-regulate. Google does not have much control over what these companies do with the data once broadcast. “Its policy requires only that the thousands of companies that Google shares peoples’ sensitive data with monitor their own compliance, and judge for themselves what they should do,” Brave wrote. A Google spokesperson, as a response to this news, told Forbes, “We do not serve personalised ads or send bid requests to bidders without user consent. The Irish DPC — as Google's lead DPA — and the UK ICO are already looking into real-time bidding in order to assess its compliance with GDPR. We welcome that work and are co-operating in full." 
Users recommend starting an “information campaign” instead of a penalty that will hardly affect the big tech This news triggered a discussion on Hacker News where users talked about the implications of RTB and what strict actions the EU can take to protect user privacy. A user explained, "So, let's say you're an online retailer, and you have Google IDs for your customers. You probably have some useful and sensitive customer information, like names, emails, addresses, and purchase histories. In order to better target your ads, you could participate in one of these exchanges, so that you can use the information you receive to suggest products that are as relevant as possible to each customer. To participate, you send all this sensitive information, along with a Google ID, and receive similar information from other retailers, online services, video games, banks, credit card providers, insurers, mortgage brokers, service providers, and more! And now you know what sort of vehicles your customers drive, how much they make, whether they're married, how many kids they have, which websites they browse, etc. So useful! And not only do you get all these juicy private details, but you've also shared your customers sensitive purchase history with anyone else who is connected to the exchange." Others said that a penalty is not going to deter Google. "The whole penalty system is quite silly. The fines destroy small companies who are the ones struggling to comply, and do little more than offer extremely gentle pokes on the wrist for megacorps that have relatively unlimited resources available for complete compliance, if they actually wanted to comply." Users suggested that the EU should instead start an information campaign. "EU should ignore the fines this time and start an "information campaign" regarding behavior of Google and others. I bet that hurts Google 10 times more." Some also said that not just Google but the RTB participants should also be held responsible. "Because what Google is doing is not dissimilar to how any other RTB participant is acting, saying this is a Google workaround seems disingenuous." With this case, Brave has launched a full-fledged campaign that aims to “reform the multi-billion dollar RTB industry spans sixteen EU countries.” To achieve this goal it has collaborated with several privacy NGOs and academics including the Open Rights Group, Dr. Michael Veale of the Turing Institute, among others. In other news, a Bloomberg report reveals that Google and other internet companies have recently asked for an amendment to the California Consumer Privacy Act, which will be enacted in 2020. The law currently limits how digital advertising companies collect and make money from user data. The amendments proposed include approval for collecting user data for targeted advertising, using the collected data from websites for their own analysis, and many others. Read the Bloomberg report to know more in detail. Other news in Data Facebook content moderators work in filthy, stressful conditions and experience emotional trauma daily, reports The Verge GDPR complaint in EU claim billions of personal data leaked via online advertising bids European Union fined Google 1.49 billion euros for antitrust violations in online advertising  

Key skills every database programmer should have

Sugandha Lahoti
05 Sep 2019
7 min read
According to Robert Half Technology's 2019 IT salary report, 'Database programmer' is one of the 13 most in-demand tech jobs for 2019. For an entry-level programmer, the average salary is $98,250, which goes up to $167,750 for a seasoned expert. A typical database programmer is responsible for designing, developing, testing, deploying, and maintaining databases. In this article, we list the critical tech skills essential to database programmers.

#1 Ability to perform Data Modelling

The first step is to learn to model the data. In data modeling, you create a conceptual model of how data items relate to each other. In order to efficiently plan a database design, you should know the organization you are designing the database for. Data models describe real-world entities such as 'customer', 'service', and 'product', and the relations between these entities. Data models provide an abstraction for the relations in the database. They aid programmers in modeling business requirements and in translating business requirements into relations. They are also used for exchanging information between the developers and business owners. During the design phase, the database developer should pay great attention to the underlying design principles, run a benchmark stack to ensure performance, and validate user requirements. They should also avoid pitfalls such as data redundancy, null saturation, and tight coupling.

#2 Know a database programming language, preferably SQL

Database programmers need to design, write, and modify programs to improve their databases. SQL is one of the top languages used to manipulate the data in a database and to query the database. It's also used to define and change the structure of the data - in other words, to implement the data model. Therefore it is essential that you learn SQL. In general, SQL has three parts:

Data Definition Language (DDL): used to create and manage the structure of the data
Data Manipulation Language (DML): used to manage the data itself
Data Control Language (DCL): controls access to the data

Because data is constantly being inserted into the database, changed, or retrieved, DML is used more often in day-to-day operations than DDL, so you should have a strong grasp of DML. If you plan to grow into a database architect role in the near future, then having a good grasp of DDL will go a long way.

Another reason to learn SQL is that almost every modern relational database supports it. Although different databases might support different features and implement their own dialect of SQL, the basics of the language remain the same. If you know SQL, you can quickly adapt to MySQL, for example.

At present, there are several categories of database model - predominantly relational, object-relational, and NoSQL databases. All of these are meant for different purposes. Relational databases often adhere to SQL. Object-relational databases (ORDs) are similar to relational databases. NoSQL, which stands for "not only SQL," is an alternative to traditional relational databases that is useful for working with large sets of distributed data. NoSQL databases provide benefits such as high availability, schema-free design, and horizontal scaling, but they also have limitations such as performance, data retrieval constraints, and learning time. For beginners, it is advisable to start by experimenting with relational databases and learning SQL, then gradually transition to NoSQL databases.
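To make the DDL/DML split concrete, here is a minimal sketch using Python's built-in sqlite3 module - SQLite simply stands in for any relational database, and the table and column names are assumptions made for the example:

```python
# DDL vs. DML in a few lines, using Python's built-in sqlite3 module.
# SQLite stands in for any SQL database; table and column names are made up.
import sqlite3

conn = sqlite3.connect(":memory:")  # a throwaway in-memory database
cur = conn.cursor()

# DDL: define and manage the structure of the data
cur.execute("""
    CREATE TABLE customers (
        id    INTEGER PRIMARY KEY,
        name  TEXT NOT NULL,
        email TEXT UNIQUE
    )
""")

# DML: manage the data itself
cur.execute("INSERT INTO customers (name, email) VALUES (?, ?)",
            ("Alice Smith", "alice@example.com"))
conn.commit()

cur.execute("SELECT name, email FROM customers WHERE email = ?",
            ("alice@example.com",))
print(cur.fetchone())  # -> ('Alice Smith', 'alice@example.com')
```

DCL statements such as GRANT and REVOKE are not shown here - SQLite has no user accounts to grant against - but on server databases such as PostgreSQL or MySQL they complete the picture.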
#3 Know how to Extract, Transform, Load various data types and sources

A database programmer should have a good working knowledge of ETL (Extract, Transform, Load) programming. ETL developers extract data from different databases, transform it, and then load the data into the data warehouse system. A data warehouse provides a common data repository that is essential for business needs. A database programmer should know how to tune existing packages, tables, and queries for faster ETL processing, and should run unit tests before applying any change to an existing ETL process. Since ETL takes data from many different sources (SQL Server, CSV files, and flat files, for example), a database developer should know how to work with each of them.

#4 Design and test Database plans

Database programmers perform regular tests to identify ways to solve database usage concerns and malfunctions. As databases usually sit at the lowest level of the software architecture, testing is done in an extremely cautious fashion: changes to the database schema affect many other software components. A database developer should make sure that when changing the database structure, they do not break existing applications and that the new structures are used properly. You should be proficient in unit testing your database. Unit tests are typically used to check whether small units of code are functioning properly. For databases, unit testing can be difficult, so the easiest approach is often to write the tests as SQL scripts. You should also know about System Integration Testing (SIT), which is done on the complete system after the hardware and software modules of that system have been integrated. SIT validates the behavior of the system and ensures that the modules in the system are functioning suitably.

#5 Secure your Database

Data protection and security are essential for the continuity of business. Databases often store sensitive data, such as user information, email addresses, geographical addresses, and payment information. A robust security system to protect your database against any data breach is therefore necessary. While a database architect is responsible for designing and implementing secure design options, a database admin must ensure that the right security and privacy policies are in place and are being observed. However, this does not absolve database programmers from adopting secure coding practices. Database programmers need to ensure that data integrity is maintained over time and that data is secure from unauthorized changes or theft. They especially need to be careful about table permissions, i.e., who can read from and write to which tables. You should be aware of who is allowed to perform the four basic operations of INSERT, UPDATE, DELETE, and SELECT against which tables. Database programmers should also adopt authentication best practices depending on the infrastructure setup, the application's nature, the user's characteristics, and data sensitivity. If the database server is accessed from the outside world, it is beneficial to encrypt sessions using SSL certificates to avoid packet sniffing. You should also lock down database servers that trust all localhost connections, as anyone who gains access to localhost can then access the database server.
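One secure coding habit worth spelling out is the use of parameterized queries rather than string concatenation, which is the standard defence against SQL injection. Here is a minimal sketch, again using Python's sqlite3 module purely as a stand-in - the table and the hostile input are invented for illustration:

```python
# Parameterized queries vs. string concatenation - a basic secure coding habit.
# sqlite3 is only a stand-in; the same idea applies with any SQL driver.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("INSERT INTO users (name) VALUES (?)", ("Alice",))

user_supplied = "Alice' OR '1'='1"  # hostile input from the outside world

# Unsafe: concatenating input into the SQL string lets it rewrite the query
# cur.execute("SELECT * FROM users WHERE name = '" + user_supplied + "'")

# Safe: the driver treats the bound parameter purely as data, never as SQL
cur.execute("SELECT * FROM users WHERE name = ?", (user_supplied,))
print(cur.fetchall())  # -> [] because no name matches the literal string
```

Combined with tight table permissions and encrypted connections, habits like this keep the programmer's side of the security picture in order.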
#6 Optimize your database performance

A database programmer should also know how to optimize database performance to achieve the best results. At the basic level, they should know how to rewrite SQL queries and maintain indexes. Other aspects of optimizing database performance include hardware configuration, network settings, and database configuration. Generally speaking, tuning database performance requires knowledge of the system's nature. Once the database server is configured, you should calculate the number of transactions per second (TPS) for the database server setup. Once the system is up and running, you should set up a monitoring system or log analysis that periodically finds slow queries, the most time-consuming queries, and so on.

#7 Develop your soft skills

Apart from the above technical skills, a database programmer needs to be comfortable communicating with developers, testers, and project managers while working on any software project. A keen eye for detail and critical thinking can often spot malfunctions and errors that may otherwise be overlooked. A database programmer should be able to quickly fix issues within the database and streamline the code. They also need to think quickly in order to prioritize tasks and meet deadlines effectively. Database programmers are often required to work on documentation and technical user guides, so strong writing and technical skills are a must.

Get started

If you want to get started with becoming a database programmer, Packt has a range of products. Here are some of the best:

PostgreSQL 11 Administration Cookbook
Learning PostgreSQL 11 - Third Edition
PostgreSQL 11 in 7 days [Video]
Using MySQL Databases With Python [Video]
Basic Relational Database Design [Video]

How to learn data science: from data mining to machine learning
How to ace a data science interview
5 barriers to learning and technology training for small software development teams

How to integrate a Medium editor in Angular 8

Guest Contributor
05 Sep 2019
5 min read
In the world of text editing, there is a new era of WYSIWYG (What You See Is What You Get). We all know how styling and formatting become the important elements of your website but most of the times it is tough to pick a simple, easy-to-use and powerful editor. Currently, the good days are coming back with the new Medium Editor! Medium Editor is an independent Javascript library to make a coasting content manager bar which springs up when you select a bit of content of your page which is enlivened by the magnificence of Medium.com. You can turn every field from a message on your contact form to a whole article on the back-end into a professionally styled text paragraph contains quote blocks, heading, hyperlinks or just a few selected words. You can also try to incorporate text editor into Angular 8 for the ease of updates and edits of your content. Angular 8 has released its latest feature - beta 6 with the attractive new functionalities for testing your software and fixing bugs. One of them is Bazel - Google's open-source part of its internal build tool called Blaze which is capable of performing incremental builds and tests. Let us check how you can integrate a Medium editor in the Angular 8 platform. Also Read: Angular CLI 8.3.0 releases with a new deploy command, faster production builds, and more Steps to create an editor using Angular 8 Step 1: First thing first, Create a project in Angular and you can also make use of bootstrap for making it look pretty good by adding CDN links in the index.html After entering the above-given command line, it will generate an angular starter application after it has completed installing all the dependencies. Step 2: Install an npm package by entering the below-given line. And then, include the CSS and js in angular.json file Step 3: Create a component with your chosen name and then create the one with the name create Step 4: Click to the newly created component.html and make a div by giving it a template reference of the name Try the above-given code snippet under a few bootstrap classes just to give a basic stylings Step 5: Select your component class and make a variable editor to view the child property as listed below: Step 6: Then, we will make use of one lifecycle hook of angular which is ngAfterViewInit. Paste the above-given scrap and you may get a mistake like media supervisor that isn't characterized all things considered, in this way, we have to declare it on the top like In the wake of rolling out the above improvements, you can, for the most part, make a little medium-supervisor to utilize it for yourself. You can pick over to compose anything and simply select the content you have written to see the enchantment. Step 7: After this, you may require some more alternatives in your editorial manager toolbar. For doing as such, you have to pass an arrangement object in the MediumEditor Constructor. By making the selective changes, you will be able to see a load of available options. Step 8: So, now you have got an editor, you can easily get the data from it. If someone writes a post then you need to have an HTML write of the same. Once more, you have to partition the screen into two parts wherein one half, there will be a supervisor and the other half will show the see of the post. [box type="shadow" align="" class="" width=""]In the second half of the screen, you need to assign the value of inner HTML as given above[/box] Wrap Up Every system is prone to pros and cons and so does the Angular too. 
Angular offers a clean code development along with the high-performance framework that manages to route, providing seamless updates using Command Line Interface and retrieving the state of location services. Also, you can debug the templates in Angular 8 and supports multiple applications in one domain. Contrary to this, Angular might be confusing for the newcomers as there is no accurate manual which includes the proper documentation of the framework. Also, it lacks the developer community and there is limited scope to debug Limited Routing. However, Angular 8 supports multiple applications in one domain and user-friendly for all the versions of the operating system. So, here we come to the end of the article. We hope you have gained information on how to integrate the latest medium editor to Angular 8. Do give it a try! Till then - keep learning! Author Bio Dave Jarvis is working as a Business Development Executive at eTatvaSoft.com, an enterprise-level mobile & web application development company. He aims to sharpen his analytical skills, deepening his data understanding and broaden his business knowledge in these years of his career. Click here to find more information about the company. Follow him on Twitter. Other interesting news in Web development Google Chrome 76 now supports native lazy-loading Laravel 6.0 releases with Laravel vapor compatibility, LazyCollection, improved authorization response and more #Reactgate forces React leaders to confront the community’s toxic culture head on

Go 1.13 releases with error wrapping, TLS 1.3 enabled by default, improved number literals, and more

Bhagyashree R
04 Sep 2019
5 min read
Yesterday, the Go team announced the release of Go 1.13. This version comes with uniform and modernized number literals, support for error wrapping, TLS 1.3 enabled by default, improved modules support, and much more. The go command also now uses a module mirror and checksum database by default.

Want to learn Go? Get started or get up to date with our new edition of Mastering Go. Learn more.

https://twitter.com/golang/status/1168966214108033029

Key updates in Go 1.13

TLS 1.3 enabled by default

Go 1.13 ships with Transport Layer Security (TLS) 1.3 support in the crypto/tls package enabled by default. If you wish to disable it, add tls13=0 to the GODEBUG environment variable. The team shared that this opt-out will be removed in Go 1.14.

Uniform and modernized number literal prefixes

Go 1.13 introduces optional prefixes for binary and octal integer literals, hexadecimal floating-point literals, and imaginary literals. The 0b or 0B prefix indicates a binary integer literal, for instance 0b1011. The 0o or 0O prefix indicates an octal integer literal such as 0o660; the leading-0 form used in previous versions is still valid. The 0x or 0X prefix can now be used to represent the mantissa of a floating-point number in hexadecimal format, such as 0x1.0p-1021. The imaginary suffix (i) can be used with any integer or floating-point literal. To improve readability, the digits of any number literal can now be separated using underscores, for instance 1_000_000 or 0b_1010_0110; an underscore can also appear between the literal prefix and the first digit.

Read also: The Go team shares new proposals planned to be implemented in Go 1.13 and 1.14
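To make these forms concrete, here is a small sketch (not taken from the release notes) that uses each of them; it should compile with Go 1.13 or later:

```go
// literals.go — a quick look at the Go 1.13 number literal syntax described above.
package main

import "fmt"

func main() {
	a := 0b1011       // binary integer literal
	b := 0o660        // octal integer literal (a plain leading 0 still works too)
	c := 0x1.0p-1021  // hexadecimal floating-point literal
	d := 1_000_000    // underscores may separate digits for readability
	e := 0b_1010_0110 // an underscore may also follow the base prefix
	z := 3i           // the imaginary suffix works with any integer or float literal

	fmt.Println(a, b, c, d, e, z)
}
```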
Go 1.13 features a module mirror to download modules

Starting with Go 1.13, the go command downloads and authenticates modules using a module mirror and checksum database by default. A module mirror is a module proxy that fetches modules from origin servers and then caches them for use in future requests. This enables the mirror to serve the source code even when the origin servers are down. The go command uses the module mirror maintained by the Go team, which is served at https://proxy.golang.org. If you are using an earlier version of the go command, you can use this service by setting GOPROXY=https://proxy.golang.org in your local environment.

Checksum database to authenticate modules

The go command maintains two Go module files called go.mod and go.sum. Introduced in version 1.11, Go modules are an alternative to GOPATH with support for versioning and package distribution. They are essentially a collection of Go packages stored in a directory tree with a go.mod file at its root. The go.sum file consists of SHA-256 hashes of the source code that the go command can use to detect misbehavior by an origin server or proxy. The drawback of this method is that it works entirely by "trust on first use": when a version of a dependency is added for the first time, the go command fetches the code and adds lines to the go.sum file on the fly. "The problem is that those go.sum lines aren’t being checked against anyone else’s: they might be different from the go.sum lines that the go command just generated for someone else, perhaps because a proxy intentionally served malicious code targeted to you," the team explains.

To solve this, the Go team has come up with a global source of go.sum lines which they call a checksum database. This ensures that the go command always adds the same lines to everyone’s go.sum file. So, whenever the go command receives new source code, it can verify its hash against the global database, which is served by sum.golang.org.

Read also: Golang 1.13 module mirror, index, and Checksum database are now production-ready

Support for error wrapping

As per the error value proposal, Go 1.13 comes with support for error wrapping. This provides a standard way to wrap errors in the standard library: an error can now wrap another error by defining an Unwrap method that returns the wrapped error. Explaining how it works, the team wrote, "An error e can wrap another error w by providing an Unwrap method that returns w. Both e and w are available to programs, allowing e to provide additional context to w or to reinterpret it while still allowing programs to make decisions based on w." To support this behavior, the fmt.Errorf function has a new %w verb for creating wrapped errors, and there are three new functions in the errors package, namely errors.Unwrap, errors.Is, and errors.As, to simplify unwrapping and inspecting wrapped errors. A short, self-contained sketch of this API appears at the end of this article.

These were some of the updates in Go 1.13. To read the entire list of features, check out the official release notes.

What’s new in programming languages

The Julia team shares its finalized release process with the community

Introducing Nushell: A Rust-based shell

TypeScript 3.6 releases with stricter generators, new functions in TypeScript playground, better Unicode support for identifiers, and more
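As promised above, here is a minimal, hedged sketch of the new error-wrapping API. ErrNotFound, QueryError, and findUser are names invented for illustration; only fmt.Errorf's %w verb and the errors.Is, errors.As, and errors.Unwrap functions come from Go 1.13 itself.

```go
// wrapping.go — a sketch of Go 1.13 error wrapping; ErrNotFound, QueryError, and
// findUser are illustrative names, not part of the standard library.
package main

import (
	"errors"
	"fmt"
)

var ErrNotFound = errors.New("user not found")

// QueryError is a custom error type used to demonstrate errors.As.
type QueryError struct{ Query string }

func (e *QueryError) Error() string { return "query " + e.Query + " failed" }

func findUser(id int) error {
	// fmt.Errorf's new %w verb wraps the underlying error while adding context.
	return fmt.Errorf("findUser(%d): %w", id, &QueryError{Query: "SELECT ..."})
}

func main() {
	err := fmt.Errorf("handling request: %w", ErrNotFound)

	fmt.Println(errors.Is(err, ErrNotFound))       // true: Is walks the chain of wrapped errors
	fmt.Println(errors.Unwrap(err) == ErrNotFound) // true: Unwrap removes one layer of wrapping

	// errors.As finds the first error in the chain that matches the target type.
	var qe *QueryError
	if errors.As(findUser(7), &qe) {
		fmt.Println("failing query:", qe.Query)
	}
}
```

Because errors.Is walks the whole chain of wrapped errors, callers no longer need to compare against the exact error value returned at the top level.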