
Tech Guides

852 Articles

Why ARC Welder is a good choice to run Android apps on desktop using the Chrome browser

Guest Contributor
21 Aug 2019
5 min read
Running Android apps on Chrome is a complicated task, especially when you are not using a Chromebook. However, Chrome now has a built-in tool, launched by Google in 2015, that lets users test Android applications in the browser: App Runtime for Chrome (ARC) Welder.

What is ARC Welder?

The ARC Welder tool allows Android applications to run on Google Chrome for Windows, OS X, and Linux systems. ARC Welder is aimed primarily at app developers who want to test-run their Android applications within Chrome OS and confront any runtime errors or bugs. The tool was initially launched as an experimental concept for developers, but was later made available for everyone to download.

Main functions

ARC Welder offers an easy and streamlined method for application testing. As a first step, the user adds the bundle to the existing application menu. Users are free to write to any file or folder that can be opened with ARC software assistance. A beginner developer or user can choose to leave the settings page alone, as the settings revert to defaults if skipped or left unsaved.

Here's how to run the ARC Welder tool for running an Android application:

1. Download or upgrade to the latest version of the Google Chrome browser.
2. Download and run the ARC Welder application from the Chrome Web Store.
3. Add a third-party APK file host.
4. After downloading the APK file to your laptop/PC, click Open.
5. Select either "Phone" or "Tablet" mode, whichever you wish to run the application in.
6. Lastly, click the "Launch App" button.

Points to remember for running ARC Welder on Chrome

ARC Welder only works with APK files, which means that in order to run your Android applications on your laptop, you need to download the APK files of the specific applications you wish to install on your desktop. You can find APK files in the following APK databases:

- APKMirror
- AndroidAPKsFree
- AndroidCrew
- APKPure

Points to remember before installing ARC Welder:

1. Only one application can be loaded at a time.
2. Depending on your application, you will need to select portrait or landscape mode manually.
3. Tablet and Phone mode specifications matter, as they produce different outcomes.
4. ARC Welder is based on Android 4.4, which means the applications you test must support Android 4.4 or above.

Note: Points 1 and 2 can be considered limitations of ARC Welder.

Pros:

- Cross-platform: works on Windows, Linux, Mac, and Chrome OS.
- Developed by Google, which means the software should evolve quickly, matching the upgrade pace of Android (also developed by Google).
- Allows application testing in the Google Chrome web browser.

Cons:

- Not all Google Play Services are supported by ARC Welder.
- ARC Welder only supports the ARM APK format (see the sketch below for a quick way to check an APK's ABI).
- Keyboard input is spotty.
- Takes 2-3 minutes to install, compared to other testing applications like BlueStacks (one-click install).
- No accelerometer simulation.
- Users must choose the orientation mode before getting into the detailed interface of ARC Welder.

ARC Welder has competitors, such as BlueStacks, which is often preferred by a majority of developers due to its one-click install. Although ARC Welder gives much better performance, it still ranks 7th (BlueStacks stands at 6th). Despite these shortcomings, ARC Welder continues to evolve and secure a faithful following, from beginners to expert developers.
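Since ARC Welder only accepts ARM builds, it can save time to inspect an APK's native-code ABIs before loading it. A minimal sketch using the aapt utility that ships with the Android SDK build-tools; the APK filename here is a hypothetical placeholder:

    # Inspect the APK's declared native ABIs; my-app.apk is a placeholder.
    aapt dump badging my-app.apk | grep native-code
    # Example output for an ARM-only build:
    #   native-code: 'armeabi-v7a'
    # An x86-only APK (native-code: 'x86') will not run under ARC Welder;
    # an APK with no native-code line carries no native libraries at all.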
In the next section, we'll have a look at a few alternatives to ARC Welder.

A few alternatives

Genymotion: An easy-to-use Android emulator for your computer. It works as a virtual machine and enables you to run mobile apps and games on your desktop or laptop efficiently.

Andy: An operating system that works as an Android emulator for your computer. It allows you to open mobile apps and play mobile games in a version of the Android operating system on your Mac or Windows desktop.

BlueStacks: A desktop Android emulator built to format mobile apps and make them compatible with desktop computers. It also helps open up mobile gaming apps on computers and laptops.

MEmu: A fast, free Android emulator that lets you play mobile games on a PC. It is known for its performance and user experience, and it supports most of the popular mobile apps and games across various system configurations.

Koplayer: A free Android emulator for PC that supports video recording, multiple accounts, and keyboard input. Built on x86 architecture, it is more stable and faster than BlueStacks.

Not to mention, it is very interesting to load Android apps in the Chrome browser on your computer or laptop, no matter which operating system you are using. Running Android apps in the Chrome browser can be very useful when the Google Play Store and Apple App Store are prone to exploitation. Although right now we can run only a few apps using ARC Welder, one at a time, the developers will surely add more functionality and take this to the next level. So, are you ready to use mobile apps and play mobile games on your PC using ARC Welder? If you have any questions, leave them in the comment box and we'll respond.

Author Bio

Hilary is a writer and content manager at Androidcrew.com. She loves to share the knowledge and insights she gained along the way with others.


7 crucial DevOps metrics that you need to track

Guest Contributor
20 Aug 2019
9 min read
DevOps has taken the IT world by storm and is increasingly becoming the de facto industry standard for software development. The DevOps principles have the potential to create a competitive differentiation, allowing teams to deliver high-quality software at a faster rate that adequately meets customer requirements. DevOps prevents the development and operations teams from functioning in two distinct silos and ensures seamless collaboration between all the stakeholders.

Collection of feedback and its subsequent incorporation plays a critical role in DevOps implementation and in the formulation of a CI/CD pipeline. A successful transition to DevOps is a journey, not a destination. Setting up benchmarks, measuring yourself against them, and tracking your progress is important for determining the stage of DevOps architecture you are in and ensuring a smooth journey onward. Feedback loops are a critical enabler for delivery of the application, and metrics help transform qualitative feedback into quantitative form. Collecting feedback from the stakeholders is only half the work; gathering insights and communicating them through the DevOps team to keep the CI/CD pipeline on track is equally important. This is where metrics come in. DevOps metrics are the tools your team needs to ensure that feedback is collected and communicated to the right people in order to improve upon the existing processes and functions in a unit.

Here are 7 DevOps metrics that your team needs to track for a successful DevOps transformation:

1. Deployment frequency

Quick iteration and continuous delivery are key measurements of DevOps success. This metric captures how long the software takes to deploy and how often deployment takes place. Keeping track of the frequency with which new code is deployed helps keep track of the development process. The ultimate goal is to be able to release smaller deployments of code as quickly as possible: smaller deployments are easier to test and release, and they improve the discoverability of bugs in the code, allowing for their faster and more timely resolution.

Determining the frequency of deployments needs to be done separately for development, testing, staging, and production environments. Keeping track of the frequency of deployment to QA or pre-production environments is also an important consideration. A high deployment frequency is a tell-tale sign that things are going smoothly in the production cycle, and it corresponds directly with higher efficiency. No wonder tech giants such as Amazon and Netflix deploy code thousands of times a day. Amazon has built a deployment engine called Apollo that has performed more than 50 million deployments in 12 months, which is more than one deployment per second. This results in reduced outages and decreased downtimes. A sketch of how this metric can be computed from a deployment log follows below.
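A minimal sketch of computing deployment frequency per environment from a deployment log; the log entries and field layout here are hypothetical, and a real pipeline would pull them from your CI/CD system:

    from collections import Counter

    # Hypothetical deployment log entries: (environment, date) pairs.
    deployments = [
        ("production", "2019-08-19"),
        ("production", "2019-08-19"),
        ("staging",    "2019-08-20"),
        ("production", "2019-08-21"),
    ]

    # Deployments per environment per day, tracked separately as suggested above.
    frequency = Counter(deployments)
    for (env, day), count in sorted(frequency.items()):
        print(f"{day} {env}: {count} deployment(s)")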
2. Failed deployments

Any deployment that causes issues or outages for your users is a failed deployment, and tracking the percentage of deployments that result in negative feedback from users is an important DevOps metric. DevOps teams are expected to build quality into the product right from the beginning of the project, and the responsibility for ensuring the quality of the software is disseminated through the entire team rather than centered on QA. While in an ideal scenario there should be no failed deployments, that's often not the case.

Tracking the percentage of deployments that result in negative sentiment helps you ascertain the realities on the ground and makes you better prepared for such occurrences in the future. Only if you know what is wrong can you formulate a plan to fix it. While a failure rate of 0 is the magic number, fewer than 5% failed deployments is considered workable. If the metric consistently shows spikes of failed deployments above 10%, the existing process needs to be broken down into smaller segments with mini-deployments: fixing 5 issues in 100 deployments is far easier than fixing 50 in 1,000 within the same time-frame.

3. Code committed

Code committed is a DevOps metric that tracks the number of commits the team makes to the software before it can be deployed into production. It serves as an indicator of development velocity as well as code quality. The number of code commits a team makes has to be within the optimum range defined by the DevOps team. Too many commits may be indicative of low quality or a lack of direction in development; similarly, if commits are too low, it may indicate that the team is overtaxed and unproductive. Uncovering the reason behind variations in code committed is important for maintaining productivity and project velocity while also keeping the team members satisfied.

4. Lead time

The software development cycle is a continuous process in which new code is constantly developed and deployed to production. Lead time for changes is the time taken to go from code committed to code successfully running in production. It is an important indicator for determining the efficiency of the existing process and identifying possible areas of improvement. The lead time and mean time to change (MTTC) give the DevOps team a better hold on the project: by measuring the amount of time passing between a change's inception and its actual production deployment, the team's ability to adapt as project requirements evolve can be computed.

5. Error rate

Errors in any software application are inevitable. A few occasional errors aren't a red flag, but keeping track of error rates and watching for unusual spikes is important for the health of your application. A significant rise in error rate is an indicator of inherent quality problems and ongoing performance-related issues.

The errors you encounter can be of two types: bugs, which are the exceptions in the code discovered after deployment, and production issues, which relate to database connections and query timeouts. The error rate is calculated as a function of the transactions that result in an error during a particular time window. For a specified time duration, if 20 out of 1,000 transactions have errors, the error rate is 20/1000, or 2 percent. A few intermittent errors throughout the application life cycle are a normal occurrence, but any unusual spikes need to be watched for; the process should be analysed for bugs and production issues, and the exceptions that occur need to be handled concurrently. A short sketch of this arithmetic follows below.
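A minimal sketch of the error-rate arithmetic above, together with simple averages for the two time-based metrics discussed next (mean time to detection and mean time to recovery); all counts and timestamps are hypothetical:

    from datetime import datetime, timedelta

    # Error rate for one time window (the 20-in-1,000 example from above).
    total_transactions = 1000
    failed_transactions = 20
    error_rate = 100 * failed_transactions / total_transactions
    print(f"Error rate: {error_rate:.1f}%")  # -> 2.0%

    # Hypothetical incident records: (started, detected, recovered).
    incidents = [
        (datetime(2019, 8, 1, 9, 0),   datetime(2019, 8, 1, 9, 12),  datetime(2019, 8, 1, 10, 3)),
        (datetime(2019, 8, 5, 14, 30), datetime(2019, 8, 5, 14, 35), datetime(2019, 8, 5, 15, 0)),
    ]
    # MTTD: average time from an issue beginning to its detection.
    mttd = sum(((det - start) for start, det, _ in incidents), timedelta()) / len(incidents)
    # MTTR: average time from detection until normal operation is restored.
    mttr = sum(((rec - det) for _, det, rec in incidents), timedelta()) / len(incidents)
    print(f"MTTD: {mttd}, MTTR: {mttr}")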
6. Mean time to detection

Issues happen in every project, but how fast you discover them is what matters. Robust application monitoring and optimal coverage help you find out about any issues as quickly as possible. The mean time to detection (MTTD) metric is the amount of time that passes between the beginning of an issue and the time when the issue gets detected and some remedial action is taken; the time to fix the issue is not covered under MTTD. Ideally, DevOps teams should strive to keep the MTTD as low as possible, ideally close to zero; that is, they should be able to detect any issue as soon as it occurs. A proper protocol and communication channels need to be in place to help the team discover errors quickly and respond rapidly with a correction.

7. Mean time to recovery

Time to restore service, or mean time to recovery (MTTR), is a critical metric for any project. It is the average time taken by the team to repair a failure in the system, comprising the time from failure detection until the project starts operating normally again. Recovery and resilience are key components that determine the market readiness of a project. MTTR is an important DevOps metric because it allows for tracking of complex issues and failures while judging the team's capability to handle change and bounce back. The ideal recovery time should be as low as possible, minimizing overall system downtime.

System downtimes and outages, though undesirable, are unavoidable. This is especially true in the current development scenario where companies are moving to the cloud, so designing for failure is a concept that needs to be ingrained right from the start. Even major applications like Facebook and WhatsApp, Twitter, Cloudflare, and Slack are not free of outages. What matters is that the downtime is kept minimal, and mean time to recovery tells you how long the DevOps teams would need to bring the system back on track.

Closing words

DevOps isn't just about tracking metrics; it is primarily about culture. Organizations that make the transition to DevOps place immense emphasis on one goal: rapid delivery of stable, high-quality software through automation and continuous delivery. Simply having a bunch of numbers in the form of DevOps metrics isn't going to get you across the line. You need a long-term vision combined with the valuable insights that the metrics provide. It is only by monitoring these over a period of time and tracking your team's progress against the goals you have set that you can hope to reap the true benefits DevOps offers.

Author Bio

Vinati Kamani writes about emerging technologies and their applications across various industries for Arkenea, a custom software development and DevOps consulting company. When she's not at her desk penning articles or reading up on recent trends, she can be found traveling to remote places and soaking up different cultural experiences.

DevOps engineering and full-stack development: 2 sides of the same agile coin
Introducing kdevops, a modern DevOps framework for Linux kernel development
Why do IT teams need to transition from DevOps to DevSecOps?


How to migrate from Magento 1 to Magento 2: A comprehensive guide

Guest Contributor
15 Aug 2019
8 min read
Migrating from Magento 1 to Magento 2 has been one of the most commonly discussed topics in the world of eCommerce. Magento 2 was made available in 2015, and Magento subsequently declared that it will end official support for Magento 1 in 2020. This makes the migration to Magento 2 not only desirable but also necessary.

Why you should migrate to Magento 2

As mentioned above, support for Magento 1 ends in 2020. Here's a list of the six most important reasons why migration from Magento 1.x to Magento 2 is important for your Magento store.

Security: Once official support for Magento 1 ends, security patches for the various versions of Magento 1.x will no longer be offered. That means that if you continue running your website on Magento 1.x, you'll be exposed to a variety of risks and threats, many of which may have no official solution.

Competition: When your store is practically the only store that hasn't migrated to Magento 2, you are at a severe competitive disadvantage. While your competitors enjoy all the innovations that will continue happening on Magento 2, your Magento 1 website will be left out.

Mobile friendliness: From regular shopping to special holiday purchases, an increasingly large proportion of e-commerce business comes from mobile devices. Magento 2 is better optimized for mobile phones than Magento 1.

Performance: In the e-commerce industry, better performance leads to better business, increased revenue, and higher conversions. Magento 2 enables up to 66% faster add-to-cart server response times than Magento 1, making it your best bet for growth.

Checkout: The number of checkout steps has been slashed in Magento 2, marking a significant improvement in the buying process. Magento 2 also offers the Instant Purchase feature, which lets repeat customers purchase faster.

Interface: Magento 1 had an interface that wasn't always friendly. Magento 2 has delved deeper to find the exact pain points and made the new interface extremely user-friendly. Adding new products, editing product features, or simply looking for tools has become easier with Magento 2.

FAQs for Magento migration

By when should I migrate my store? All forms of official support for Magento 1 will be discontinued in June 2020, so you should migrate your store before that. Your Magento e-commerce store should be ready well before the deadline, so it's highly recommended that you start working towards the migration right away.

How long will the migration take? It's difficult to answer that question without further information about your store. The size of your store, its database, and the kind of customization you need are some of the factors that influence the time horizon.

Should I hire a Magento developer for the migration or should I let my in-house team deal with it? As with the earlier question, this one too needs further information. If you're having your own team do it, allow them a good deal of time to learn a number of things, and factor in a few false starts as well. Doing the migration all by yourself means you'll have to divert a lot of in-house resources to it, which can negatively impact your ongoing business and put undue pressure on your revenue streams. Nearly all Magento stores have found that they get better outcomes by hiring an experienced Magento 2 developer instead.

Pre-migration checklist for moving from Magento 1 to Magento 2

Before you carry out the actual migration, you'll want to prepare your site for it.
Here's your pre-migration checklist for Magento 1 to Magento 2:

Filter your data. As you move to a better, more sophisticated technology, you don't want to carry outdated data or data that's in no way relevant to your business needs. There's no point loading the new system with stuff that will only hog resources without ever being useful, so begin by removing data that's not going to be useful.

Critique your site. This is perhaps the best time to have a close look at your site and seriously consider upgrading it. Advanced technology like Magento 2 will produce even better results if your site reflects the current trends in e-commerce store design. Magento 2 offers better opportunities, and you don't want to be left out just because your site isn't equipped to capitalize on them.

Build redundancy. Despite all your planning, there's always a small risk of some kind of data loss. To safeguard against it, make sure you replicate your Magento 1.x database. When you actually implement the migration, use this replicated database as your source, without disturbing the original.

Prepare to freeze admin activities. When you begin the dry run or the actual migration, continuing your administrative activities can alter your database. That would result in a patchy migration with some loose ends. To prevent this, go through a drill to prepare your business to stop all admin activities during the dry run and the actual implementation of the migration.

Finalize your blueprints. Unless absolutely critical, don't waver from your original plans. Sticking to what you had planned will produce the best results. Changes that haven't been factored in can slow down or weaken your migration and even make it more expensive.

Steps for migration from Magento 1 to Magento 2

Migration from Magento 1 to Magento 2 doesn't depend on a single activity; it consists of multiple interdependent activities:

1. Data Migration
2. Theme Migration
3. Customization Migration
4. Extension Migration

Let's look at each of them separately.

Data Migration

Step 1: Download Magento 2 without the sample data. Follow the setup steps and install the platform.
Step 2: You will need a Data Migration Tool to transfer your data. You can download it from the official website. Remember, the Data Migration Tool version should be the same as the Magento 2 codebase version.
Step 3: Provide the public and private keys for authorization. The keys, too, are available from the Magento site.
Step 4: Configure the Data Migration Tool. How you configure it depends on which Magento 2 edition (Community Edition or Enterprise Edition) you will be using. You may not migrate from Enterprise Edition to Community Edition.
Step 5: The next step is a mapping between the Magento 1 and Magento 2 databases.
Step 6: Enter maintenance mode to prepare for the actual migration. This will stop all administrative activities.
Step 7: In the final step, migrate the Magento site along with system configuration such as shipping and payments. A command-line sketch of these steps follows below.
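For orientation, here is a hedged command-line sketch of the Data Migration Tool steps above, assuming a Magento 2 Open Source install; the version number and config.xml path are placeholders that must match your own Magento 2 codebase and Magento 1 source version:

    # Install the Data Migration Tool; its version must match your Magento 2 version.
    composer config repositories.magento composer https://repo.magento.com
    composer require magento/data-migration-tool:2.3.2

    # Freeze admin activity before migrating (Step 6).
    bin/magento maintenance:enable

    # Migrate settings, then data; the config.xml path depends on your source version.
    bin/magento migrate:settings vendor/magento/data-migration-tool/etc/opensource-to-opensource/1.9.3.7/config.xml
    bin/magento migrate:data vendor/magento/data-migration-tool/etc/opensource-to-opensource/1.9.3.7/config.xml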
Theme Migration

Unlike data migration, theme migration in Magento has no standard tools that will take care of it for you, largely because the frontend templates and their code are hugely different between Magento 1.x and Magento 2.x. So, instead of looking for a tool, the best way out is to get a new theme: you can either buy a Magento 2 theme that suits your style and requirements and customize it, or develop one. This is one of the reasons why we suggested upgrading your entire Magento store.

Customization Migration

The name customization itself suggests that what works for one online store won't fit another, which is why there's no single way of migrating the customizations you might have made to your Magento 1 store. You'll be required to design all the customizations you need. There's an important point to remember, however: because of its efficiency and versatility, your store on Magento 2 may need less customization than you believe. So, before you hurry into re-designing everything, take time to study what exactly you need and to what degree Magento 2 satisfies those needs. As you migrate from Magento 1.x to Magento 2.x, the number of customizations will possibly turn out to be considerably fewer than you originally planned.

Extension Migration

The same rule applies to extensions and plugins: the plugins that worked for Magento 1 will likely not work for Magento 2, and you will have to build them again. Instead of seeing this as frustrating, you can take it as an opportunity to correct minor errors and improve the overall experience. A dedicated Magento developer who specializes in Magento migration services can be of great help here.

Final remarks on Magento migration

If all this sounds a little overwhelming, relax, you're not alone. Because Magento 2 is considerably superior to Magento 1, the migration may appear more challenging than what you had originally bargained for. In any case, the migration is compulsory; otherwise you'll face security threats and won't be able to handle the competition. From 2020, this migration will not be a choice, so you might as well begin early and give yourself more time to plan things out. If you need help, a competent Magento web development company can make the migration more efficient and easier for you.

Author Bio

Kaartik Iyer is the Founder & CEO at Infigic Technologies, a web and mobile app development company. Kaartik has contributed to sites like Huffington Post, Yourstory, and Tamebay, to name a few. He's passionate about fitness, entrepreneurship, startups, and all things digital. You can connect with him on LinkedIn for a quick chat on any of these topics.

Why should your e-commerce site opt for Headless Magento 2?
Adobe is going to acquire Magento for $1.68 Billion
5 things to consider when developing an eCommerce website


Understanding security features in the Google Cloud Platform (GCP)

Vincy Davis
27 Jul 2019
10 min read
Google's long experience and success in protecting itself against cyberattacks plays to our advantage as customers of the Google Cloud Platform (GCP). From years of warding off security threats, Google is well aware of the security implications of the cloud model. Thus, it provides a well-secured structure for its operational activities, data centers, customer data, organizational structure, hiring process, and user support. Google uses a global-scale infrastructure to provide security to commercial services, such as Gmail, Google Search, and Google Photos, and to enterprise services, such as GCP and G Suite.

This article is an excerpt from the book Google Cloud Platform for Architects, written by Vitthal Srinivasan, Janani Ravi, et al. In the book, you will learn about the Google Cloud Platform (GCP) and how to manage robust, highly available, and dynamic solutions to drive business objectives. This article gives an insight into the security features of the Google Cloud Platform, the tools that GCP provides for users' benefit, as well as some best practices and design choices for security.

Security features at Google and on the GCP

Let's start by discussing what we get directly by virtue of using the GCP. These are security protections that we would not be able to engineer for ourselves. Let's go through some of the many layers of security provided by the GCP.

Datacenter physical security: Only a small fraction of Google employees ever get to visit a GCP data center. Those data centers, the zones that we have been talking so much about, would probably seem straight out of a Bond film to those that did: security lasers, biometric detectors, alarms, cameras, and all of that cloak-and-dagger stuff.

Custom hardware and trusted booting: A specific form of security attack named a privileged-access attack is on the rise. These attacks involve malicious code running from the least likely spots you'd expect: the OS image, hypervisor, or boot loader. The only way to really protect against these is to design and build every single element in-house. Google has done that, including the hardware, a firmware stack, curated OS images, and a hardened hypervisor. Google data centers are populated with thousands of servers connected to a local network. Google selects and validates building components from vendors and designs custom secure server boards and networking devices for server machines. Google has cryptographic signatures on all low-level components, such as the BIOS, bootloader, kernel, and base OS, to validate that the correct software stack is booting up.

Data disposal: The detritus of the persistent disks and other storage devices that we use is also cleaned thoroughly by Google. This data destruction process involves several steps: an authorized individual wipes the disk clean using a logical wipe; then a different authorized individual inspects the wiped disk. The results of the erasure are stored and logged, and the erased drive is then released into inventory for reuse. If a disk is damaged and cannot be wiped clean, it is stored securely and not reused, and such devices are periodically destroyed. Each facility where data disposal takes place is audited once a week.

Data encryption: By default, GCP always encrypts all customer data at rest as well as in motion. This encryption is automatic and requires no action on the user's part. Persistent disks, for instance, are already encrypted using AES-256, and the keys themselves are encrypted with master keys.
All of this key management and rotation is managed by Google. In addition to this default encryption, a couple of other encryption options exist as well; more on those below.

Secure service deployment: Google's security documentation will often refer to secure service deployment, and it is important to understand that here the term service has a specific meaning in the context of security: a service is the application binary that a developer writes and runs on the infrastructure. This secure service deployment is based on three attributes:

Identity: Each service running on Google infrastructure has an associated service account identity. A service has to submit cryptographic credentials provided to it to prove its identity when making or receiving remote procedure calls (RPCs) to other services. Clients use these identities to make sure they are connecting to the intended server, and the server uses them to restrict access to data and methods to specific clients.

Integrity: Google uses a cryptographic authentication and authorization technique at the application layer to provide strong access control at the abstraction level for interservice communication. Google has ingress and egress filtering facilities at various points in its network to avoid IP spoofing. With this approach, Google is able to maximize its network's performance and availability.

Isolation: Google has an effective sandbox technique to isolate services running on the same machine. This includes Linux user separation, language- and kernel-based sandboxes, and hardware virtualization. Google also secures the operation of sensitive services, such as cluster orchestration in GKE, on exclusively dedicated machines.

Secure interservice communication: The term interservice communication refers to GCP's resources and services talking to each other. The owners of the services have individual whitelists of services which can access them; using these, the owner of a service can also allow certain IAM identities to connect with the services they manage. Apart from that, the Google engineers on the backend who are responsible for the smooth and downtime-free running of the services are also given special identities to access the services (to manage them, not to modify their user-input data). Google encrypts interservice communication by encapsulating application-layer protocols in RPC mechanisms to isolate the application layer and to remove any kind of dependency on network security.

Using Google Front End: Whenever we want to expose a service using GCP, the TLS certificate management, service registration, and DNS are managed by Google itself. This facility is called the Google Front End (GFE) service. For example, a simple file of Python code can be hosted as an application on App Engine, and that application will have its own IP, DNS name, and so on.

In-built DDoS protections: Distributed denial-of-service attacks are very well studied, and precautions against such attacks are already built into many GCP services, notably in networking and load balancing. Load balancers can actually be thought of as hardened bastion hosts that serve as lightning rods to attract attacks, and so are suitably hardened by Google to ensure that they can withstand those attacks. HTTP(S) and SSL proxy load balancers, in particular, can protect your backend instances from several threats, including SYN floods, port exhaustion, and IP fragment floods.
Insider risk and intrusion detection: Google constantly monitors the activities of all devices in its infrastructure for suspicious activity. To secure employees' accounts, Google has replaced phishable OTP second factors with U2F-compatible security keys. Google also monitors the customer devices that its employees use to operate the infrastructure, and conducts periodic checks on the status of the OS images and security patches on those devices. Google has a special mechanism for granting access privileges, named application-level access management control, which exposes internal applications only to specific users coming from correctly managed devices and from expected network and geographic locations. Google has a very strict and secure way of managing its administrative access privileges, with a rigorous process for monitoring employee activities and predefined limits on administrative access for employees.

Google-provided tools and options for security

As we've just seen, the platform already does a lot for us, but we could still end up leaving ourselves vulnerable to attack if we don't design our cloud infrastructure carefully. To begin with, let's understand a few facilities provided by the platform for our benefit.

Data encryption options: We have already discussed Google's default encryption; this encrypts pretty much everything and requires no user action. So, for instance, all persistent disks are encrypted with AES-256 keys that are automatically created, rotated, and themselves encrypted by Google. In addition to default encryption, there are a couple of other encryption options available to users.

Customer-managed encryption keys (CMEK) using Cloud KMS: This option involves the user taking control of the keys that are used, while still storing those keys securely on the GCP using the Key Management Service. The user is now responsible for managing the keys, that is, for creating, rotating, and destroying them. The only GCP service that currently supports CMEK is BigQuery; support for Cloud Storage is in beta.

Customer-supplied encryption keys (CSEK): Here, the user specifies which keys are to be used, but those keys never leave the user's premises. To be precise, the keys are sent to Google as part of API service calls, but Google only uses these keys in memory and never persists them in the cloud. CSEK is supported by two important GCP services: data in Cloud Storage buckets as well as persistent disks on GCE VMs. There is an important caveat here, though: if you lose your key after having encrypted some GCP data with it, you are entirely out of luck. There will be no way for Google to recover that data.

Cloud security scanner: Cloud security scanner is a GCP-provided security scanner for common vulnerabilities. It has long been available for App Engine applications, but is now also available in alpha for Compute Engine VMs. This handy utility will automatically scan for and detect the following four common vulnerabilities:

- Cross-site scripting (XSS)
- Flash injection
- Mixed content (HTTP in HTTPS)
- The use of outdated/insecure libraries

Like most security scanners, it automatically crawls an application, follows links, and tries out as many different types of user input and event handlers as possible.
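To make the CMEK option above concrete, and to anticipate the firewall advice in the best practices that follow, here is a hedged gcloud sketch; the key ring, key, rule name, and source CIDR range are hypothetical placeholders:

    # Create a Cloud KMS key ring and a key whose lifecycle you manage (CMEK).
    gcloud kms keyrings create demo-keyring --location=us-central1
    gcloud kms keys create demo-key --keyring=demo-keyring \
        --location=us-central1 --purpose=encryption

    # A VPC firewall rule that admits HTTPS only from a known source range.
    gcloud compute firewall-rules create allow-https-office \
        --direction=INGRESS --action=ALLOW --rules=tcp:443 \
        --source-ranges=203.0.113.0/24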
Some security best practices

Here is a list of design choices that you could exercise to cope with security threats such as DDoS attacks:

- Use hardened bastion hosts such as load balancers (particularly HTTP(S) and SSL proxy load balancers).
- Make good use of the firewall rules in your VPC network. Ensure that incoming traffic from unknown sources, or on unknown ports or protocols, is not allowed through.
- Use managed services such as Dataflow and Cloud Functions wherever possible; these are serverless and so present smaller attack surfaces.
- If your application lends itself to App Engine, it has several security benefits over GCE or GKE, and it can also autoscale quickly, damping the impact of a DDoS attack.
- If you are using GCE VMs, consider the use of API rate limits to ensure that the number of requests to a given VM does not increase in an uncontrolled fashion.
- Use NAT gateways and avoid public IPs wherever possible to ensure network isolation.
- Use Google CDN as a way to offload incoming requests for static content. In the event of a storm of incoming user requests, the CDN servers will be on the edge of the network, and traffic into the core infrastructure will be reduced.

Summary

In this article, you learned that the GCP benefits from Google's long experience countering cyber-threats and security attacks targeted at other Google services, such as Google Search, YouTube, and Gmail. There are several built-in security features that already protect users of the GCP from threats that might not even be recognized as existing in an on-premise world. In addition to these in-built protections, all GCP users have various tools at their disposal to scan for security threats and to protect their data. To learn more about the Google Cloud Platform (GCP), head over to the book Google Cloud Platform for Architects.

Ansible 2 for automating networking tasks on Google Cloud Platform [Tutorial]
Build Hadoop clusters using Google Cloud Platform [Tutorial]
Machine learning APIs for Google Cloud Platform


Winnti Malware: Chinese hacker group attacks major German corporations for years, German public media investigation reveals

Fatema Patrawala
26 Jul 2019
9 min read
German public broadcasters, the Bavarian Radio & Television Network (BR) and Norddeutscher Rundfunk (NDR), have published a joint investigation report on a hacker group that has been spying on certain businesses for years. Security researchers Hakan Tanriverdi, Svea Eckert, Jan Strozyk, Maximilian Zierer, and Rebecca Ciesielski contributed to the report, which sheds light on how this group of hackers operates and how widespread it is. The investigation started with one of the reporters receiving the code daa0 c7cb f4f0 fbcf d6d1, which eventually led the team to a hacking group with Chinese origins operating the Winnti malware.

BR and NDR reporters, in collaboration with several IT security experts, have analyzed the Winnti malware. Moritz Contag of Ruhr University Bochum extracted information from different varieties of the malware and wrote a script for this analysis; Silas Cutler, an IT security expert with US-based Chronicle Security, confirmed it. The report analyzes cases from the following targeted companies:

Gaming: Gameforge, Valve
Software: TeamViewer
Technology: Siemens, Sumitomo, Thyssenkrupp
Pharma: Bayer, Roche
Chemical: BASF, Covestro, Shin-Etsu

Hakan Tanriverdi, one of the reporters, wrote on Twitter: "We looked at more than 250 samples, wrote Yara rules, conducted nmap scans." Yara rules are patterns used primarily in malware research and detection (an illustrative example follows below); nmap is a free and open source network scanner used to discover hosts and services on a computer network. Additionally, the team has presented ways to find out if one is infected by the Winnti malware; to learn about these methods in detail, check out the research report.

Winnti malware is complex, created by "digital mercenaries" of Chinese origin

Winnti is a highly complex structure that is difficult to penetrate. The term denotes both a sophisticated malware and an actual group of hackers; IT security experts like to call them digital mercenaries. According to Kaspersky Lab research from 2011, the Winnti group has been active for several years and, in its early days, specialized in cyber-attacks against the online video game industry. According to this investigation, however, the hacker group has now homed in on Germany and its blue-chip DAX corporations. BR and NDR reporters analyzed hundreds of malware versions used for unsavory purposes and found that the hacker group has targeted at least six DAX corporations and stock-listed top companies of German industry.

In October 2016, several DAX corporations, including BASF and Bayer, founded the German Cyber Security Organization (DCSO). The job of DCSO's IT security experts is to observe and recognize hacker groups like Winnti and to get to the bottom of their motives. In Winnti's case, DCSO speaks of a "mercenary force" which is said to be closely linked with the Chinese government. The reporters also interviewed company staff, IT security experts, government officials, and representatives of security authorities. An IT security expert who has been analyzing the attacks for years said, "Any DAX corporation that hasn't been attacked by Winnti must have done something wrong." A high-ranking German official told the reporters, "The numbers of cases are mind-boggling," and claims that the group continues to be highly active to this very day.
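For readers unfamiliar with the Yara rules mentioned above, here is a minimal illustrative skeleton of the kind researchers write to match malware samples. The rule name, string markers, and hex pattern (which reuses the byte sequence quoted at the start of this article purely for illustration) are not vetted Winnti signatures:

    rule Winnti_Illustrative_Skeleton
    {
        meta:
            description = "Illustrative skeleton only; not a vetted Winnti signature"
        strings:
            // Hypothetical ASCII marker a sample might embed.
            $campaign_id = "daa0 c7cb f4f0 fbcf d6d1" ascii
            // Hypothetical byte pattern.
            $bytes = { DA A0 C7 CB F4 F0 }
        condition:
            // Match Windows PE files ('MZ' header) containing either marker.
            uint16(0) == 0x5A4D and any of them
    }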
Winnti hackers are audacious and "don't care if they're found out"

The report points out that the hackers choose convenience over anonymity. Working with Moritz Contag, the reporters found that the hackers wrote the names of the companies they wanted to spy on directly into their malware. Contag has analyzed more than 250 variations of the Winnti malware and found them to contain the names of global corporations. According to the reporters, hackers usually take precautions, which experts refer to as opsec, but the Winnti group's opsec was dismal, to say the least. Somebody who has been keeping an eye on Chinese hackers on behalf of a European intelligence service believes that they didn't really care: "These hackers don't care if they're found out or not. They care only about achieving their goals."

The reporters note that every hacking operation leaves digital traces, and that if you watch hackers carefully, each and every step can be logged. To decipher the traces of the Winnti hackers, they took a closer look at the program code of the malware itself, using "VirusTotal", a malware research engine owned by Google.

The hacker group initially attacked the gaming industry for financial gain

In the early days, the Winnti hackers were mainly interested in making money. Their initial target was Gameforge, a gaming company based in the German town of Karlsruhe. In 2011, an email message found its way into Gameforge's mailbox; a staff member opened the attached file and, unaware of what it was, started the Winnti program. Shortly afterwards, the administrators became aware that someone was accessing Gameforge's databases and raising account balances. Gameforge decided to implement Kaspersky antivirus software and arranged for Kaspersky's IT security experts to visit the office. The security experts found suspicious files, analyzed them, and noticed that the system had been infiltrated by hackers acting like Gameforge's administrators. It turned out that the hackers had taken over a total of 40 servers.

"They are a very, very persistent group," says Costin Raiu, who has been watching Winnti since 2011 and was in charge of Kaspersky's malware analysis team. "Once the Winnti hackers are inside a network, they take their sweet time to really get a feel for the infrastructure," he says. The hackers will map a company's network and look for strategically favorable locations for placing their malware. They keep tabs on which programs are used in a company and then exchange a file in one of these programs: the modified file looks like the original but has been secretly supplemented with a few extra lines of code, after which the manipulated file does the attackers' bidding.

Raiu and his team have been following the digital tracks left behind by some of the Winnti hackers. "Nine years ago, things were much more clear-cut. There was a single team, which developed and used Winnti. It now looks like there is at least a second group that also uses Winnti." This view is shared by many IT security companies, and it is this second group which is getting the German security authorities worried. One government official says, "Winnti is very specific to Germany. It is the attacker group that's being encountered most frequently."

Second group of Winnti hackers focused on industrial espionage

The report says that by 2014, the Winnti malware code was no longer limited to game manufacturers. The second group's job was mainly industrial espionage. The hackers targeted high-tech companies as well as chemical and pharmaceutical companies, attacking businesses in Japan, France, the US, and Germany.
The report sheds light on how the Winnti hackers broke into Henkel's network in 2014. The reporters present three files containing the web address belonging to Henkel and the name of the hacked server. One, for example, starts with the letter sequence DEDUSSV: server names can be arbitrary, but it is highly probable that DE stands for Germany and DUS for Düsseldorf, where the Henkel headquarters are located. The hackers were able to monitor all activities running on the web server and reached systems which didn't have direct internet access.

The company confirmed the Winnti incident and issued the following statement: "The cyberattack was discovered in the summer of 2014 and Henkel promptly took all necessary precautions." Henkel claims that a "very small portion" of its worldwide IT systems had been affected, namely the systems in Germany. According to Henkel, there was no evidence suggesting that any sensitive data had been diverted.

Besides Henkel, Winnti also targeted companies like Covestro, a manufacturer of adhesives, lacquers, and paints; Shin-Etsu Chemical, Japan's biggest chemical company; and Roche, one of the largest pharmaceutical companies in the world. The Winnti hackers also penetrated the BASF and Siemens networks. A BASF spokeswoman says that in July 2015, hackers had successfully overcome "the first levels" of defense. "When our experts discovered that the attacker was attempting to get around the next level of defense, the attacker was removed promptly and in a coordinated manner from BASF's network." She added that no business-relevant information had been lost at any time. According to Siemens, they were penetrated by the hackers in June 2016. "We quickly discovered and thwarted the attack," a Siemens spokesperson said.

Winnti hackers also involved in political espionage

According to the report, there are several indicators that the hacker group is also interested in penetrating political targets. The Hong Kong government was spied on by the Winnti hackers; the reporters found four infected systems with the help of an nmap network scan and proceeded to inform the government by email. The reporters also found that a telecommunications provider from India had been infiltrated; the company happens to be located in the region where the Tibetan government-in-exile has its headquarters, and, incidentally, the relevant identifier in the malware is called "CTA". A file which ended up on VirusTotal in 2018 contains a straightforward keyword: "tibet".

The report also throws light on attacks which were not directly related to political espionage but had connections among them. For example, the team found that Marriott hotels in the USA were attacked by the hackers, and the networks of the Indonesian airline Lion Air were also penetrated. The attackers wanted to get at data about where people travel and where they are located at any given time; the team backs this up by showing the relevant coded files in the report. To read the full research report, check out the official German broadcaster's website.

Hackers steal bitcoins worth $41M from Binance exchange in a single go!
VLC media player affected by a major vulnerability in a 3rd library, libebml; updating to the latest version may help
An IoT worm Silex, developed by a 14 year old resulted in malware attack and taking down 2000 devices


A cybersecurity primer for mid-sized businesses

Guest Contributor
26 Jul 2019
7 min read
Deciding which information security measures to apply across a company's IT infrastructure, and which ones to leave out, can be tough for mid-sized companies. The financial resources of a mid-sized company do not allow for applying every existing cybersecurity measure to protect the network; at the same time, mid-sized businesses are big enough to be targeted by cybercriminals. In this article, our information security consultants describe the cybersecurity measures a mid-sized business can't do without if it wants to ensure an appropriate level of network protection, and show how to implement them and arrange their management.

Basic information security measures

Among the range of existing cybersecurity measures, the following are essential for all mid-sized businesses, irrespective of the type of business (a configuration sketch follows this list):

A firewall is responsible for scanning incoming and outgoing network traffic. If set up properly, the firewall prevents malicious traffic from reaching your network and possibly damaging it.

Antivirus software checks each file your company's employees download from external resources like the internet or USB flash drives for virus signatures. A regularly updated antivirus will raise an alarm each time ransomware, a virus, a Trojan horse, or another type of malware tries to reach your company's network.

Network segmentation implies the division of the entire company's network into separate fragments, so that the networks of your company's departments are separated from each other. If hackers reach a computer in one segment, they won't be able to access the computers in the other segments. Cyberattacks thus can't move between the network segments and damage them, and you significantly reduce the risk of corporate data theft or leakage.

Email security techniques include filtering spam and applying password rotations. An email security solution is designed to make sure that only verified letters reach their addressees, and it aims at keeping corporate data secure from malware, spoofing attacks, and other cyberthreats in communication happening both inside and outside the company's network.

An intrusion detection system (IDS) and intrusion prevention system (IPS) are responsible for analyzing all incoming and outgoing network traffic. Using pattern matching or anomaly detection, the IDS identifies possible cybersecurity threats, while the IPS blocks the identified attacks, preventing them from turning into major threats and spreading across the entire network.
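As a concrete illustration of the firewall and segmentation measures above, here is a minimal iptables sketch for a Linux gateway; the subnet addresses and the HTTPS-only policy are hypothetical examples, not a recommended baseline:

    # Default-deny inbound and forwarding policy.
    iptables -P INPUT DROP
    iptables -P FORWARD DROP

    # Allow replies to connections the host itself initiated.
    iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

    # Admit HTTPS only from a trusted internal subnet (address is a placeholder).
    iptables -A INPUT -p tcp -s 10.0.1.0/24 --dport 443 -j ACCEPT

    # Crude segmentation: block the sales segment from reaching the finance segment.
    iptables -A FORWARD -s 10.0.2.0/24 -d 10.0.3.0/24 -j DROP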
Advanced information security measures

To strengthen the protection of a mid-sized company operating in a regulated industry (such as banking or healthcare) that needs to comply with security regulations and standards like PCI DSS, HIPAA, SOX, or GDPR, the following information security measures can't be omitted:

Endpoint security defends each entry point, such as the desktops and mobile devices connecting to the company's network, from attacks before harmful activities spread over the network. When installed both on the corporate network management server and on end users' devices, endpoint security software gives your company's system administrators transparency over actions that could potentially damage the network.

Data loss prevention (DLP) helps avoid the leakage of confidential data, such as clients' bank account details. DLP systems scan the data passing through a network to ensure that no sensitive information is leaked into the hands of cybercriminals. DLP is designed to prevent cases where your employees deliberately or unintentionally send an email with proprietary corporate data outside the corporate network.

Security information and event management (SIEM) software gathers and aggregates the logs from the servers, domain controllers, and all other sources located in your network, analyzes them, and provides you with a report highlighting suspicious activities. You can use these reports to know whether your systems need special attention and curative measures.

Implementing and managing information security measures

There are three options for implementing and managing information security measures. The choice will depend on the nature of the industry you operate in (regulated or non-regulated) and the available financial and human resources.

Arranging your own information security department. This method provides you with transparency over the security activities happening within your network. However, it implies large expenses for organizing the work of a skilled security team, as well as for buying the necessary cybersecurity software. This option is therefore most suitable for a mid-sized company that is rapidly expanding.

Turning to a managed security service provider (MSSP). Working with an MSSP may be more time- and cost-effective than arranging your own information security department: you entrust your company's information security protection to a third party and stay within your financial capabilities. However, this option is not suitable for companies in regulated industries, since they may find it risky to give a third-party security services provider control over all aspects of their corporate network security.

Joining the efforts of your security department and an MSSP. This option is an apt choice for those mid-sized companies that have to comply with security regulations and standards. While a reliable MSSP provides you with a security monitoring service and reports on suspicious activities or system errors happening across the network, your information security department can focus on eliminating the detected issues that could damage corporate confidential data and customers' personal information.

Ensuring the robustness of information security measures

Regardless of the set of measures applied to protect your IT infrastructure and the chosen management option, your information security strategy should provide for ongoing assessment of their efficiency. Vulnerability assessment, usually followed by penetration testing, should be conducted quarterly or annually (depending on whether the company needs to comply with security regulations and standards). When combined, they not only help you stay constantly aware of any security gap in your company's network but also assist in reacting promptly to the information security issues detected.

As a supplementary practice necessary for mid-sized businesses in regulated industries, threat monitoring must be ensured to check the network for indicators of protection breaches, such as data exfiltration attempts. You'll also need a structured incident response (IR) plan to identify the root causes of incidents that have already happened and remediate them rapidly, so that you don't have to cope with system outages or data losses in the future.
Finally, train your staff regularly to increase their cybersecurity consciousness, and determine the appropriate behavior for your employees, such as the obligatory use of complex passwords and an awareness of how to dodge spamming or phishing attacks.

In a nutshell

Mid-sized companies can ensure effective cyber protection within their limited budget by employing such cybersecurity measures as antiviruses, firewalls, and email security. If they need to stay compliant with security standards and regulations, they should also implement network segmentation, install IDS/IPS, SIEM, and DLP, and ensure endpoint security. Either the company's information security department, an MSSP, or a combination of the two can organize these measures in the network. Last but not least, the CIOs or CISOs of mid-sized companies must ensure that the security of their networks is monitored and regularly assessed to identify suspicious activities and cybersecurity breaches, and to close security gaps.

Author Bio

Uladzislau Murashka is a Certified Ethical Hacker at ScienceSoft with 5+ years of experience in penetration testing. Uladzislau's spheres of competence include reverse engineering; black box, white box, and gray box penetration testing of web and mobile applications; bug hunting; and research work in the area of information security.

An attack on SKS Keyserver Network, a write-only program, poisons two high-profile OpenPGP certificates
How Verizon and a BGP Optimizer caused a major internet outage affecting Amazon, Facebook, CloudFlare among others
Amazon launches VPC Traffic Mirroring for capturing and inspecting network traffic

What are APIs? Why should businesses invest in API development?

Packt Editorial Staff
25 Jul 2019
9 min read
Application Programming Interfaces (APIs) are like doors that provide access to information and functionality in other systems and applications. APIs share many characteristics with doors; for example, they can be as secure and as closely monitored as required. APIs can add value to a business by allowing it to monetize information assets, comply with new regulations, and enable innovation simply by providing access to business capabilities previously locked in old systems.

This article is an excerpt from the book Enterprise API Management written by Luis Weir. The book explores the architectural decisions, implementation patterns, and management practices behind successful enterprise APIs. In this article, we'll define the concept of APIs and see what value APIs can add to a business.

APIs, however, are not new. In fact, the concept goes way back and has been present since the early days of distributed computing. The term as we know it today, though, refers to a much more modern type of API, known as REST or web APIs.

The concept of APIs

Modern APIs started to gain real popularity when, in the same year of their inception, eBay launched its first public API as part of its eBay Developers Program. eBay's view was that making most of its website functionality and information accessible via a public API would not only attract but also encourage communities of developers worldwide to innovate by creating solutions using the API. From a business perspective, this meant that eBay became a platform for developers to innovate on, and in turn eBay would benefit from reaching new users it perhaps couldn't have reached before.

eBay was not wrong. In the years that followed, thousands of organizations worldwide, including well-known brands such as Salesforce.com, Google, Twitter, Facebook, Amazon, and Netflix, adopted similar strategies. In fact, according to programmableweb.com (a well-known public API catalogue), the number of publicly available APIs has been growing exponentially, reaching over 20,000 as of August 2018.

Figure 1: Public APIs as listed in programmableweb.com in August 2018

It may not sound like much, but considering that each listed API represents a door to an organization's digital offerings, we're talking about thousands of organizations worldwide that have already opened their doors to new digital ecosystems, where APIs have become the product these organizations sell and developers have become their buyers.

Figure: Digital ecosystems enabled by APIs

In such digital ecosystems, communities of internal, partner, or external developers can rapidly innovate by simply consuming these APIs to do all sorts of things: from offering hotel and flight booking services using the Expedia API, to building educational solutions that make sense of the space data available through the NASA API. There are ecosystems where business partners can easily engage in business-to-business transactions, either to resell goods or to purchase them, electronically and without having to spend on Electronic Data Interchange (EDI) infrastructure, and ecosystems where an organization's internal digital teams can easily innovate because key enterprise information assets are already accessible.

So, why should businesses care about all this? There is, in fact, not one answer but multiple, as described in the subsequent sections.
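To make the door metaphor concrete, here is a minimal sketch of consuming one of the public APIs mentioned above, NASA's Astronomy Picture of the Day endpoint. DEMO_KEY is NASA's documented, heavily rate-limited trial key; a real integration would register for its own key at api.nasa.gov.

```python
"""Minimal REST API consumption sketch using NASA's public APOD endpoint.

DEMO_KEY is NASA's documented trial key and is heavily rate-limited;
register at api.nasa.gov for a production key.
"""
import requests

resp = requests.get(
    "https://api.nasa.gov/planetary/apod",
    params={"api_key": "DEMO_KEY"},
    timeout=10,
)
resp.raise_for_status()
data = resp.json()                       # the "door" returns structured JSON
print(data["title"], "-", data["date"])
```

A single HTTPS call is all a developer needs; which data comes back, how often, and to whom is controlled entirely by the API's owner, which is exactly what makes the door such an apt metaphor.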
APIs as enablers for innovation and bimodal IT

What is innovation? According to a common definition, innovation is the process of translating an idea or invention into a good or service that creates value or for which customers will pay. In the context of businesses, according to an article by HBR, innovation manifests itself in two ways:

- Disruptive innovation: the process whereby a smaller company with fewer resources is able to successfully challenge established incumbent businesses.
- Sustaining innovation: when established businesses (incumbents) improve their goods and services in the eyes of existing customers. These improvements can be incremental advances or major breakthroughs, but they all enable firms to sell more products to their most profitable customers.

Why is this relevant? It is well known that established businesses struggle with disruptive innovation; the Netflix vs Blockbuster example reminds us of this fact. By the time disruptors catch up with an incumbent's portfolio of goods and services, they are able to do so with lower prices, better business models, lower operating costs, and far more agility and speed in introducing new or enhanced features. At this point, sustaining innovation is not good enough to respond to the challenge.

With all the recent advances in technology and the internet, the rate at which disruptive innovation challenges incumbents has only grown. Therefore, for established businesses to endure the challenge put upon them, they must somehow also become disruptors. The same HBR article describes how to achieve this from a business standpoint. From a technology standpoint, however, unless the systems that underpin a business are enabled to deliver such disruption, whatever is done from a business standpoint will likely fail.

Perhaps by mere coincidence, or by true acknowledgment of the above, Gartner introduced the concept of bimodal IT in December 2013, and the concept is now mainstream. Gartner defined bimodal IT as the following:

"The practice of managing two separate, coherent modes of IT delivery, one focused on stability and the other on agility. Mode 1 is traditional and sequential, emphasizing safety and accuracy. Mode 2 is exploratory and nonlinear, emphasizing agility and speed."

Figure: Gartner's bimodal IT

According to Gartner, Mode 1 (or slow) IT organizations focus on delivering core IT services on top of more traditional, hard-to-change systems of record, which are changed and improved in longer cycles and usually managed with long-term waterfall project mechanisms. For Mode 2 (or fast) IT organizations, the main focus is agility and speed, so they act more like a startup (or digital disruptor, in HBR terms) inside the same enterprise.

What is often misunderstood, however, is how fast IT organizations can innovate disruptively when most of the information assets critical to bringing context to any innovation reside in backend systems, and any access to them has to be delivered by the slower IT sibling. This dilemma means the speed of innovation is constrained by the speed at which access to core information assets can be delivered. As the saying goes, "Where there's a will there's a way." APIs can be implemented as the means for the fast IT to access core information assets and functionality without the intervention of the slow IT.
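As an illustration of that decoupling (not an example from the book itself), here is a minimal sketch of API-enabling a legacy lookup with Flask. The in-memory dictionary stands in for a hard-to-change system of record; once the endpoint exists, fast IT teams can consume it without ever touching the backend.

```python
"""Minimal sketch: expose a legacy lookup as a REST API with Flask.

LEGACY_INVENTORY is a stand-in for a slow-to-change system of record; a real
implementation would call the backend or a replicated read store instead.
"""
from flask import Flask, abort, jsonify

app = Flask(__name__)

LEGACY_INVENTORY = {
    "sku-001": {"name": "Widget", "stock": 42},
    "sku-002": {"name": "Gadget", "stock": 7},
}

@app.route("/api/v1/products/<sku>")
def get_product(sku):
    """Return one product record as JSON, or 404 if the SKU is unknown."""
    product = LEGACY_INVENTORY.get(sku)
    if product is None:
        abort(404)
    return jsonify(product)

if __name__ == "__main__":
    app.run(port=5000)
```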
By using APIs to decouple the fast IT from the slow IT, innovation can occur more easily. However, as with everything, it is easier said than done. To achieve such organizational decoupling using APIs, organizations should first build an understanding of which information assets and business capabilities are to be exposed as APIs, so the fast IT can consume them as required. This understanding must also articulate the priorities of when different assets are required and by whom, so the creation of APIs can be properly planned and delivered. Luckily, for organizations that already have mature service-oriented architectures (SOA), some of this work will probably already be in place. Organizations without such luck should plan for this activity as a fundamental component of their digital strategy.

The remaining question, then, is which team is responsible for defining and implementing such APIs: the fast IT or the slow IT? Although the long answer is addressed throughout the chapters of the book, the short answer is neither and both. It requires a multidisciplinary team of people, with the right technology capabilities available to them, so they can incrementally API-enable the existing technology landscape based on business-driven priorities.

APIs to monetize information assets

Many experts in the industry concur that an organization's most important asset is its information. In fact, a recent study by the Massachusetts Institute of Technology (MIT) suggests that data is the single most important asset for organizations:

"Data is now a form of capital, on the same level as financial capital in terms of generating new digital products and services. This development has implications for every company's competitive strategy, as well as for the computing architecture that supports it."

If APIs act as doors to such assets, then APIs also give businesses an opportunity to monetize them. In fact, some organizations are already doing so. According to another article by HBR, 50% of the revenue that Salesforce.com generates comes from APIs, while eBay generates about 60% of its revenue through its API. This is perhaps not a huge surprise, given that both organizations were pioneers of the API economy.

Figure: The API economy in numbers

What's even more surprising is the case of Expedia. According to the same article, 90% of Expedia's revenue is generated via APIs. This basically means that Expedia's main business is to indirectly sell electronic travel services via its public API.

However, it's not all that easy. According to the MIT study mentioned above, most CEOs of Fortune 500 companies don't yet fully acknowledge the value of APIs. An intrinsic reason could be the lack of understanding of, and visibility into, how data is currently being (or not being) used. Assets that sit hidden in systems of record, accessed only via traditional integration platforms, will not, in most cases, give the business insight into how information is being used and the value it adds. APIs, on the other hand, are better suited to providing insight into how, by whom, when, and why information is being accessed, giving the business the ability to make better use of information, for example, to determine which assets have the greatest capital potential.

In this article we provided a short description of APIs and how they act as an enabler of digital strategies.
Define the right organisation model for business-driven APIs with Luis Weir's upcoming release Enterprise API Management.

To create effective API documentation, know how developers use it, says ACM
GraphQL API is now generally available
Next.js 9 releases with built in zero-config TypeScript support, automatic static optimization, API routes and more

Why do IT teams need to transition from DevOps to DevSecOps?

Guest Contributor
13 Jul 2019
8 min read
Does your team perform security testing during development? If not, why not? Cybercrime is on the rise, and formjacking, ransomware, and IoT attacks have increased alarmingly in the last year. This makes security a priority at every stage of development. In such an ominous environment, development teams around the globe should take a more proactive approach to threat detection. This can be done in a number of ways. There are some basic techniques development teams can use to protect their development environments. But ultimately, what is needed is the integration of threat identification and management into the development process itself. Integrated processes like this are referred to as DevSecOps, and in this guide, we'll take you through some of the advantages of transitioning to DevSecOps.

Protect Your Development Environment

First, though, let's look at some basic measures that can help protect your development environment. For both individuals and enterprises, online privacy is perhaps the most valuable currency of all. Proxy servers, Tor, and virtual private networks (VPNs) have slowly crept into the lexicon of internet users as cost-effective privacy tools to consider if you want to avoid drawing the attention of hackers.

But what about enterprises? Should they use the same tools? They would prefer to avoid hackers as well. The answer is more complicated. Encryption and authentication should be addressed early in the development process, especially given the common practice of using open source libraries for app coding. The advanced security protocols that power many popular consumer VPN services make them a good first step toward protecting code and any proprietary technology. Additional controls, like using two-factor authentication and limiting who has access, will further protect the development environment and procedures.

Beyond these basic measures, it is also worth looking in detail at your entire development process and integrating security management at every stage. This is sometimes referred to as integrating DevOps and DevSecOps.

DevOps vs. DevSecOps: What's the Difference?

DevOps and DevSecOps are not separate entities, but different facets of the development process. Traditionally, DevOps teams work to integrate software development and implementation in order to facilitate the rapid delivery of new business applications. Since this process omits security testing and solutions, many security flaws and vulnerabilities aren't addressed early enough in the development process.

With a newer approach, DevSecOps, this omission is addressed by automating security-related tasks and integrating controls and functions like composition analysis and configuration management into the development process. Previously, DevSec focused only on automating security code testing, but it is gradually transitioning to incorporate an operations-centric approach. This helps reconcile two environments that are opposite by nature: DevOps is forward-looking because it is geared toward rapid deployment, while development security looks backward to analyze and predict future issues. By prioritizing security analysis and automation, teams can still improve delivery speed without the need to retroactively find and deal with threats.
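To illustrate the kind of automated check that shifts security left, here is a toy sketch, not any particular product's workflow, of a script that flags exactly pinned dependencies appearing on a deny-list. The deny-list contents are hypothetical; a real pipeline would query an advisory database such as the OSV or NVD feeds and run on every commit.

```python
"""Toy software-composition check: flag pinned dependencies on a deny-list.

The KNOWN_VULNERABLE data is hypothetical; a real CI job would pull advisory
data from a vulnerability database instead of hard-coding it.
"""
import sys

KNOWN_VULNERABLE = {          # hypothetical advisories: package -> bad versions
    "requests": {"2.19.0"},
    "flask": {"0.12.2"},
}

def check(requirements_path: str) -> int:
    """Return the number of pinned requirements that match an advisory."""
    failures = 0
    with open(requirements_path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "==" not in line:
                continue                     # only exact pins are checked here
            name, version = line.split("==", 1)
            if version in KNOWN_VULNERABLE.get(name.lower(), set()):
                print(f"VULNERABLE: {name}=={version}")
                failures += 1
    return failures

if __name__ == "__main__":
    if len(sys.argv) != 2:
        sys.exit("usage: check_deps.py requirements.txt")
    sys.exit(1 if check(sys.argv[1]) else 0)
```

Wired into a build pipeline, a non-zero exit code fails the build, which is the closed-loop behavior the best practices below aim for.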
Best Practices: How DevSecOps Should Work

The goal of current DevSecOps best practice is a shift toward real-time threat detection rather than historical analysis. This enables more efficient application development that recognizes and deals with issues as they happen, rather than waiting until there's a problem. It can be achieved by developing a more effective strategy while adopting DevSecOps practices. When all areas of concern are addressed, the results are:

- Automatic code procurement: eliminates the problem of human error and of incorporating weak or flawed code. This benefits developers by allowing vulnerabilities and flaws to be discovered and corrected earlier in the process.
- Uninterrupted security deployment: achieved through automation tools that work in real time, creating closed-loop testing and reporting and real-time threat resolution.
- Leveraged security resources: automated DevSecOps typically covers threat assessment, event monitoring, and code security, freeing your IT or security team to focus on other areas, like threat remediation and elimination.

There are five areas that need to be addressed for DevSecOps to be effective:

Code analysis
By delivering code in smaller modules, teams are able to identify and address vulnerabilities faster.

Management changes
Adapting the protocol for changes in management or admins allows users to improve on changes faster, and enables security teams to analyze their impact in real time. This eliminates the problem of getting calls about system access after the application is deployed.

Compliance
Addressing compliance with the Payment Card Industry Data Security Standard (PCI DSS) and the new General Data Protection Regulation (GDPR) early helps prevent audits and heavy fines. It also ensures that all of your reporting is ready to go in the event of a compliance audit.

Automating threat and vulnerability detection
Threats evolve and proliferate fast, so security should be agile enough to deal with emerging threats each time code is updated or altered. Automating threat detection earlier in the development process improves response times considerably.

Training programs
Comprehensive security response begins with proper IT security training. Developers should craft a training protocol that ensures all personnel responsible for security are up to date and on the same page. Organizations should bring security and IT staff into the process sooner: advise current team members of current procedures and ensure that all new staff are thoroughly trained.

Finding the Right Tools for DevSecOps Success

Does a doctor operate with a chainsaw? Hopefully not. Likewise, all of the above points are nearly impossible to achieve without the right tools to get the job done with precision. What should your DevSec team keep in their toolbox?

Automation tools
Automation tools provide scripted remediation recommendations for detected security threats. One such tool is Automate DAST, which scans new or modified code against the Open Web Application Security Project's (OWASP) list of the most common flaws, such as SQL injection errors, which you might have missed during static analysis of your application code.
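As a toy illustration of what a dynamic scan automates (emphatically not how Automate DAST itself works), the sketch below sends a classic SQL metacharacter to a URL parameter and looks for database error strings in the response. The target URL is a placeholder; run probes like this only against systems you are authorized to test.

```python
"""Naive dynamic probe: inject a quote character and look for DB error text.

Toy example only. Run it solely against test systems you own or are
authorized to assess; real DAST tools cover far more payloads and checks.
"""
import requests

ERROR_SIGNATURES = ("sql syntax", "sqlstate", "odbc", "unclosed quotation")

def probe(url: str, param: str) -> bool:
    """Return True if an injected quote produces a DB error signature."""
    resp = requests.get(url, params={param: "'"}, timeout=10)
    body = resp.text.lower()
    return any(sig in body for sig in ERROR_SIGNATURES)

if __name__ == "__main__":
    target = "http://localhost:8000/search"   # placeholder test target
    if probe(target, "q"):
        print("Possible SQL injection: error signature found in response")
```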
Attack modeling tools
Attack modeling tools create models of possible attack matrices and map their implications. There are plenty of attack modeling tools available, but a good one for identifying cloud vulnerabilities is Infection Monkey, which simulates attacks against the parts of your infrastructure that run on major public cloud hosts like Google Cloud, AWS, and Azure, as well as most cloud storage providers like Dropbox and pCloud.

Visualization tools
Visualization tools are used for evolving, identifying, and sharing findings with the operations team. An example is PortVis, developed by a team led by professor Kwan-Liu Ma at the University of California, Davis. PortVis displays activity by host or port in three different modes: a grid visualization, in which all network activity is displayed on a single grid; a volume visualization, which extends the grid to a three-dimensional volume; and a port visualization, which lets devs see the activity on specific ports over time. Using this tool, different types of attacks can be easily distinguished from each other.

Alerting tools
Alerting tools prioritize threats and send alerts so that the most hazardous vulnerabilities can be addressed immediately. WhiteSource Bolt, for instance, is a useful tool of this type, designed to improve the security of open source components. It does this by checking components against known security threats and providing security alerts to devs. These alerts also auto-generate issues within GitHub, where devs can see details such as references for the CVE, its CVSS rating, and a suggested fix, and there is even an option to assign the vulnerability to another team member using the milestones feature.

The Bottom Line

Combining DevOps and DevSec is not a meshing of two separate disciplines, but rather the natural transition of development to a more comprehensive approach that takes security into account earlier in the process, and does so in a more meaningful way. This saves a lot of time and hassle by addressing enterprise security requirements before deployment rather than probing for flaws later. The sooner your team hops on board with DevSecOps, the better.

Author Bio

Gary Stevens is a front-end developer. He's a full-time blockchain geek and a volunteer working for the Ethereum foundation, as well as an active GitHub contributor.

Is DevOps really that different from Agile? No, says Viktor Farcic [Podcast]
Does it make sense to talk about DevOps engineers or DevOps tools?
How Visual Studio Code can help bridge the gap between full-stack development and DevOps

How do AWS developers manage Web apps?

Guest Contributor
04 Jul 2019
6 min read
When it comes to hosting and building a website in the cloud, Amazon Web Services (AWS) is one of the most popular choices among developers. According to Canalys, AWS dominates the global public cloud market, holding around one-third of the total market share.

AWS offers numerous services for compute power, content delivery, database storage, and more. Developers can use it to build a high-availability production website, whether it is a WordPress site, a Node.js web app, a LAMP stack web app, a Drupal website, or a Python web app. AWS developers need to set up, maintain, and evolve the cloud infrastructure of web apps. They are also responsible for applying best practices related to security and scalability. Having said that, let's take a deep dive into how AWS developers manage a web application.

Deploying a website or web app with Amazon EC2

Amazon Elastic Compute Cloud (Amazon EC2) offers developers secure and scalable computing capacity in the cloud. For hosting a website or web app, developers use virtual app servers called instances. With Amazon EC2 instances, developers gain complete control over computing resources. They can scale capacity based on requirements and pay only for the resources they actually use. Tools like AWS Lambda, Elastic Beanstalk, and Lightsail allow the isolation of web apps from common failure cases. Amazon EC2 supports a number of mainstream operating systems, including Amazon Linux, Windows Server 2012, CentOS 6.5, and Debian 7.4.

Here is how developers get started with Amazon EC2 for deploying a website or web app:

1. Set up an AWS account and log into it.
2. Select "Launch Instance" from the Amazon EC2 Dashboard to begin creating a virtual machine.
3. Configure the instance by choosing an Amazon Machine Image (AMI), an instance type, and a security group.
4. Click on Launch.
5. Choose "Create a new key pair" and name it. A key pair file is downloaded automatically and needs to be saved; it will be required for logging in to the instance.
6. Click on "Launch Instances" to finish the setup process.

Once the instance is ready, it can be used to build a high-availability website or web app.

Using Amazon S3 for cloud storage

Amazon Simple Storage Service (Amazon S3) is a secure and highly scalable cloud storage solution that makes web-scale computing seamless for developers. It is used for the objects required to build a website, such as HTML pages, images, CSS files, videos, and JavaScript. S3 comes with a simple interface so that developers can store and fetch large amounts of data from anywhere on the internet, at any time. The storage infrastructure behind Amazon S3 is known for its scalability, reliability, and speed; Amazon itself uses it to host its own websites.

Within S3, developers create buckets for data storage. Each bucket can hold a large number of objects, and a single object can contain up to 5 TB of data. Objects are stored in and fetched from a bucket using a unique key. A bucket serves several purposes: it organizes the S3 namespace, identifies the account responsible for storage and data transfer, and acts as the unit of aggregation for usage reporting.
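Before moving on to load balancing, here is a hedged boto3 sketch of the launch-and-store flow just described. It assumes AWS credentials and a default region are already configured (for example, via `aws configure`); the AMI ID, key pair, security group, and bucket name are placeholders.

```python
"""Minimal boto3 sketch: launch one EC2 instance, then upload a file to S3.

Assumes credentials/region are configured. The AMI ID, key pair name,
security group ID, and bucket name below are all placeholders.
"""
import boto3

# Launch a single small instance from a chosen AMI.
ec2 = boto3.client("ec2")
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # placeholder AMI ID
    InstanceType="t2.micro",
    KeyName="my-key-pair",                # key pair created in the console
    SecurityGroupIds=["sg-0123456789abcdef0"],
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print("Launched instance:", instance_id)

# Store a static site asset in an existing S3 bucket.
s3 = boto3.client("s3")
s3.upload_file("index.html", "my-site-assets-bucket", "index.html")
```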
Elastic load balancing

Load balancing is a critical part of a website or web app, distributing the traffic load across multiple targets. AWS provides Elastic Load Balancing, which allows developers to distribute traffic across a number of targets, such as Amazon EC2 instances, IP addresses, Lambda functions, and containers. With Elastic Load Balancing, developers can ensure that their projects run efficiently even under heavy traffic.

There are three kinds of load balancers available with AWS Elastic Load Balancing: the Application Load Balancer, the Network Load Balancer, and the Classic Load Balancer. The Application Load Balancer is ideal for HTTP and HTTPS traffic, providing advanced request routing for the delivery of microservices and containers. For balancing Transmission Control Protocol (TCP), Transport Layer Security (TLS), and User Datagram Protocol (UDP) traffic, developers opt for the Network Load Balancer. The Classic Load Balancer is best suited for typical load distribution across EC2 instances, and works at both the request and the connection level.

Debugging and troubleshooting

A web app or website can include numerous features and components, and a few of them might face issues or not work as expected because of coding errors or other bugs. In such cases, AWS developers follow a number of processes and techniques, and consult resources that help them debug a recipe or troubleshoot the issue:

- See the service issue at Common Debugging and Troubleshooting Issues.
- Check the Debugging Recipes for issues related to recipes.
- Check the AWS OpsWorks Stack Forum, where other developers discuss their issues. The AWS team also monitors the forum and helps in finding solutions.
- Get in touch with the AWS OpsWorks Stacks support team to solve the issue.

Traffic monitoring and analysis

Analyzing and monitoring traffic and network logs helps in understanding how websites and web apps perform on the internet. AWS provides several tools for traffic monitoring, including Real-Time Web Analytics with Kinesis Data Analytics, Amazon Kinesis, Amazon Pinpoint, and Amazon Athena. For tracking website metrics, developers use Real-Time Web Analytics with Kinesis Data Analytics. This tool provides insight into visitor counts, page views, time spent by visitors, actions taken by visitors, channels driving the traffic, and more. Additionally, the tool comes with an optional dashboard that can be used to monitor web servers, where developers can see custom server metrics covering performance, average network packet processing, errors, and so on.

Wrapping up

Managing a web application is a tedious task and requires quality tools and technologies. Amazon Web Services makes things easier for web developers, providing them with all the tools required to handle the app.

Author Bio

Vaibhav Shah is the CEO of Techuz, a mobile app and web development company in India and the USA. He is a technology maven, a visionary who likes to explore innovative technologies, and has empowered 100+ businesses with sophisticated web solutions.

10+ reasons to love Raspberry Pi

Vincy Davis
26 Jun 2019
9 min read
It's 2019, and unless you've been living under a rock, you know what a Raspberry Pi is: a series of credit-card-sized board computers, initially developed to promote computer science in schools, whose Raspberry Pi 4 Model B hit the market yesterday.

Read More: Raspberry Pi 4 is up for sale at $35, with 64-bit ARM core, up to 4GB memory, full-throughput gigabit Ethernet and more!

Since its release in 2012, Raspberry Pi has gone through several iterations and variations. Today it has become a phenomenon: it's the world's third best-selling general-purpose computer. It's inside laptops, tablets, and robots. This year it is offering students and young people an opportunity to conduct scientific investigations in space, by writing computer programs that run on Raspberry Pi computers aboard the International Space Station. Developers around the world are using different models of this technology to implement varied applications.

What do you do with your Raspberry Pi?

Following the release of Raspberry Pi 4, an interesting HN thread on applications of the Raspberry Pi exploded with over a thousand comments and over 1.5k votes. The original poster asked, "I have Raspberry Pi and I mainly use it for VPN and piHole. I'm curious if you have one, have you found it useful? What do you do with your Raspberry Pi?" Below are some select use cases from the thread.

Innovative: Raspberry Pi Zero transformed a braille display into a full-feature Linux laptop

A braille user transformed a braille display into a full-feature Linux laptop using a Raspberry Pi Zero. The braille display featured a small compartment with micro-USB, and the user converted it into an ARM-based, monitorless Linux laptop with a keyboard and a braille display. It can be charged and powered via USB, so it can also run from a power bank or a solar charger, potentially running for days, rather than just hours, without needing a standard wall jack. This saved the user space, power, and weight.

Monitor climate change effects

Changes in climate affect every one of us in some way or another, and some developers are using Raspberry Pi innovatively to tackle them.

Monitoring in-house CO2 levels

A developer working with the IBM Watson Group states that he uses several Raspberry Pis to monitor CO2 levels in his house. Each Raspberry Pi has a CO2 sensor, with a Python script to retrieve data from the sensor and upload it to a server, which is also a Raspberry Pi. After detecting that his bedroom had a high level of CO2, he improved ventilation and reduced the CO2 levels.

Measuring conditions of coral reefs

Nemo Pi, a technology from the Nemo Foundation, works as an underwater weather station. It uses Raspberry Pi computers to protect coral reefs from climate change by measuring temperature, visibility, pH levels, and the concentration of CO2 and nitrogen oxide at each anchor point.

Checking weather updates remotely

You can also use the Raspberry Pi for weather monitoring, checking changes in the weather remotely from a smartphone. The main conditions monitored are temperature, humidity, and air quality. A Raspberry Pi 3 Model B can be programmed to take data from an Arduino and, depending on the data acquired, actuate the cameras. The Pi receives data from sensors and uploads it to the cloud so that appropriate action can be taken.
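Projects like these usually boil down to a simple read-and-upload loop. Here is a hedged sketch of one; the sensor-reading function and the server URL are placeholders, since a real script would call the driver for the specific sensor attached (for example, over I2C or serial).

```python
"""Sketch of a Pi sensor loop: read a CO2 value, POST it to a collection server.

read_co2_ppm() is a placeholder for a real sensor driver call, and the
server URL is hypothetical.
"""
import random
import time

import requests

SERVER_URL = "http://192.168.1.50:8000/readings"   # placeholder aggregation server

def read_co2_ppm() -> float:
    # Placeholder: substitute the real sensor driver call here.
    return 400 + random.random() * 200

while True:
    ppm = read_co2_ppm()
    try:
        requests.post(SERVER_URL, json={"co2_ppm": ppm, "ts": time.time()}, timeout=5)
    except requests.RequestException as exc:
        print("upload failed:", exc)       # keep sampling even if the server is down
    time.sleep(60)                         # one reading per minute
```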
Making Home Automation feasible

Raspberry Pi has been designed to let you create whatever you can dream of, and of course developers are making full use of it. There are many instances of developers using Raspberry Pi to make their home automation more feasible.

Automatic pet door drive

A developer has used this technology to install a fire-protection-approved door drive for their pets. It is used along with another Raspberry Pi, which analyzes a video stream and detects the pet. If the pet is in the frame for a set amount of time, a message is sent to the Pi connected to the door drive, which opens slightly to let the pet in.

Home automation

The Raspberry Pi 3 model works with Home Assistant and a Z-Wave USB dongle, providing climate, cover, light, lock, sensor, switch, and thermostat information. There are many takers for the RaZberry card, a tiny daughter card that sits on top of the Raspberry Pi GPIO connector. It is powered by the Raspberry Pi board with 3.3 V and communicates using UART TTL signals. It supports home automation and is compatible not only with all models of Raspberry Pi but also with third-party software.

Watering a plant via a Reddit bot!

There's another simple instance where a subreddit has control over the watering of a live plant. The Pi runs a Reddit bot that reads the votes and switches on the pump to water the plant (a sketch of this setup appears at the end of this section). It also collects data about sunlight, moisture, temperature, and humidity to help inform the decision about watering.

Build easy electronic projects

Raspberry Pi can be used to learn coding and to build electronics projects, and for many of the things your desktop PC does, like spreadsheets, word processing, and browsing the internet.

Make a presentation

Rob Reilly, an independent consultant, states that he uses a Raspberry Pi in his Steampunk conference badge while giving tech talks. He plugs it into the HDMI, powers up the badge, and runs slides with a nano keyboard/mousepad and LibreOffice. This works great for him, as the badge displays a promotional video on its 3.5" touchscreen and runs on a cell-phone power pack.

Control a 3D printer, a camera, or even IoT apps

One Raspberry Pi user states that he uses the Raspberry Pi 3 model to run OctoPrint, an open source web interface for 3D printers that allows him to control and monitor all aspects of the printer and its print jobs. A system architect says that he regularly uses Raspberry Pis for digital signage, controlled servos, and cameras. Currently, he also uses a Pi Zero W model for demo Azure IoT solutions. Raspberry Pi is also used as a networked LED marquee controller.

Read More: Raspberry Pi Zero W: What you need to know and why it's great

FullPageOS is a Raspberry Pi distribution that displays one webpage in full screen. It includes Chromium out of the box and the scripts necessary to load it at boot. Its repository contains the source script to generate the distribution out of an existing Raspbian distro image.

A developer who is also the former VP of Engineering at the Blekko Inc search engine states that he uses Raspberry Pi for several purposes, such as running the WaveForms Live software from Digilent hooked to an Analog Discovery on his workbench. He also uses a Raspberry Pi to drive a display showing a dashboard of various things like Nagios alerts and data trends.
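Returning to the subreddit-driven plant waterer mentioned above, here is a hedged sketch of how such a bot could be wired up. It assumes a pump relay on GPIO pin 17 and uses a Reddit submission's score as the vote tally; the credentials and submission ID are placeholders, and the script must run on a Pi for the GPIO calls to work.

```python
"""Sketch of a subreddit-driven plant waterer.

Assumes a pump relay on GPIO pin 17; the Reddit credentials and submission
ID are placeholders. Requires the praw and RPi.GPIO packages on a Pi.
"""
import time

import praw
import RPi.GPIO as GPIO

PUMP_PIN = 17
GPIO.setmode(GPIO.BCM)
GPIO.setup(PUMP_PIN, GPIO.OUT)

# Read-only Reddit access; fill in real app credentials.
reddit = praw.Reddit(client_id="...", client_secret="...", user_agent="plant-bot")
post = reddit.submission(id="abc123")      # placeholder submission ID

if post.score > 10:                        # enough votes to water the plant
    GPIO.output(PUMP_PIN, GPIO.HIGH)       # run the pump...
    time.sleep(5)                          # ...for five seconds
    GPIO.output(PUMP_PIN, GPIO.LOW)

GPIO.cleanup()
```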
Read More: Intelligent mobile projects with TensorFlow: Build a basic Raspberry Pi robot that listens, moves, sees, and speaks [Tutorial]

Enjoy Gaming with Raspberry Pi

There are many Raspberry Pi exclusive games available for its users. Minecraft PE is one such game, and it comes preinstalled with Raspbian. Most games designed to run natively on the Raspberry Pi are written in Python.

Raspberry Pi is being used to stream PlayStation backups over SMB by networking the onboard Ethernet port of the Pi to allow access to a Samba share service running on the Pi. It allows seamless playback of games with heavy full-motion video sequences. With additional support for XLink Kai to play LAN-enabled games over the Pi's WiFi connection, it enables a smooth, lag-free multiplayer experience on original hardware. A user on Hacker News comments that he uses RetroPie, which has a library of many interesting games.

Loved not only by developers, but also by the general public

These $35 masterpieces put big power in the hands of anyone with a little imagination and some spare electronics. With fast processing and good network connectivity, even beginners can use Raspberry Pi for practical purposes.

A college student on Hacker News claims that he uses a Raspberry Pi 3B+ model to automate his data entry job using Python and Selenium, a portable framework for testing web applications that provides a playback tool for authoring functional tests. Since the job is automated, it allows him to take long coffee breaks without worrying about it, even while travelling.

Kevin Smith, the co-founder of Vault, states that his office uses a Raspberry Pi and blockchain NFTs to control the coffee machine. An owner of the NFT, once authenticated, can select the coffee type on their phone, which then signals the Raspberry Pi to make that particular coffee by jumping the contacts that previously sat behind the machine's buttons.

Another interesting use of Raspberry Pi is by a user who combined real-time information from the local transit authority with the GPS-equipped buses to help those stranded at the bus station. A Raspberry Pi 3 model can also be installed in a Tesla, within the car's internal network, as a bastion box to run software that interacts with the car's entertainment system.

Read More: Build your first Raspberry Pi project

Last year, the Raspberry Pi Foundation launched a new device called the Raspberry Pi TV HAT, which lets you decode and stream live TV. It connects to the Raspberry Pi via a GPIO connector and has a port for a TV antenna connector. TensorFlow 1.9 also added official support for the Raspberry Pi, enabling users to try their hand at live machine learning projects.

There's no doubt that, with all its varied features and low-priced computers, developers and the general public have many opportunities to experiment with Raspberry Pi and get their work done. From students to international projects, Raspberry Pi is being put to confident use in many cases.

You can now install Windows 10 on a Raspberry Pi 3
Raspberry Pi opens its first offline store in England
Setting up a Raspberry Pi for a robot – Headless by Default [Tutorial]

3 cybersecurity lessons for e-commerce website administrators

Guest Contributor
25 Jun 2019
8 min read
In large part, the security of an ecommerce company is the responsibility of its technical support team and ecommerce software vendors. In reality, cybercriminals often exploit the security illiteracy of the staff to hit a company. Of the whole ecommerce team, web administrators are the most frequent targets of hacker attacks, as they control access to the admin panel with its wealth of sensitive data. Having broken into the admin panel, criminals can take over an online store, disrupt its operation, retrieve confidential customer data, steal credit card information, transfer payments to their own account, and do further harm to business owners and customers.

Online retailers contribute greatly to the security of their company when they educate web administrators about where security threats can come from and what measures they can take to prevent breaches. We have summarized some key lessons below. It's time for a quick cybersecurity class!

Lesson 1. Mind password policy

Starting with the basics of cybersecurity, we will proceed to more sophisticated rules in the lessons that follow. The importance of a secure password policy may seem obvious, yet it's still shocking how careless people can be when choosing a password. In ecommerce, web administrators set the credentials for accessing the admin panel, and they can "help" cybercriminals greatly if they neglect basic password rules.

Never use similar or alike passwords to log into different systems. In general, sticking to the same patterns when creating passwords (for example, using a date of birth) is risky. Typically, people have a number of personal profiles on social networks and email services. If they use identical passwords for all of them, cybercriminals can steal the credentials for just one social media profile to crack the others. If employees are that negligent about access to corporate systems, they endanger the security of the company.

Let's outline the worst-case scenario. Criminals take advantage of the leaked database of 167 million LinkedIn accounts to hack a large online store. As soon as they see the password of its web administrator (the employment information is stated in the profile, conveniently for hackers), they try the password on the admin panel. What luck! The way into this web store was all too easy.

Use strong and impersonalized passwords. We need to introduce the notion of doxing to fully explain the importance of this rule. Doxing is the process of collecting pieces of information from social accounts to ultimately create a virtual profile of a person. Cybercriminals use doxing to crack a password to an ecommerce platform by feeding the admin's personal information into the attempt. A strong password therefore shouldn't contain personal details (like dates, names, or age) and must consist of eight or more characters featuring a mix of letters, numbers, and unique symbols.
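As a small illustration of Lesson 1, here is a sketch of generating a strong, impersonal password with Python's standard secrets module, which draws from a cryptographically secure random source. The length and character pool are adjustable choices, not a universal standard.

```python
"""Sketch: generate a strong, impersonal admin password (Lesson 1).

Uses the standard-library secrets module; length and character pool are
adjustable choices, not a fixed standard.
"""
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Return a random password mixing letters, digits, and symbols."""
    pool = string.ascii_letters + string.digits + "!@#$%^&*()-_=+"
    return "".join(secrets.choice(pool) for _ in range(length))

print(generate_password())   # e.g. 'q7G!fL2m@PxR9_tZ' (output varies)
```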
Lesson 2. Watch out for phishing attacks

With the wealth of employment information people leave in social accounts, hackers hold all the cards for targeted, rather than bulk, phishing attacks. When planning a malicious attack on an ecommerce business, criminals can search for employee profiles, check their positions and responsibilities, and conclude what company information they have access to. In such an easy way, hackers get to know a web store administrator and follow up with a series of phishing attacks. Here are two possible attack scenarios.

When hackers target a personal computer. Having found the LinkedIn profile of a web administrator and obtained a personal email address, hackers can bombard the admin with disguised messages, for example, from a bank or the tax authorities. If the admin lets their guard down and clicks a malicious link, malware installs itself on their personal computer. Should they remotely log in to the admin panel, hackers steal their credentials and immediately set a new password. From this moment, the hackers control the web store.

Hackers can also go a different way. They target the web administrator's personal email with a phishing attack and succeed in taking it over. Let's say they have already found the URL of the admin panel by that time. All they have to do now is request a password change for the panel, click the confirmation link from the admin's email, and set a new password. In this scenario, the web administrator has made three security mistakes: using a personal email for work purposes, not changing the default admin URL, and taking the bait of a phishing email.

When hackers target a work computer. Here is how a cyberattack may unfold if a web administrator has been reckless enough to disclose a work email online. This time, hackers craft a targeted malicious email related to work activities. The admin might, say, get a legitimate-looking email from FedEx informing them about delivery problems. Not alarmed, they open the email, click the link to learn the details, and compromise the security of the web store by giving away the credentials to the admin panel.

The main mistake in dealing with phishing attacks is to expect a fraudulent email to look suspicious. Phishers falsify emails from real companies, so it can be easy to fall into the trap. Here are recommendations for ecommerce web administrators to follow:

- Don't use personal emails to log in to the admin panel.
- Don't make your work email publicly available.
- Don't use your work email for personal purposes (e.g., for registration on social networks).
- Watch out for links and downloads in emails. Always hover over a link before clicking it; in malicious emails, the destination URL doesn't match the expected destination website.
- Remember that legitimate companies never ask for your credentials, credit card details, or any other sensitive information in emails.
- Be wary of emails with urgent notifications and deadlines; hackers often allay suspicion by provoking anxiety and panic in their victims.
- Engage two-step verification for the ecommerce admin panel.
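The hover-before-you-click advice above can even be partially automated. Here is a hedged sketch, not a production filter, that flags anchor tags in an email's HTML whose visible text looks like a URL but points to a different domain than the actual href, a common phishing trick. It assumes the BeautifulSoup (bs4) library is available.

```python
"""Sketch: flag links whose displayed URL text disagrees with the real href.

Assumes the bs4 (BeautifulSoup) package; this toy check covers only one
phishing trick and is no substitute for a real anti-phishing filter.
"""
from urllib.parse import urlparse

from bs4 import BeautifulSoup

def suspicious_links(html: str):
    """Yield (shown_text, real_href) pairs where the domains disagree."""
    soup = BeautifulSoup(html, "html.parser")
    for a in soup.find_all("a", href=True):
        text = a.get_text(strip=True)
        if not text.startswith(("http://", "https://", "www.")):
            continue                      # only flag text that looks like a URL
        candidate = text if text.startswith("http") else "http://" + text
        shown = urlparse(candidate).netloc
        actual = urlparse(a["href"]).netloc
        if shown and actual and shown != actual:
            yield text, a["href"]

sample = '<a href="http://evil.example.net/login">https://www.fedex.com/track</a>'
for shown_text, real_href in suspicious_links(sample):
    print(f"Displayed {shown_text!r} but actually links to {real_href!r}")
```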
Lesson 3. Stay alert while communicating with a hosting provider

Web administrators of companies that have chosen a hosted ecommerce platform for their e-shop will need to contact the technical support of their hosting provider now and then. Here, a cybersecurity threat comes unexpectedly: if hackers have compromised the security of the web hosting company, they can target its clients (ecommerce websites) as well. Admins are in serious danger if the hosting company stores their credentials unencrypted, in which case hackers can get direct access to the admin panel of a web store. Otherwise, more sophisticated attacks are developed; cybercriminals can mislead web administrators by posing as tech support agents. When communicating with their hosting provider, web administrators should mind several rules to protect their confidential data and the web store from hacking.

Use a unique email and password to log in to your web hosting account. Using similar credentials for different work services or systems leads to a company security breach if the hosting company is hacked.

Never reveal any credentials at the request of tech support agents. Having shared their password to the admin panel, web administrators can no longer authenticate themselves by using it.

Track your company's communication with tech support. Web administrators can set email notifications to track team members' requests to tech support and control what information is shared.

Time for an exam

As a rule, ecommerce software vendors and retailers do their best for the security of ecommerce businesses. Software vendors take the major role in providing for the security of SaaS ecommerce solutions (like Shopify or Salesforce Commerce Cloud), including the security of servers, databases, and the application itself. With IaaS solutions (like Magento), retailers need to put more effort into maintaining the security of the environment and system, staying current on security updates, conducting regular audits, and more (you can see the full list of Magento security measures as an example). Still, cybercriminals often target company employees to hack an online store. Retailers are responsible for educating their team on which security rules are compulsory and how to identify malicious intent.

In this article, we have outlined the fundamental security lessons for web administrators to learn in order to protect a web store against illicit access. In short, they should be careful with the personal information they publish online (in their social media profiles) and use unique credentials for different services and systems. There are no grades in our lessons; rather, an admin's contribution to the security of their company is the measure of the knowledge they have gained.

About the Author

Tanya Yablonskaya is Ecommerce Industry Analyst at ScienceSoft, an IT consulting and software development company headquartered in McKinney, Texas. After 2+ years of exploring the cryptocurrency and blockchain sphere, she has shifted her focus to the ecommerce industry. Delving into this enormous world, Tanya covers the key challenges online retailers face and unveils a wealth of tools they can use to outpace competitors.

The US launched a cyber attack on Iran to disable its rocket launch systems; Iran calls it unsuccessful
All Docker versions are now vulnerable to a symlink race attack
12,000+ unsecured MongoDB databases deleted by Unistellar attackers

How not to get hacked by state-sponsored actors

Guest Contributor
19 Jun 2019
11 min read
News about Russian hackers creating chaos in the European Union or Chinese infiltration of a US company has almost become routine. In March 2019, a Russian hacking group was discovered operating on Czech soil by Czech intelligence agencies. Details are still unclear, but speculation suggests the group is part of a wider international network based in multiple EU countries and was operating under Russian diplomatic cover.

The cybercriminal underground is complex, multifaceted, and, by its nature, difficult to detect. On top of this, hackers are incentivized not to put their best foot forward in order to evade detection. One of the most common tactics is to disguise an attack so that it looks like the work of another group. These hackers also frequently prefer the most basic hacking software available, because it avoids the unique touches of more sophisticated software. Both of these practices make it harder to trace a hack back to its source.

Tracing high-level hacking is not impossible, however; there are clear signs investigators use to determine the origin of a hacking group. Different hacker groups have distinct motivations, codes of conduct, tactics, and payment methods. Though we will use Russian and Chinese hacking as our main examples, the tips we give can be applied to protecting yourself from any state-sponsored attack.

Chinese and Russian hacking: knowing the difference

Russian-speaking hacker forums are being exposed with increasing frequency, revealing not just the content shared in their underground network but the culture their members have built up. They first gained notoriety during the 90s, when massive economic changes saw the emergence of vast criminal networks, online and offline. These days, Russian hacks typically have two motivations: geopolitical and financial.

Geopolitical attacks are generally designed to create confusion. The role of Russian hackers in the 2016 US election was one of the most covered stories in international media. However, these attacks are most effective and most common in countries with weak government institutions, many of them former Soviet territories where Russia has a pre-existing geopolitical interest. For example, the Caucasus region and the Baltic states have long been targeted by state-sponsored hackers. The tactics of these "active measures" are multivariate and highly complex; hacking and other digital attacks are just one arm of this hybrid war.

However, the hacks that affect average web users the most tend to be financially motivated. Russian-language forums on the dark web have vast sections devoted to "carder" communities. Carder forums are where hackers go to buy and sell everything from identity details, credit card details, and data dumps to any other information that has been stolen. For hackers looking to make a quick buck, carder forums are bread and butter. These forums and subforums include detailed tutorials on how to spoof a credit card number. The easiest way to steal from unsuspecting people is to buy a fake card, but card scanners that steal a person's credit card number and credentials are becoming increasingly popular.

Unlike geopolitical hacks, financial attacks are not necessarily state-sponsored. Though individual Western hackers may be more skilled at infiltrating complex systems, Russian hackers have several distinct advantages.
Unlike in Western countries, Russian authorities tend to turn a blind eye to hacking that targets either Western countries or former Soviet states. This allows hackers to work together in groups, something they're discouraged from doing in countries that crack down on cyber attacks. As a result, Russian hackers can target more people at greater speed than individual bad actors working in other countries.

Why the Chinese do it

There are a number of distinct differences when it comes to Chinese hacking projects. The goal of state-sponsored Chinese attacks is to catch up to the US and European level of technological expertise in fields ranging from AI, biomedicine, and alternative energy to robotics and space technology. These goals were outlined in Xi Jinping's Made in China 2025 announcement. This means the main target for Chinese hackers is economic and intellectual property, whether corporate or government.

In the public sector, targeting US defense forces yields profitable designs for state-of-the-art technology. The F-22 and F-35, two fighter aircraft developed for the US military, were copied and produced almost identically by China's People's Liberation Army. In the private sector, Chinese agents target large-scale industries that use and develop innovative technology, like oil and gas companies. For example, a group might attack an oil firm to get details about exploration and steal geological assessments. This information would then be used to underbid their US competitor.

After a bilateral no-hacking agreement was signed by US and Chinese leaders in 2016, attacks dropped significantly. Since mid-2018, however, they have begun to increase again, and the impact of these new Chinese-sponsored cyber attacks has reached farther than initially expected. Chinese hacking groups aren't simply exploiting system vulnerabilities to steal corporate secrets: many top tech companies believed they were compromised by a possible supply chain attack that saw Chinese microchips secretly inserted into servers.

Though Chinese and Russian hackers may have different motivations, one thing is certain: they have the numbers on their side. So how can you protect yourself from these specific hacking schemes?

How to stay safe: tips for everyday online security

Cyber threats are a part of life connected to the internet. While there's not a lot you can do to stop someone else from launching an attack, there are steps you can take to protect yourself as much as possible. Of course, no method is 100% foolproof, but it's likely that you can be protected. Hackers look for vulnerabilities and flaws to exploit. Unless you are the sole gatekeeper of a top-secret and lucrative information package placed under heavy security, you may find yourself the target of a hacking scheme at some point or another. Nevertheless, if a hacker tries to infiltrate your network or device and finds it too difficult, they will probably move on to an easier target.

There are some easy steps you can take to bolster your safety online. This is not an exhaustive list; rather, it's a round-up of some of the best tools available to bolster your security and make yourself a difficult, and therefore unattractive, target.

Make use of security and scanning tools

The search tool Have I Been Pwned is a great resource for checking whether your accounts have been caught up in a recent data breach. You can enter your email address or a password to see whether either has been exposed. You can also set up notifications on your accounts or domains that will tell you immediately if they are caught in a data breach. This kind of software can be especially helpful for small business networks, which are more likely to find themselves on the receiving end of a Chinese hack. Hackers know that small businesses have fewer resources than large corporations, which can make attacks on them even more devastating.

Read Also: 'Have I Been Pwned' up for acquisition; Troy Hunt code names this campaign 'Project Svalbard'
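Have I Been Pwned also exposes its Pwned Passwords data through a documented range API that uses k-anonymity: only the first five hex characters of the password's SHA-1 hash ever leave your machine. Here is a minimal sketch of checking a password against it.

```python
"""Check a password against Have I Been Pwned's Pwned Passwords range API.

Only the first five hex characters of the SHA-1 hash are sent; the full
password never leaves your machine (the documented k-anonymity scheme).
"""
import hashlib

import requests

def pwned_count(password: str) -> int:
    """Return how many times the password appears in known breaches."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

print(pwned_count("password123"))   # a very large number: don't use this password
```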
Manage your passwords

One of the most common security mistakes is also one of the most dangerous: reusing passwords. You should use a unique, complicated password for each one of your accounts. The best way to manage a lot of complicated passwords is with a password manager. There are browser extensions, but they have an obvious drawback if you lose your device; it's best to use a separate application. Use a passphrase, rather than a password, to access your password manager. A passphrase is exactly what it sounds like: rather than trusting that hackers won't figure out a single word, using multiple words to create a full phrase is both easier to remember and harder to crack. If your device offers biometric access (like a fingerprint reader), switch it on. Many financial apps also offer an additional layer of biometric security before you send money.

Use a VPN

A VPN encrypts your traffic, making it unreadable to outsiders. It also spoofs your IP address, which conceals your true location. This prevents sensitive information from falling into the hands of unscrupulous users and prevents your location details from being used to identify you. Some premium VPNs integrate advanced security features into their applications; for example, malware blockers will protect your device from malware and spyware, and some also contain ad blockers.

Read Also: How to protect your VPN from Data Leaks

Keep in mind that free VPNs can themselves be a threat to your online privacy. In fact, some free VPNs have been used by the Chinese government to spy on their citizens. That's why you should only use a high-quality VPN like CyberGhost to protect yourself from hackers and online trackers. If you're looking for the fastest VPN on the market, ExpressVPN has consistently been the best competitor in speed tests. NordVPN is our pick for best overall VPN when compared on price, security, and speed. VPNs are an important tool for both individuals and businesses. Because Russian hackers prefer individual targets, using a VPN when dealing with anything sensitive, such as your bank, will help keep your money in your own account.

Learn to identify and deal with phishing

Phishing for passwords is one of the most common and most effective ways to extract sensitive information from a target. Russian hackers famously sabotaged Hillary Clinton's presidential campaign when they leaked emails from campaign manager John Podesta; thousands of emails on that server were stolen via a phishing scam.

Phishing scams are also an easy way for hackers to infiltrate companies. Employee names and email addresses are often easy to find online. Hackers then use those names for false email accounts, which trick coworkers into opening an email containing a malware file. The malware then opens a direct line into the company's system.
Crucially, phishing emails will ask for your passwords or sensitive information, something reputable companies would never do. One of the best ways to prevent a phishing attack is to properly train yourself, and everyone in your company, on how to detect a phishing email. Typically – but not always – phishing emails use badly translated English with grammatical errors. Logos and icons may also appear ill-defined. Another good practice is to simply hover your mouse over the sender's name, which will generally reveal the actual address. Check the sending domain and the spelling of the company name, as look-alike domains and subtle misspellings are both techniques hackers use to confuse unwitting employees. You can also use client-based anti-phishing software, like the tools from Avast or Kaspersky Lab, which will flag suspicious emails. VPNs with an anti-malware feature also offer reliable protection against phishing scams.

Read Also: Using machine learning for phishing domain detection [Tutorial]

Keep your apps and devices up-to-date

Hackers commonly take advantage of flaws in old systems. When an update is released, it usually fixes exactly these vulnerabilities, so make a habit of installing each update to keep your devices protected.

Disable Flash

Flash is a famously insecure piece of software that hackers can infiltrate easily. Most websites have moved away from Flash, but just to be sure, you should disable it in your browser. If you need it later, you can give Flash permission to run for a single video at a time.

What to do if you have been hacked

If you do get a notice that your accounts have been breached, don't panic. Follow the steps given below:

Notify your workplace
Notify your bank
Order credit reports to keep track of any activity
Get identity theft insurance
Place a credit freeze on your accounts or a fraud alert

Chinese and Russian hackers may seem impossible to avoid, but the truth is, we are probably not protecting ourselves as well as we should be. Though individuals are less likely to find themselves the target of Chinese hacks, most hackers are out for financial gain above all else. That makes it all the more crucial to protect our private data. The simple tips provided above are a great baseline to secure your devices and protect your privacy, whether you are guarding against state-sponsored hacking or individual actors.

Author Bio

Ariel Hochstadt is a successful international speaker and author of 3 published books on computers and the internet. He is an ex-Googler, where he was the Global Gmail Marketing Manager, and today he is the co-founder of vpnMentor and an advocate of online privacy. He is also very passionate about traveling around the world with his wife and three kids.

Google's Protect your Election program: Security policies to defend against state-sponsored phishing attacks, and influence campaigns
How to beat Cyber Interference in an Election process
The most asked questions on Big Data, Privacy, and Democracy in last month's international hearing by Canada Standing Committee

What is the future of on-demand e-commerce apps?

Guest Contributor
18 Jun 2019
6 min read
On-demand apps almost came as a movement in the digital world and transformed the way we avail services and ready-to-use business deliverables. E-commerce stores like Amazon and eBay were the first on-demand apps, and over time the business model penetrated other niches. Now, from booking a taxi ride online to ordering food delivery to booking accommodation in a distant city, on-demand apps are making space for every kind of customer interaction. As these on-demand apps gradually build the foundation for a fully-fledged on-demand economy, the future of e-commerce will depend on how new and cutting-edge features are introduced and how the user experience can be boosted with new UI and UX elements. But before taking a look into the future of on-demand e-commerce, it is essential to understand how on-demand apps have evolved in recent years.

Let us have a brief look at various facets of this ongoing evolution.

Mobile push for change: Mobile search has already surpassed desktop search in both volume and frequency. Moreover, mobile has become a lifestyle factor allowing instant access to services and content. It is a mobile device's round-the-clock connectivity and ease of keeping in constant touch that has made it key to the thriving on-demand economy.

Overwhelming social media penetration: The penetration of social media across all spheres of life has helped people stay connected while communicating about almost anything and everything, giving businesses an unprecedented opportunity to cater to customer demands.

Addressing value as well as convenience: With the proliferation of on-demand apps, we can see two broad categories of consumers: the value-oriented and the convenience-oriented. Besides giving priority to more value at a lower cost, on-demand apps now facilitate more convenient and timely delivery of products.

Frictionless business process: An easy and smooth purchase with the least friction in the business process has become a baseline demand for most consumers. A frictionless, smooth customer experience and delivery are the two most important criteria that on-demand apps fulfill.

How to cater to customers with on-demand e-commerce apps

If as a business you want to cater to your customers with on-demand apps, there are several ways you can do that. When providing customers more value is your priority, you can only ensure this with an easier, connected, and smooth e-shopping experience. Here are four specific ways you can cater to your customers with on-demand e-commerce apps:

By trying and testing various services, you can get a first-hand feel of how these services work. Next, evaluate what the services do best and what they don't. Then think about how you can deliver a better service for your customers.

To transform your existing business into an on-demand business, you can partner with a service provider who can ensure same-day delivery of your products to the customers. You can partner with services like Google Express, Instacart, Amazon, PostMates, Uber Rush, etc.

You can also utilize the BOPUS (buy online, pick up in store) model to cater to the many customers who find it helpful. Always make sure to minimize the time and trouble it takes customers to pick up products from your store.

Providing on-site installation of the product can also boost customer experience. You can partner with a service provider to install the product and guide the customers in its usage.
How on-demand apps are transforming the face of business

The on-demand economy is experiencing a never-before boom, and there is no shortage of examples of how it has transformed businesses. The emergence of Uber and Airbnb is an excellent example of how on-demand apps deliver popular services for everyday needs. Just as Uber transformed the way we think of transport, Airbnb transformed the way we book accommodation and hotels when we travel. Similarly, apps like Swiggy, Just Eat, and Uber Eats are continuing to change the way we order food from restaurants and food chains. The same business model is slowly penetrating other niches and products. From daily consumable goods to groceries, almost everything is now being delivered to our doorstep through on-demand apps.

Thanks to customer-centric UI and UX elements in mobile apps and an increasing number of businesses paving the way for unique and innovative shop fronts, personalization has become one of the biggest driving factors for on-demand mobile apps. Consumers have also gotten a taste of the personalized shopping experience, and they increasingly demand products, services, and shopping experiences that suit their specific needs and preferences. This is one area where on-demand apps within the same niche compete to deliver a better customer experience and win more business.

The Future of On-demand eCommerce Apps

The future of on-demand e-commerce apps will mainly revolve around new concepts and breakthrough ideas for providing customers more ease and convenience. From gesture-based checkout and payment processing to product search through images to video chat, a lot of breakthrough features will shape the future of on-demand e-commerce apps.

Conversational Marketing

Unlike the conventional marketing channels that follow a one-way directive, in the new era of on-demand e-commerce apps, conversational marketing will play a bigger role. From intelligent chatbots to real-time video chat, we have a lot of avenues for conversational marketing methods.

Image-Based Product Search

By integrating image search technology with e-commerce interfaces, customers can be given an easy and effortless way of searching for products online. They can take photos of nearby objects and search for those items across e-commerce stores.

Real-time Shopping Apps

What about getting access to products just when and where you need them? Such ease of shopping in real time may not be a distant thing of the future, thanks to real-time shopping apps. Just when you need a particular product, you can shop for it then and there and, based upon availability, the order can be accepted and delivered from the nearest store in time.

Gesture-Based Login

Biometrics is already part and parcel of a smart user experience, and gestures are already used in the latest mobile handsets for login and authentication. So the day is not far off when gestures will be used for customer login and authentication in e-commerce stores. This will make the entire shopping experience easier, effortless, and less time-consuming.

Conclusion

The future of on-demand e-commerce apps is bright. In the years to come, on-demand apps are going to become more mainstream and commonplace, transforming the business process and the way retailers across niches serve their customers.

Author Bio

Atman Rathod is the Co-founder at CMARIX TechnoLabs Pvt. Ltd. with 13+ years of experience.
He loves to write about technology, startups, entrepreneurship, and business. His creative abilities, academic track record, and leadership skills have made him a key industry influencer as well. You can find him on LinkedIn, Twitter, and Medium.

Declarative UI programming faceoff: Apple's SwiftUI vs Google's Flutter
What Elon Musk can teach us about Futurism & Technology Forecasting
12 Visual Studio Code extensions that Node.js developers will love [Sponsored by Microsoft]

The most asked questions on Big Data, Privacy and Democracy in last month’s international hearing by Canada Standing Committee

Savia Lobo
16 Jun 2019
16 min read
The Canadian Parliament's Standing Committee on Access to Information, Privacy and Ethics hosted the hearing of the International Grand Committee on Big Data, Privacy and Democracy from Monday, May 27 to Wednesday, May 29. Witnesses from at least 11 countries appeared before representatives to testify on how governments can protect democracy and citizen rights in the age of big data. This section of the hearing took place on May 28 and includes the following witnesses:

Jim Balsillie, Chair, Centre for International Governance Innovation; Retired Chairman and co-CEO of BlackBerry
Roger McNamee, Author of Zucked: Waking up to the Facebook Catastrophe
Shoshana Zuboff, Author of The Age of Surveillance Capitalism
Maria Ressa, CEO and Executive Editor, Rappler

Witnesses were asked various questions on data privacy, data regulation, the future of digital tech under the current data privacy model, and much more.

Why independent regulators are needed to oversee user data privacy

Damian Collins to McNamee: "In your book you said, as far as I can tell, Zuck has always believed that users value privacy more than they should. On that basis, do you think we will have to establish in law the standards we want to see enforced in terms of users' data privacy, with independent regulators to oversee them? Because the companies will never do that effectively themselves; they just don't share the concerns we have about how the systems are being abused."

Roger McNamee: "I believe that it's not only correct in terms of their philosophy, as Professor Zuboff points out, but it is also baked into their business model – this notion that any data that exists in the world, claimed or otherwise, they will claim for their own economic use and framing. How you do that privacy, I think, is extremely difficult, and in my opinion it would be best done by simply banning the behaviors that are used to gather the data."

Zuckerberg is more afraid of privacy regulation

Jo Stevens, Member of Parliament for Cardiff Central, asked McNamee, "What do you think Mark Zuckerberg is more frightened of: privacy regulation or antitrust action?"

McNamee replied that Zuckerberg is more afraid of privacy regulation. He further added, "To Lucas I would just say the hardest part of this is setting the standard of what the harm is. These guys have hidden behind the fact that it's very hard to quantify many of these things."

In the future, can our homes be without digital tech?

Michel Picard, Member of the Canadian House of Commons, asked Zuboff, "Your question at the beginning is, can the digital future be our home? My reaction to that was that, in fact, the question should be: in the future, can our home be without digital?"

Zuboff replied, "That's such an important distinction, because I don't think there's a single one of us in this room that is against the digital per se. This is not about being anti-technology; it's about technology being hijacked by a rogue economic logic that has turned it to its own purposes. We talked about the idea that conflating the digital with surveillance capitalism is a dangerous category error.
What we need is to be able to free the potential of the digital to get back to those values of democracy – the democratization of knowledge and individual emancipation and empowerment – that it was meant to serve and that it still can serve."

Picard further asked, "Compared to the Industrial Revolution, where, although we were scared of the new technology, that technology was addressed to people so that they would be beneficiaries of the progress, now we're not beneficiaries at all. In this second step of the revolution, people have become producers of the raw material. As you write, 'Google's invention reveals new capabilities to infer and deduce the thoughts, feelings, intentions, and interests of individuals and groups with an automated architecture that operates as a one-way mirror, irrespective of a person's awareness.' So it's like people connected to the machine in The Matrix."

Zuboff replied, "From the very beginning, the data scientists at Google who were inventing surveillance capitalism celebrated, in their written patents and in their published research, the fact that they could hunt and capture behavioral surplus without users ever being aware of these backstage operations. Surveillance was baked into the DNA of this economic logic, essential to its strange form of value creation. So it's with that kind of sobriety and gravitas that it is called surveillance capitalism, because without the surveillance piece it cannot exist."

Can big data simply pull out of jurisdictions in the absence of harmonized regulation across democracies?

Peter Kent, Member of Parliament for Thornhill, asked Balsillie, "With regard to what Google has said in response to the new federal elections legislation on advertising – that it will simply withdraw from accepting advertising – is it possible that big data could simply pull out of jurisdictions in the absence of harmonized regulation across the democracies?"

To this, Balsillie replied, "Well, that's the best news possible, because, as everyone has attested here, the purpose of surveillance capitalism is to undermine personal autonomy, and yet elections and democracy are centered on the sovereign self exercising their sovereign will. Now, why in the world would you want to undermine the core bedrock of elections, in a non-transparent fashion, to the highest bidder, at the very time your whole citizenry is on the line? In fact, the revenue is immaterial to these companies. So one of my recommendations is just banning personalized online ads during elections. We have a lot of things you're not allowed to do for six or eight weeks; just put that into the package. It's simple and straightforward."

McNamee further added, "A point that I think is being overlooked here, which is really important, is that if these companies disappeared tomorrow, the services they offer would not disappear from the marketplace. In a matter of weeks you could replicate Facebook, which would be the harder one. There are substitutes for everything that Google does that are done without surveillance capitalism. Do not in your mind allow any kind of connection between the services you like and the business model of surveillance capitalism.
There is no inherent link, none at all. This is something that has been created by these people because it's wildly more profitable."

Committee lends a helping hand as an 'act of solidarity' with press freedom

Charlie Angus, a member of the Canadian House of Commons, said, "Facebook and YouTube transformed the power of indigenous communities to speak to each other, to start to change the dynamic of how white society spoke about them. So I understand their incredible power for good. But I see more and more of this in my region, which has self-radicalized people like the flat-earthers, anti-vaxxers, and 9/11 truthers, and I've seen its effect in our elections through the manipulation of anti-immigrant, anti-Muslim materials. People are dying in Asia as a consequence of these platforms. I want to ask you: as an act of solidarity from our Parliament, from our legislators, are there statements that should be made public through our Parliament to give you support, so that we can maintain a link with you as an important ally on the front line?"

Ressa replied, "Canada has been at the forefront of holding fast to the values of human rights and press freedom. I think the more we speak about this, the more those values are reiterated, especially since someone like President Trump truly likes President Duterte, and vice versa; it's very personal. But sir, when you talk about where people are dying, you've seen this all over Asia: there's Myanmar, there is the drug war here in the Philippines, India and Pakistan – all instances where this tool for empowerment, just like in your district, is something that we do not want to go away, not shut down. Despite the great threats that we face, that I face and my company faces, Facebook and the social media platforms still give us the ability to organize, to create communities of action that had not been there before."

Do fear, outrage, hate speech, and conspiracy theories sell more than truths?

Edwin Tong, a member of the Singapore parliament, asked McNamee about the point McNamee made during his presentation that "the business model of these platforms really is focused on algorithms that drive content to people who think they want to see this content." Tong continued, "You also mentioned that fear, outrage, hate speech, and conspiracy theories are what sell more, and I assume what you mean to say by that is that they sell more than truths. Would that be right?"

McNamee replied, "There was a study done at MIT in Cambridge, Massachusetts, that suggested disinformation spreads 70% further and six times faster than fact, and there are actually good human explanations for why hate speech and conspiracy theories move so rapidly: it's about triggering the fight-or-flight reflex."

Tong further highlighted what Ressa said about how disinformation is spread through the use of bots: "I think she said 26 fake accounts translate into 3 million different accounts which spread the information. I think we are facing a situation where disinformation, if not properly checked, gets exponentially viral. People see it all the time, and over time, unchecked, this leads to a serious erosion of trust and a serious undermining of institutions; we can't trust elections, and fundamentally democracy becomes marginalized and eventually demolished."

To this, McNamee said, "I agree with that statement completely. To me, the challenge is in how you manage it. Censorship and moderation were never designed to handle things at the scale that these Internet platforms operate at.
So in my view, the better strategy is to do the interdiction upstream: to ask the fundamental questions of what the role of platforms like this in society should be and, secondly, what business model should be associated with them. My partner Renée DiResta, who's a researcher in this area, talks about the issue of freedom of speech versus freedom of reach, the latter being the amplification mechanism. What's really going on on these platforms is that the algorithms find what people engage with and amplify it more, and sadly hate speech, disinformation, and conspiracy theories are, as I said, the catnip: that's what really gets the algorithms humming and gets people to react. In that context, eliminating that amplification is essential, and the question is how you're going to go about doing that and how you're going to verify that it's been done. In my mind, the simplest way to do that is to prevent the data from getting in there in the first place."

Tong further said, "I think you must go upstream to deal with it fundamentally in terms of infrastructure, and I think some witnesses also mentioned that we need to look at education, which I totally agree with. But when it does happen, when you have that proliferation of false information, there must be a downstream, end-result kind of response as well, and that's where I think your example of Sri Lanka is very pertinent, because it demonstrates that platforms left unchecked, doing nothing about false information, are in the wrong. What we do need is for regulators and governments to be clothed with powers and levers to intervene, to intervene swiftly, and to disrupt the viral spread of online falsehoods very quickly. Would you agree, as a generalization?"

McNamee said, "I would not ordinarily be in favor of the level of government intervention I have recommended here, but I simply don't see alternatives at the moment. In order to do what Shoshana has talked about, in order to do what Jim is talking about, you have to have some leverage, and the only leverage governments have today is their ability to shut these things down; nothing else works quickly enough."

Sun Xueling, another member of the Parliament of Singapore, asked McNamee, "I would like to make reference to the Christchurch shooting of the 15th of March 2019, after which the New York Times published an article by Kevin Roose." She quoted what Roose wrote in that article: "We do know that the design of Internet platforms can create and reinforce extremist beliefs.
Their recommendation algorithms often steer users towards edgier content, a loop that results in more time spent on the app, and more advertising revenue for the company."

McNamee said, "Not only do I agree with that, I would like to make a really important point, which is that the design of the Internet itself is part of the problem. I'm of the generation, as Jim is as well, that was around when the internet was originally conceived and designed, and the notion in those days was that people could be trusted with anonymity. That was a mistake, because bad actors use anonymity to do bad things. The Internet has essentially enabled disaffected people to find each other in a way they never could in the real world and to organize in ways they could not in the real world. So when we're looking at Christchurch, we have to recognize that this was a symphonic work: this man went in and organized at least a thousand co-conspirators prior to the act, using the anonymous functions of the internet to gather them and prepare for this act. It was then, and only then, after all that groundwork had been laid, that the amplification processes of the system went to work. But keep in mind, those same people kept reposting the film; it is still up there today."

How can one eliminate the tax deductibility of specific categories of online ads?

Jens Zimmermann, from Germany, asked Jim Balsillie to explain a bit more deeply "the question of taxation", which Balsillie had mentioned in one of his six recommendations.

To this, Balsillie said, "I'm talking about those that are buying the ads. The core problem here is that when you're ad-driven – you've heard extremely expert testimony on this – you'll do whatever it takes to get more eyeballs, and the subscription-based model is a much safer place to be because it's not attention-driven. One of the purposes of tax is to manage externalities: if you don't like the externalities that we're grappling with, which have been illuminated here, then disadvantage them. Many of these platforms are moving towards subscription-based models anyway. So just use tax as a vehicle to do that, and the benefit is that it gives you revenue. The second thing it could do is begin a shift towards more domestic services. I think tax has not been a lever that's been used, and it's right there for you."

Thinking beyond behavioral manipulation and data surveillance-driven business models

Keit Pentus, the representative from Estonia, asked McNamee, "If you were sitting in my chair today, what would be the three steps you would recommend, or you would take, if we leave shutting down the platforms aside for a second?"

McNamee said, "In the United States, or in North America, roughly 70% of all the artificial intelligence professionals are working at Google, Facebook, Microsoft, or Amazon, and to a first approximation they're all working on behavioral manipulation. There are at least a million great applications of artificial intelligence, and behavioral manipulation is not one of them. I would argue that it's like creating time-release anthrax or cloning human babies. It's just a completely inappropriate and morally repugnant idea, and yet that is what these people are doing.
I would simply observe that it is the threat of shutting them down, and the willingness to do it for brief periods of time, that creates the leverage to do what I really want to do, which is to eliminate the business model of behavioral manipulation and data surveillance."

"I don't think this is about putting the toothpaste back into the tube; this is about formulating toothpaste that doesn't poison people. I believe this is directly analogous to what happened with the chemical industry in the 50s. The chemical industry used to pour its waste products, mercury, chromium, and things like that, directly into fresh water, and left mine tailings on the sides of hills. Petrol stations would pour spent oil into sewers, and there were no consequences. So the chemical industry grew like crazy and had incredibly high margins. It was the internet platform industry of its era. And then one day society woke up and realized that those companies should be responsible for the externalities that they were creating. So this is not about stopping progress. This is my world; this is what I do."

"I just think we should stop hurting people. We should stop killing people in Myanmar, we should stop killing people in the Philippines, and we should stop destroying democracy everywhere else. We can do way better than that, and it's all about the business model. I don't want to pretend I have all the solutions; what I know is that the people in this room are part of the solution, and our job is to help you get there. So don't view anything I say as a fixed point of view."

"This is something that we're going to work on together, and you know the three of us are happy to take bullets for all of you, because we recognize it's not easy to be a public servant with these issues out there. But do not forget: you're not going to be asking your constituents to give up the stuff they love. The stuff they love existed before this business model, and it'll exist again after this business model."

To know more and listen to the other questions asked by the representatives, you can watch the full hearing video titled "Meeting No. 152 ETHI - Standing Committee on Access to Information, Privacy and Ethics" on ParlVU.

Speech2Face: A neural network that "imagines" faces from hearing voices. Is it too soon to worry about ethnic profiling?
UK lawmakers to social media: "You're accessories to radicalization, accessories to crimes", hearing on spread of extremist content
Key Takeaways from Sundar Pichai's Congress hearing over user data, political bias, and Project Dragonfly

Maria Ressa on Astroturfs that turn make-believe lies into facts

Savia Lobo
15 Jun 2019
4 min read
The Canadian Parliament's Standing Committee on Access to Information, Privacy and Ethics hosted the hearing of the International Grand Committee on Big Data, Privacy and Democracy from Monday, May 27 to Wednesday, May 29. Witnesses from at least 11 countries appeared before representatives to testify on how governments can protect democracy and citizen rights in the age of big data. This section of the hearing, which took place on May 28, features Maria Ressa, CEO and Executive Editor of Rappler, who talks about how information is power, and how those who can mold it into make-believe lies can turn those lies into facts.

In her previous presentation, Maria gave a glimpse of this argument when she said, "Information is power and if you can make people believe lies, then you can control them. Information can be used for commercial benefits as well as a means to gain geopolitical power."

She resumed by saying that the Philippines is a cautionary tale for you: an example of how quickly democracy crumbles and is eroded from within, and how these information operations can take over the entire ecosystem and transform lies into facts. If you can make people believe lies are facts, then you can control them. "Without facts, you don't have the truth; without truth, you don't have trust," she says. Journalists have long been the gatekeepers for facts. When we come under attack, democracy is under attack, and when that happens, the voice with the loudest megaphone wins.

She says that the Philippines is a petri dish for social media. As of January 2019, Hootsuite reported that Filipinos spend the most time online and the most time on social media globally. Facebook is our internet. It is like introducing a virus into our information ecosystem: over time, that virus of lies masquerading as fact takes over the body politic, and you need to develop a vaccine. That is what we are in search of, and she says she does see a solution.

"If social networks are your family and friends in the physical world, social media is your family and friends on steroids; no boundaries of time and space."

She showed that astroturfing is typically a three-pronged attack, and she demonstrated examples of how she herself was subjected to an astroturfing attack. In the long term, the answer is education; in the medium term, as the other three witnesses before her noted, media literacy. In the short term, however, only the social media platforms can do something immediately. "We're on the front lines; we need immediate help and an immediate solution."

She said her company, Rappler, is one of three fact-checking partners of Facebook in the Philippines, and they take that responsibility seriously. "We don't look at the content alone. Once we check to make sure that it is a lie, we look at the network that spreads the lies." The first step, she says, is to stop new viruses from entering the ecosystem. It is whack-a-mole if one only looks at the content; but when you begin to look at the networks that spread it, you have something that you can pull out.

"It's very difficult to go through 90 hate messages per hour sustained over days and months," she said. That is what they are going through: the kind of astroturfing that turns lies into truth. For them, this is a matter of survival.

To know more and listen to the other questions asked by the representatives, you can watch the full hearing video titled "Meeting No. 152 ETHI - Standing Committee on Access to Information, Privacy and Ethics" on ParlVU.

'Facial Recognition technology is faulty, racist, biased, abusive to civil rights; act now to restrict misuse' say experts to House Oversight and Reform Committee
Zuckerberg just became the target of the world's first high profile white hat deepfake op. Can Facebook come out unscathed?
Facebook bans six toxic extremist accounts and a conspiracy theory organization