How-To Tutorials - Cybersecurity

90 Articles

Bloomberg's Big Hack Exposé says China had microchips on servers for covert surveillance of Big Tech and Big Brother; Big Tech deny supply chain compromise

Savia Lobo
05 Oct 2018
4 min read
According to an in-depth report published by Bloomberg yesterday, Chinese spies secretly inserted microchips into servers used by Apple, Amazon, the US Department of Defense, the Central Intelligence Agency, and the Navy, among others.

What Bloomberg's Big Hack Exposé revealed

The tiny chips were made to be undetectable without specialist equipment and were implanted onto the motherboards of servers on the production line in China. These servers were allegedly assembled by Super Micro Computer Inc., a San Jose-based company that is one of the world's biggest suppliers of server motherboards. Supermicro's customers include Elemental Technologies, a streaming services startup that was acquired by Amazon in 2015 and provided the foundation for the expansion of the Amazon Prime Video platform. According to the report, the Chinese People's Liberation Army (PLA) planted the illicit chips on hardware during the manufacturing of server systems in factories.

How did Amazon detect these microchips?

In late 2015, Elemental's staff boxed up several servers and sent them to Ontario, Canada, for a third-party security company to test. Nested on the servers' motherboards, the testers found a tiny microchip, not much bigger than a grain of rice, that wasn't part of the boards' original design. Following this, Amazon reported the discovery to U.S. authorities, which shocked the intelligence community, because Elemental's servers are ubiquitous across key US government agencies, including Department of Defense data centers, the CIA's drone operations, and the onboard networks of Navy warships. And Elemental was just one of hundreds of Supermicro customers. According to the Bloomberg report, the chips were built to be as inconspicuous as possible and to mimic signal conditioning couplers. An investigation, which took three years to conclude, determined that the chips "allowed the attackers to create a stealth doorway into any network that included the altered machines."

The report claims Amazon became aware of the attack during its moves to purchase the streaming video compression firm Elemental Technologies in 2015. Elemental's services appear to have been an ideal target for Chinese state-sponsored attackers to conduct covert surveillance. According to Bloomberg, Apple was also a victim of the apparent breach: Bloomberg says that Apple found the malicious chips in 2015, subsequently cutting ties with Supermicro in 2016.

Amazon, Apple, and Supermicro deny supply chain compromise

Amazon and Apple have both strongly denied the results of the investigation. Amazon said, "It's untrue that AWS knew about a supply chain compromise, an issue with malicious chips, or hardware modifications when acquiring Elemental. It's also untrue that AWS knew about servers containing malicious chips or modifications in data centers based in China, or that AWS worked with the FBI to investigate or provide data about malicious hardware." Apple affirms that internal investigations were conducted based on Bloomberg's queries and that no evidence was found to support the accusations. The only infected driver was discovered in 2016 on a single Supermicro server found in Apple Labs. It was this incident that may have led to the severed business relationship in 2016, rather than the discovery of malicious chips or a widespread supply chain attack. Supermicro says it was not aware of any investigation regarding the topic, nor was it contacted by any government agency in this regard.

Bloomberg says the denials are in direct contrast to the testimony of six current and former national security officials, as well as confirmation by 17 anonymous sources that the description of the Supermicro compromise was accurate. Bloomberg's investigation has not been confirmed on the record by the FBI. To know about this news in detail, visit Bloomberg News.

"Intel ME has a Manufacturing Mode vulnerability, and even giant manufacturers like Apple are not immune," say researchers
Amazon increases the minimum wage of all employees in the US and UK
Bloomberg says Google, Mastercard covertly track customers' offline retail habits via a secret million dollar ad deal


RedHat shares what to expect from next week’s first-ever DNSSEC root key rollover

Melisha Dsouza
05 Oct 2018
4 min read
On Thursday, October 11, 2018, at 16:00 UTC, ICANN will change the cryptographic key that sits at the center of the DNS security system known as DNSSEC. The current key has been in place since July 15, 2010. This switching of the central security key for DNS is called the "Root Key Signing Key (KSK) Rollover". The replacement was originally planned for a year earlier, but the procedure was postponed because data suggested that a significant number of resolvers were not ready for the rollover. The question now is: are your systems prepared so that DNS will keep functioning for your networks?

DNSSEC is a system of digital signatures designed to prevent DNS spoofing. Maintaining an up-to-date KSK is essential to ensuring that DNSSEC-validating DNS resolvers continue to function after the rollover. If the KSK isn't up to date, DNSSEC-validating resolvers will be unable to resolve any DNS queries. If the switch happens smoothly, users will not notice any visible changes and their systems will work as usual. However, if their DNS resolvers are not ready to use the new key, users may not be able to reach many websites! That being said, there's good news for those who have been keeping up with recent DNS updates. Since this rollover was delayed for almost a year, most DNS resolver software has been shipping with the new key for quite some time. Users who have stayed up to date will have their systems all set for this change. If not, we've got you covered! But first, let's do a quick background check.

As of today, the root zone is still signed by the old KSK with key ID 19036, also called KSK-2010. The new KSK, with key ID 20326, was published in the DNS in July 2017 and is called KSK-2017. KSK-2010 is currently used to sign itself, to sign the Root Zone Signing Key (ZSK), and to sign the new KSK-2017. The rollover only affects the KSK; the ZSK gets updated and replaced more frequently without the need to update the trust anchor inside the resolver. [Image source: Suse.com]

You can go ahead and watch ICANN's short video to understand all about the switch. Now that you understand the switch, let's go through the checks you need to perform.

#1 Check if you have the new DNSSEC root key installed

Users need to ensure they have the new DNSSEC root key (KSK-2017), which has key ID 20326. To check this, look at the following files:

- bind: /etc/named.root.key
- unbound / libunbound: /var/lib/unbound/root.key
- dnsmasq: /usr/share/dnsmasq/trust-anchors.conf
- knot-resolver: /etc/knot-resolver/root.keys

There is no need to restart the DNS service unless the server has been running for more than a year and does not support RFC 5011.

#2 A backup plan in case of unexpected problems

The RedHat team assures users that the probability of DNS issues is very low. In case they do encounter a problem, they are advised to restart their DNS server and can try querying the root zone's keys with the command dig +dnssec DNSKEY . (the trailing dot denotes the root zone). If all else fails, they should temporarily switch to a public DNS operator.

#3 Check for an incomplete RFC 5011 process

When the switch happens, containers or virtual machines that have old configuration files and new software would only have the soon-to-be-removed DNSSEC root key. This is reported via RFC 8145 to the root name servers, which provides ICANN with the statistics mentioned above. The server then updates the key via RFC 5011, but it is possible that it shuts down before the 30-day hold timer has been reached. It can also happen that the hold timer is reached but the configuration file cannot be written to, or that the file is written but the image is destroyed on shutdown/reboot, so the container restarts without the new key.

RedHat believes the switch of this cryptographic key will lead to an improved level of security and trust that users can have online! To know more about this rollover, head to RedHat's official blog.

What Google, RedHat, Oracle, and others announced at KubeCon + CloudNativeCon 2018
Google, IBM, RedHat and others launch Istio 1.0 service mesh for microservices
Red Hat infrastructure migration solution for proprietary and siloed infrastructure
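If you would rather script check #1 than open each file by hand, here is a minimal sketch in Python. It assumes the trust-anchor paths listed above and that key ID 20326 appears literally in the file (it does in unbound's auto-trust-anchor format); a more robust check would compute the key tag from the DNSKEY record itself.

```python
#!/usr/bin/env python3
"""Minimal sketch: check whether the new root KSK (KSK-2017, key ID 20326)
appears in the local trust-anchor files listed in the article. Assumes the
key ID is present as a literal string in the file; adjust paths per distro."""

from pathlib import Path

TRUST_ANCHOR_FILES = {
    "bind": "/etc/named.root.key",
    "unbound / libunbound": "/var/lib/unbound/root.key",
    "dnsmasq": "/usr/share/dnsmasq/trust-anchors.conf",
    "knot-resolver": "/etc/knot-resolver/root.keys",
}

NEW_KEY_ID = "20326"  # KSK-2017
OLD_KEY_ID = "19036"  # KSK-2010

for resolver, path in TRUST_ANCHOR_FILES.items():
    anchor = Path(path)
    if not anchor.exists():
        continue  # that resolver is not installed here
    text = anchor.read_text(errors="ignore")
    ready = NEW_KEY_ID in text
    note = " (KSK-2010 still present)" if OLD_KEY_ID in text else ""
    status = "ready, KSK-2017 found" if ready else "KSK-2017 NOT found"
    print(f"{resolver}: {status}{note}")
```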


“Intel ME has a Manufacturing Mode vulnerability, and even giant manufacturers like Apple are not immune,” say researchers

Savia Lobo
03 Oct 2018
4 min read
Yesterday, a group of European information security researchers announced that they have discovered a vulnerability in Intel's Management Engine (Intel ME). They say that the root of the problem is an undocumented Intel ME mode known as Manufacturing Mode. Undocumented commands enable overwriting SPI flash memory and implementing the doomsday scenario: the vulnerability allows local exploitation of a known ME vulnerability (INTEL-SA-00086).

What is Manufacturing Mode?

Intel ME Manufacturing Mode is intended for configuration and testing of the end platform during manufacturing. However, this mode and its potential risks are not described anywhere in Intel's public documentation. Ordinary users do not have the ability to disable this mode, since the relevant utility (part of Intel ME System Tools) is not officially available. As a result, there is no software that can protect, or even notify, the user if this mode is enabled.

This mode allows configuring critical platform settings stored in one-time-programmable memory (FUSEs). These settings include those for BootGuard (the mode, policy, and hash for the digital signing key for the ACM and UEFI modules). Some of them are referred to as FPFs (Field Programmable Fuses). [Figure: an output of the -FPFs option in FPT]

In addition to FPFs, in Manufacturing Mode the hardware manufacturer can specify settings for Intel ME, which are stored in the Intel ME internal file system (MFS) on SPI flash memory. These parameters can be changed by reprogramming the SPI flash. The parameters are known as CVARs (Configurable NVARs, Named Variables). CVARs, just like FPFs, can be set and read via FPT.

Manufacturing Mode vulnerability in Intel chips within Apple laptops

The researchers analyzed several platforms from a number of manufacturers, including Lenovo and Apple MacBook Pro laptops. The Lenovo models did not have any issues related to Manufacturing Mode. However, the researchers found that the Intel chipsets within the Apple laptops were running in Manufacturing Mode and were affected by the vulnerability CVE-2018-4251. This information was reported to Apple, and the vulnerability was patched in June in the macOS High Sierra 10.13.5 update.

By exploiting CVE-2018-4251, an attacker could write old versions of Intel ME (such as versions containing the INTEL-SA-00086 vulnerability) to memory without needing an SPI programmer and without physical access to the computer. Thus, a local vector is possible for exploitation of INTEL-SA-00086, which enables running arbitrary code in ME. The researchers also note that, in the INTEL-SA-00086 security bulletin, Intel does not mention an enabled Manufacturing Mode as a method for local exploitation in the absence of physical access. Instead, the company incorrectly claims that local exploitation is possible only if access settings for SPI regions have been misconfigured.

How can users save themselves from this vulnerability?

To keep users safe, the researchers decided to describe how to check the status of Manufacturing Mode and how to disable it. Intel ME System Tools includes the MEInfo utility, which provides thorough diagnostic information about the current state of ME and the platform overall. The researchers demonstrated this utility in their previous research about the undocumented HAP (High Assurance Platform) mode and showed how to disable ME. The utility, when called with the -FWSTS flag, displays a detailed description of the HECI status registers and the current status of Manufacturing Mode (when the fourth bit of the FWSTS status register is set, Manufacturing Mode is active). [Figure: example of MEInfo output]

They also created a program for checking the status of Manufacturing Mode if the user, for whatever reason, does not have access to Intel ME System Tools. [Figure: what the mmdetect script shows on affected systems]

To disable Manufacturing Mode, FPT has a special option (-CLOSEMNF) that allows setting the recommended access rights for SPI flash regions in the descriptor. [Figure: the process of closing Manufacturing Mode with FPT via -CLOSEMNF]

Thus, the researchers demonstrated that Intel ME has a Manufacturing Mode problem. Even major commercial manufacturers such as Apple are not immune to configuration mistakes on Intel platforms. Also, there is no public information on the topic, leaving end users in the dark about weaknesses that could result in data theft, persistent irremovable rootkits, and even 'bricking' of hardware. To know about this vulnerability in detail, visit Positive Research's blog.

Meet 'Foreshadow': The L1 Terminal Fault in Intel's chips
SpectreRSB targets CPU return stack buffer, found on Intel, AMD, and ARM chipsets
Intel faces backlash on Microcode Patches after it prohibited Benchmarking or Comparison
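As a rough illustration of the FWSTS check described above, here is a minimal Python sketch. It assumes you have already obtained a raw FWSTS register value (for example, by parsing MEInfo -FWSTS output); the bit position for the Manufacturing Mode flag is taken from the article's "fourth bit" wording and left as a parameter, since bit-numbering conventions differ.

```python
def manufacturing_mode_enabled(fwsts: int, bit: int = 4) -> bool:
    """Return True if the Manufacturing Mode flag appears set in an FWSTS
    status-register value.

    `fwsts` is the raw register value (e.g. parsed from MEInfo -FWSTS output).
    The article says Manufacturing Mode corresponds to "the fourth bit" of the
    register; the bit index is a parameter because zero- vs one-based counting
    changes which bit that is.
    """
    return bool(fwsts & (1 << bit))


if __name__ == "__main__":
    # Hypothetical register value, purely for illustration.
    sample_fwsts = int("0x94000255", 16)
    print("Manufacturing Mode bit set:", manufacturing_mode_enabled(sample_fwsts))
```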


Tim Berners-Lee plans to decentralize the web with ‘Solid’, an open-source project for “personal empowerment through data”

Natasha Mathur
01 Oct 2018
4 min read
Tim Berners-Lee, the creator of the World Wide Web, revealed his plans for changing the web for the better as he launched his new startup, Inrupt, last Friday. His major goal with Inrupt is to decentralize the web and get rid of the tech giants' monopolies (Facebook, Google, Amazon, etc.) on user data. He hopes to achieve this with Inrupt's new open-source project, Solid, a platform built using existing web formats. Tim has been working on Solid over recent years in collaboration with people at MIT.

"I've always believed the web is for everyone. That's why I and others fight fiercely to protect it. The changes we've managed to bring have created a better and more connected world. But for all the good we've achieved, the web has evolved into an engine of inequity and division; swayed by powerful forces who use it for their own agendas", said Tim in his announcement blog post.

Solid to provide users with more control over their data

Solid aims to completely change the current web model, in which users hand over their personal data to digital giants "in exchange for perceived value". "Solid is how we evolve the web in order to restore balance - by giving every one of us complete control over data, personal or not, in a revolutionary way," says Tim. Solid offers every user a choice regarding where data gets stored, which specific people and groups can access selected elements of that data, and which apps are used. It will enable you, your family, and colleagues to link and share data with anyone. With Solid, people can also look at the same data with different apps at the same time.

Solid technology is built on existing web formats, and developers will have to integrate Solid into their apps and sites. Solid hopes to empower individuals, developers, and businesses across the globe with totally new ways to build and find innovative and trusted applications. "I see multiple market possibilities, including Solid apps and Solid data storage", adds Tim.

According to Katrina Brooker, writer at FastCompany, "every bit of data you create or add on Solid exists within a Solid pod. Solid Pod is a personal online data store. These pods provide Solid users with control over their applications and information on the web. Whoever uses the Solid Platform gets a Solid identity and Solid pod". This is how Solid will enforce "personal empowerment through data", by helping users take back the power of the web from gigantic corporations.

Additionally, Tim told Brooker that he is currently working on a way to create a decentralized version of Alexa, Amazon's popular digital assistant. "He calls it Charlie. Unlike with Alexa, on Charlie people would own all their data. That means they could trust Charlie with, for example, health records, children's school events, or financial records. That is the kind of machine Berners-Lee hopes will spring up all over Solid to flip the power dynamics of the web from corporations to individuals", writes Brooker.

Given the recent Facebook security breach that compromised 50M user accounts, and past data misuse scandals such as the Cambridge Analytica scandal, Solid looks like a godsend that could transfer control over data back into the hands of users. However, there is no guarantee that Solid will be widely accepted, as a lot of companies on the web are extremely sensitive when it comes to their data and might not be interested in losing control over it. But it would definitely take us one step closer to a more free and open Internet.

Tim has taken a sabbatical from MIT and has reduced his day-to-day involvement with the World Wide Web Consortium (W3C) to work full time on Inrupt. "It is going to take a lot of effort to build the new Solid platform and drive broad adoption but I think we have enough energy to take the world to a new tipping point," says Tim. For more coverage on this news, check out the official Inrupt blog.
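To make the pod idea above a little more concrete, here is a purely conceptual toy in Python. It is not Solid's actual API or data model (Solid builds on existing web standards); it only illustrates the idea that data lives in a user-controlled store and the owner decides which people or apps may read each item.

```python
# Conceptual toy only; not the Solid API. It illustrates the article's idea:
# data lives in a user-controlled "pod" and the owner grants per-resource access.

from dataclasses import dataclass, field


@dataclass
class Pod:
    owner: str
    data: dict = field(default_factory=dict)   # resource name -> content
    acl: dict = field(default_factory=dict)    # resource name -> agents allowed to read

    def put(self, resource: str, content: str, allowed: set):
        """Store a resource and record who may read it (the owner always can)."""
        self.data[resource] = content
        self.acl[resource] = {self.owner} | allowed

    def read(self, resource: str, agent: str) -> str:
        """Return the resource only if this agent was granted access."""
        if agent not in self.acl.get(resource, set()):
            raise PermissionError(f"{agent} may not read {resource}")
        return self.data[resource]


pod = Pod(owner="alice")
pod.put("health/records", "blood pressure readings ...", allowed={"charlie-assistant"})
print(pod.read("health/records", "charlie-assistant"))  # allowed by the owner
# pod.read("health/records", "ad-network")              # would raise PermissionError
```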


Facebook’s largest security breach in its history leaves 50M user accounts compromised

Sugandha Lahoti
01 Oct 2018
4 min read
Facebook has been going through a massive decline of trust in recent times. To make matters worse, it witnessed another massive security breach last week. On Friday, Facebook announced that nearly 50M Facebook accounts had been compromised by an attack that gave hackers the ability to take over users' accounts. This security breach has not only affected users' Facebook accounts but has also impacted other accounts linked to Facebook. This means that a hacker could have accessed any account of yours that you log into using Facebook.

This security issue was first discovered by Facebook on Tuesday, September 25. The hackers apparently exploited a series of interactions between three bugs related to Facebook's "View As" feature, which lets people see what their own profile looks like to someone else. The hackers stole Facebook access tokens to take over people's accounts. These tokens allow an attacker to take full control of the victim's account, including logging into third-party applications that use Facebook Login.

"I'm glad we found this and fixed the vulnerability," Mark Zuckerberg said on a conference call with reporters on Friday morning. "But it definitely is an issue that this happened in the first place. I think this underscores the attacks that our community and our services face."

As of now, the vulnerability has been fixed and Facebook has contacted law enforcement authorities. The vice president of product management, Guy Rosen, said that Facebook was working with the FBI, but he did not comment on whether national security agencies were involved in the investigation. As a security measure, Facebook has automatically logged out 90 million Facebook users from their accounts. These include the 50 million that Facebook knows were affected and an additional 40 million that potentially could have been. The attack exploited the complex interaction of multiple issues in Facebook's code. It originated from a change made to Facebook's video uploading feature in July 2017, which impacted "View As."

Facebook says that affected users will get a message at the top of their News Feed about the issue when they log back into the social network. The message reads, "Your privacy and security are important to us. We want to let you know about recent action we've taken to secure your account," followed by a prompt to click and learn more details. Facebook has also publicly apologized, stating, "People's privacy and security is incredibly important, and we're sorry this happened. It's why we've taken immediate action to secure these accounts and let users know what happened."

This is not the end of the misery for Facebook. Some users have also tweeted that they were unable to post Facebook's security breach coverage from The Guardian and Associated Press. When trying to share the story to their news feed, they were met with an error message that prevented them from sharing it: "Our security systems have detected that a lot of people are posting the same content, which could mean that it's spam. Please try a different post." People have criticized Facebook's automated content flagging tools for this kind of false positive, which tags legitimate content as spam; the tools have also previously failed to detect harassment and hate speech. However, according to updates on Facebook's Twitter account, the bug has now been resolved.

https://twitter.com/facebook/status/1045796897506516992

The security breach comes at a time when the social media company is already facing multiple criticisms over issues such as foreign election interference, misinformation and hate speech, and data privacy. Recently, an indie Taiwanese hacker also gained popularity with his plan to take down Mark Zuckerberg's Facebook page and broadcast it live. However, he soon grew cold feet and said he would refrain from doing so after receiving global attention following his announcement. "I am canceling my live feed, I have reported the bug to Facebook and I will show proof when I get a bounty from Facebook," he told Bloomberg News.

It's high time that Facebook began taking its users' privacy seriously, perhaps even going as far as rethinking its algorithms and platform entirely. It should also take responsibility for the real-world consequences of actions enabled by Facebook.

How far will Facebook go to fix what it broke: Democracy, Trust, Reality.
WhatsApp co-founder reveals why he left Facebook; is called 'low class' by a Facebook senior executive.
Ex-employee on contract sues Facebook for not protecting content moderators from mental trauma


Did you know Facebook shares the data you share with them for ‘security’ reasons with advertisers?

Natasha Mathur
28 Sep 2018
5 min read
Facebook is constantly under the spotlight these days when it comes to controversies regarding users' data and privacy. A new research paper published by Princeton University researchers states that Facebook shares the contact information you handed over for security purposes with its advertisers. The study was first brought to light by Gizmodo writer Kashmir Hill.

"Facebook is not content to use the contact information you willingly put into your Facebook profile for advertising. It is also using contact information you handed over for security purposes and contact information you didn't hand over at all, but that was collected from other people's contact books, a hidden layer of details Facebook has about you that I've come to call "shadow contact information"", writes Hill.

Recently, Facebook introduced a new feature called custom audiences. Unlike traditional audiences, custom audiences allow the advertiser to target specific users. To do so, the advertiser uploads users' PII (personally identifiable information) to Facebook. After the upload is done, Facebook matches the given PII against platform users, builds an audience comprising the matched users, and allows the advertiser to track that specific audience. Essentially, with Facebook the holy grail of marketing, targeting an audience of one, is practically possible; never mind whether that audience wanted it or not.

In today's world, different social media platforms frequently collect various kinds of personally identifiable information (PII), including phone numbers, email addresses, names, and dates of birth. The majority of this PII represents extremely accurate, unique, and verified user data. Because of this, these services have an incentive to exploit and use this personal information for other purposes, one of which is providing advertisers with more accurate audience targeting.

The paper, titled 'Investigating sources of PII used in Facebook's targeted advertising', is written by Giridhari Venkatadri, Elena Lucherini, Piotr Sapiezynski, and Alan Mislove. "In this paper, we focus on Facebook and investigate the sources of PII used for its PII-based targeted advertising feature. We develop a novel technique that uses Facebook's advertiser interface to check whether a given piece of PII can be used to target some Facebook user and use this technique to study how Facebook's advertising service obtains users' PII," reads the paper.

The researchers developed a novel methodology for studying how Facebook obtains the PII used to provide custom audiences to advertisers. "We test whether PII that Facebook obtains through a variety of methods (e.g., directly from the user, from two-factor authentication services, etc.) is used for targeted advertising, whether any such use is clearly disclosed to users, and whether controls are provided to users to help them limit such use," reads the paper. The paper uses size estimates to study which sources of PII are used for PII-based targeted advertising. The researchers used this methodology to investigate which sources of PII were actually used by Facebook for its PII-based targeted advertising platform. They also examined what information gets disclosed to users and what control users have over their PII.

What sources of PII are actually being used by Facebook?

The researchers found that Facebook allows its users to add contact information (email addresses and phone numbers) to their profiles. While any arbitrary email address or phone number can be added, it is not displayed to other users unless verified (through a confirmation email or confirmation SMS message, respectively). This is the most direct and explicit way of providing PII to advertisers. The researchers then examined whether PII provided by users for security purposes, such as two-factor authentication (2FA) or login alerts, is being used for targeted advertising. They added and verified a phone number for 2FA on one of the authors' accounts. The added phone number became targetable after 22 days, proving that a phone number provided for 2FA was indeed used for PII-based advertising, despite the privacy controls being set to limit such use.

What control do users have over PII?

Facebook lets users choose who can see each piece of PII listed on their profiles; the current general settings are Public, Friends, and Only Me. Users can also restrict the set of users who can search for them using their email address or phone number, with the options Everyone, Friends of Friends, and Friends. Facebook provides users with a list of advertisers who have included them in a custom audience using their contact information, and users can opt out of receiving ads from individual advertisers listed there. But information about which PII is used by advertisers is not disclosed.

What information about how Facebook uses PII gets disclosed to users?

When users add mobile phone numbers directly to their Facebook profile, no information about the uses of that number is directly disclosed to them; such information is only shown when adding a number from the Facebook website. As per the research results, there is very little disclosure to users, often in the form of generic statements that do not refer to the uses of the particular PII being collected or to the fact that it may be used to allow advertisers to target users. "Our paper highlights the need to further study the sources of PII used for advertising, and shows that more disclosure and transparency needs to be provided to the user," say the researchers.

For more information, check out the official research paper.

Ex-employee on contract sues Facebook for not protecting content moderators from mental trauma
How far will Facebook go to fix what it broke: Democracy, Trust, Reality
Mark Zuckerberg publishes Facebook manifesto for safeguarding against political interference
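To make the paper's size-estimate methodology a little more concrete, here is a drastically simplified toy sketch in Python. The audience_size_estimate function is a hypothetical stand-in for whatever potential-reach number the advertiser interface reports for an uploaded audience; it is not a real Facebook API call, and the paper's actual technique involves more care around rounding and minimum-audience thresholds.

```python
# Toy rendering of the size-estimate idea described above. `audience_size_estimate`
# is a hypothetical placeholder, NOT a real advertiser-API call.

def audience_size_estimate(pii_records: list) -> int:
    """Hypothetical stand-in: return the platform's reported potential-reach
    estimate for a custom audience built from these PII records."""
    raise NotImplementedError("placeholder for the advertiser interface")


def is_targetable(candidate_pii: str, padding_records: list) -> bool:
    """If adding `candidate_pii` to a baseline audience raises the reported
    size estimate, the platform matched that PII to some user account,
    i.e. that piece of PII is usable for ad targeting."""
    baseline = audience_size_estimate(padding_records)
    with_candidate = audience_size_estimate(padding_records + [candidate_pii])
    return with_candidate > baseline
```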

Time for Facebook, Twitter and other social media to take responsibility or face regulation

Sugandha Lahoti
01 Aug 2018
9 min read
Of late, the world has been shaken by a rising number of data-related scandals and attacks that have overshadowed social media platforms. That shakedown was felt on Wall Street last week when tech stocks came crashing down after Facebook's Q2 earnings call on 25th July, and then further down after Twitter's earnings call on 27th July. Social media regulation is now at the heart of discussions across the tech sector.

The social butterfly effect is real

2018 began with the Cambridge Analytica scandal, where the data analytics company was alleged to have not only been influencing the outcome of the UK and US Presidential elections but also harvesting copious amounts of data from Facebook (illegally). Then Facebook fell down the rabbit hole with Mueller's indictment report, which highlighted the role social media played in election interference in 2016. 'Fake news' on WhatsApp triggered mob violence in India, while Twitter has been plagued with fake accounts and tweets that never seem to go away.

Fake news and friends crash the tech stock party

Last week, social media stocks fell by double digits (Facebook by 20% and Twitter by 21%), bringing down the entire tech sector; a fall that continues to keep tech stocks in a bearish market and haunt tech shareholders even today. Wall Street has been a nervous wreck this week, hoping for the bad news to stop spiraling downward and for good news from Apple to undo last week's nightmare. Amid these reports, lawmakers, regulators, and organizations alike are facing greater pressure to regulate social media platforms.

How are lawmakers proposing to regulate social media?

Even though lawmakers have started paying increased attention to social networks over the past year, there has been little progress in how much they actually understand them. This could soon change, as Axios' David McCabe published a policy paper from the office of Senator Mark Warner. The paper describes a comprehensive regulatory policy covering almost every aspect of social networks and addresses three broad categories: combating misinformation, privacy and data protection, and promoting competition in the tech space.

Misinformation, disinformation, and the exploitation of technology covers ideas such as:
- Networks are to label automated bots.
- Platforms are to verify identities.
- Platforms are to make regular disclosures about how many fake accounts they've deleted.
- Platforms are to create APIs for academic research.

Privacy and data protection includes policies such as:
- Create a US version of the GDPR.
- Designate platforms as information fiduciaries with the legal responsibility of protecting users' data.
- Empower the Federal Trade Commission to make rules around data privacy.
- Create a legislative ban on dark patterns that trick users into accepting terms and conditions without reading them.
- Allow the government to audit corporate algorithms.

Promoting competition in the tech space requires:
- Tech companies to continuously disclose to consumers how their data is being used.
- Social network data to be made portable.
- Social networks to be interoperable.
- Designating certain products as essential facilities and demanding that third parties get fair access to them.

Although these proposals, and others like them (a British parliamentary committee recommended imposing much stricter guidelines on social networks), remain far from becoming law, they are an assurance that lawmakers and legal firms are serious about taking steps to ensure that social media platforms don't get out of hand. Regulation by lawmakers and legal authorities is only effective, however, if the platforms themselves care about the issues and are motivated to behave in the right way. Losing a significant chunk of their user base in the EU lately seems to have provided that very incentive. Social network platforms themselves have now started seeking ways to protect user data and improve their platforms in general, to alleviate some of the problems they helped create or amplify.

How is Facebook planning to course correct its social media Frankenstein?

Last week, Mark Zuckerberg started the fated earnings call by saying, "I want to start by talking about all the investments we've made over the last six months to improve safety, security, and privacy across our services. This has been a lot of hard work, and it's starting to pay off." He then went on to elaborate key areas of focus for Facebook in the coming months, the next 1.5 years to be more specific:

- Ad transparency tools: All ads can be viewed by anyone, even if they are not targeted at them. Facebook is also developing an archive of ads with political or issue content, which will be labeled to show who paid for them, what the budget was, and how many people viewed the ads, and will also allow one to search ads by an advertiser for the past 7 years.
- Disallow and report known election interference attempts: Facebook will proactively look for and eliminate fake accounts, pages, and groups that violate its policies. This could minimize election interference, says Zuckerberg.
- Fight against misinformation: Remove the financial incentives for spammers to create fake news, and stop pages that repeatedly spread false information from buying ads.
- Shift from reactive to proactive detection with AI: Use AI to prevent fake accounts that generate a lot of the problematic content from ever being created in the first place. Facebook can now remove more bad content quickly because it doesn't have to wait until after it's reported. In Q1, for example, almost 90% of the graphic violence content that Facebook removed or added a warning label to was identified using AI.
- Invest heavily in security and privacy. No further elaboration on this aspect was given on the call.

This week, Facebook reported that it had detected and removed 32 pages and fake accounts that had engaged in coordinated inauthentic behavior. These accounts and pages were part of a political influence campaign that was potentially built to disrupt the midterm elections. According to Facebook's Head of Cybersecurity, Nathaniel Gleicher, "So far, the activity encompasses eight Facebook Pages, 17 profiles and seven accounts on Instagram." Facebook's action is a change from last year, when it was widely criticized for failing to detect Russian interference in the 2016 presidential election. Although the current campaign hasn't been linked to Russia (yet), Facebook officials pointed out that some of the tools and techniques used by the accounts were similar to those used by the Russian government-linked Internet Research Agency.

How Twitter plans to make its platform a better place for real and civilized conversation

"We want people to feel safe freely expressing themselves and have launched new tools to address problem behaviors that distort and distract from the public conversation. We're also continuing to make it easier for people to find and follow breaking news and events…" said Jack Dorsey, Twitter's CEO, at the Q2 2018 earnings call. The letter to Twitter shareholders elaborates further: "We continue to invest in improving the health of the public conversation on Twitter, making the service better by integrating new behavioral signals to remove spammy and suspicious accounts and continuing to prioritize the long-term health of the platform over near-term metrics. We also acquired Smyte, a company that specializes in spam prevention, safety, and security."

Unlike Facebook's anecdotal support for the claims made, Twitter provided quantitative evidence to show the seriousness of its endeavor. Here are some key metrics from this quarter's shareholders' letter:
- Results from early experiments with new tools to address behaviors that distort and distract from the public conversation show a 4% drop in abuse reports from search and 8% fewer abuse reports from conversations.
- More than 9 million potentially spammy or automated accounts identified and challenged per week.
- 8k fewer average spam reports per day.
- More than 2x the number of accounts removed for violating Twitter's spam policies compared to last year.

It is clear that Twitter has been quite active in looking for ways to eliminate toxicity from its network. CEO Jack Dorsey, in a series of tweets, stated that the company did not always meet users' expectations: "We aren't proud of how people have taken advantage of our service, or our inability to address it fast enough," adding that the company needs a "systemic framework." Back in March 2018, Twitter invited external experts to measure the health of conversation on the platform, in order to encourage more healthy conversation, debate, and critical thinking. Twitter asked them to create proposals taking inspiration from the concept of measuring conversation health defined by the non-profit Cortico. As of yesterday, it has its dream team of researchers finalized and ready to take up the challenge of identifying echo chambers on Twitter for unhealthy behavior and then translating their findings into practical algorithms down the line.

With social media here to stay, both lawmakers and social media platforms are looking for new ways to regulate. Any misstep by these social media sites will have solid repercussions, which include not only closer scrutiny by the government and private watchdogs but also losing stock value, a bad reputation, and being linked to other forms of data misuse and accusations of political bias. Lastly, let's not forget the responsibility that lies with the 'social' side of these platforms. Individuals need to play their part in proactively reporting fake news and stories, and they also need to be more selective about the content they share on social media.

Why Wall Street unfriended Facebook: Stocks fell $120 billion in market value after Q2 2018 earnings call
Facebook must stop discriminatory advertising in the US, declares Washington AG, Ferguson
Facebook is investigating data analytics firm Crimson Hexagon over misuse of data


Why Wall Street unfriended Facebook: Stocks fell $120 billion in market value after Q2 2018 earnings call

Natasha Mathur
27 Jul 2018
6 min read
After being found guilty of providing discriminatory advertisements on its platform earlier this week, Facebook hit yet another wall as its stock closed down 18.96% on Thursday, with shares selling at $176.26. This means the company lost around $120 billion in market value overnight, the largest one-day loss of value ever for a US-traded company since Intel Corp's two-decade-old crash, in which Intel lost a little over $18 billion in a single day 18 years ago. Despite 41.24% revenue growth compared to last year, this was Facebook's biggest stock market drop ever. [Figure: stock chart from NASDAQ showing the figures]

Facebook's market capitalization was worth $629.6 billion on Wednesday. By the time Facebook's earnings call concluded after the end of market trading on Thursday, its worth had dropped to $510 billion. As Facebook's shares continued to fall during Thursday's trading, its CEO, Mark Zuckerberg, was left with less than $70 billion, wiping out nearly $17 billion of his personal stake, according to Bloomberg. He was also demoted from the third to the sixth position on Bloomberg's Billionaires Index.

Active user growth starting to stagnate in mature markets

According to David Wehner, CFO at Facebook, "the Daily active users count on Facebook reached 1.47 billion, up 11% compared to last year, led by growth in India, Indonesia, and the Philippines. This number represents approximately 66% of the 2.23 billion monthly active users in Q2". [Figure: Facebook's daily active users] He also mentioned that "MAUs (monthly active users) were up 228M or 11% compared to last year. It is worth noting that MAU and DAU in Europe were both down slightly quarter-over-quarter due to the GDPR rollout, consistent with the outlook we gave on the Q1 call". [Figure: Facebook's monthly active users] In fact, Facebook has implemented several privacy policy changes in the last few months due to the European Union's General Data Protection Regulation (GDPR), and the company's earnings report revealed the effects of the GDPR rules.

Revenue growth rate is falling too

Speaking of revenue expectations, Wehner gave investors a heads-up that revenue growth rates will decline in the third and fourth quarters. Wehner stated that the company's "total revenue growth rate decelerated approximately 7 percentage points in Q2 compared to Q1. Our total revenue growth rates will continue to decelerate in the second half of 2018, and we expect our revenue growth rates to decline by high single-digit percentages from prior quarters sequentially in both Q3 and Q4." Facebook reiterated that these numbers won't get better anytime soon. [Figure: Facebook's Q2 2018 revenue]

Wehner further explained the reasons for the expected decline in revenue: "There are several factors contributing to that deceleration... we expect the currency to be a slight headwind in the second half... we plan to grow and promote certain engaging experiences like Stories that currently have lower levels of monetization. We are also giving people who use our services more choices around data privacy which may have an impact on our revenue growth".

Let's look at other performance indicators

Other financial highlights of Q2 2018 are as follows:
- Mobile advertising revenue represented 91% of advertising revenue for Q2 2018, up from approximately 87% of advertising revenue in Q2 2017.
- Capital expenditures for Q2 2018 were $3.46 billion, up from $1.4 billion in Q2 2017.
- Headcount was 30,275 as of June 30, an increase of 47% year-over-year.
- Cash, cash equivalents, and marketable securities were $42.3 billion at the end of Q2 2018, an increase from $35.45 billion at the end of Q2 2017.

Wehner also mentioned that the company "continue[s] to expect that full-year 2018 total expenses will grow in the range of 50-60% compared to last year. In addition to increases in core product development and infrastructure -- growth is driven by increasing investments -- safety & security, AR/VR, marketing, and content acquisition". Another reason for the overall loss is that Facebook has been dealing with criticism for quite some time now over its content policies, its issues with users' private data, and its changing rules for advertisers. In fact, it is currently investigating data analytics firm Crimson Hexagon over misuse of data. Mark Zuckerberg also said on a conference call with financial analysts that Facebook has been investing heavily in "safety, security, and privacy" and that "we're investing so much in security that it will start to impact our profitability, we're starting to see that this quarter... we run this company for the long term, not for the next quarter".

Here's what the public feels about the recent wipe-out:
https://twitter.com/TeaPainUSA/status/1022586648155054081
https://twitter.com/alistairmilne/status/1022550933014753280

So, why did Facebook's stock crash?

As we can see, Facebook's Q2 2018 performance itself has been better than its performance in the same quarter last year, as far as revenue goes. Ironically, scandals and lawsuits have had little impact on Facebook's growth. For example, Facebook recovered fully from the Cambridge Analytica scandal within two months as far as share prices are concerned. The Mueller indictment report released earlier this month managed to arrest growth for merely a couple of days before the company bounced back. The discriminatory advertising verdict against Facebook had no impact on its bullish growth earlier this week. This brings us to conclude that public sentiment and market reactions against Facebook have very different underlying reasons. The market's strong reaction is mainly due to concerns over the slowdown in active user growth, the lack of monetization opportunities on the more popular Instagram platform, and Facebook's perceived inability to adapt successfully to new political and regulatory policies such as the GDPR.

Wall Street has been indifferent to Facebook's long list of scandals, in some ways enabling the company's 'move fast and break things' approach. In his earnings call on Thursday, Zuckerberg hinted that Facebook may not be keen on 'growth at all costs' by saying things like "we're investing so much in security that it will significantly impact our profitability", with Wehner adding, "Looking beyond 2018, we anticipate that total expense growth will exceed revenue growth in 2019." And that has got Wall Street unfriending Facebook with just the click of a button!

Is Facebook planning to spy on you through your mobile's microphones?
Facebook to launch AR ads on its news feed to let you try on products virtually
Decoding the reasons behind Alphabet's record high earnings in Q2 2018


Furthering the Net Neutrality debate, GOP proposes the 21st Century Internet Act

Sugandha Lahoti
18 Jul 2018
3 min read
GOP Rep. Mike Coffman has proposed a new bill to solidify the principles of net neutrality into law, rather than leaving them as a set of rules to be modified by the FCC every year. The bill, known as the 21st Century Internet Act, would ban providers from blocking, throttling, or offering paid fast lanes. It would also forbid them from engaging in paid prioritization and from charging access fees to edge providers. It could take some time for this bill to be voted on in Congress; much depends on the makeup of Congress after the midterm elections.

The 21st Century Internet Act modifies the Communications Act of 1934 and adds a new Title VIII section full of conditions specific to internet providers. This new title permanently codifies into law the 'four corners' of net neutrality. The bill proposes these measures:

- No blocking: A broadband internet access service provider cannot block lawful content, or charge an edge provider a fee to avoid blocking of content.
- No throttling: The service provider cannot degrade or enhance (slow down or speed up) internet traffic.
- No paid prioritization: The internet access provider may not engage in paid preferential treatment.
- No unreasonable interference: The service provider cannot interfere with the ability of end users to select the internet access service of their choice.

The bill aims to settle the long-running debate over whether internet access is an information service or a telecommunications service. In his letter to FCC Chairman Ajit Pai, Coffman writes, "The Internet has been and remains a transformative tool, and I am concerned any action you may take to alter the rules under which it functions may well have significant unanticipated negative consequences." As far as the FCC's role is concerned, the 21st Century Internet Act would solidify the rules of net neutrality, barring the FCC from modifying them. The commission would solely be responsible for overseeing the bill's implementation and enforcing the law, which would include investigating unfair acts or practices such as false advertising and misrepresenting a product.

The Senate has already voted to save net neutrality by passing a Congressional Review Act (CRA) measure back in May 2018. The CRA measure received a 52-47 vote to overturn the FCC decision that took the net neutrality rules off the books. The 21st Century Internet Act is being seen in a good light by the Internet Association, which represents Google, Facebook, Netflix, and others, and which commended Coffman on his bill, calling it a "step in the right direction." For the rest of us, it will be quite interesting to see the bill's progress and its fate as it goes through the voting process and then heads to the White House for final approval.

5 reasons why the government should regulate technology
DCLeaks and Guccifer 2.0: How hackers used social engineering to manipulate the 2016 U.S. elections
Tech, unregulated: Washington Post warns Robocalls could get worse


DCLeaks and Guccifer 2.0: How hackers used social engineering to manipulate the 2016 U.S. elections

Savia Lobo
16 Jul 2018
5 min read
It's been more than a year since the Republican Party's Donald Trump won the U.S. Presidential election against Democrat Hillary Clinton. However, Robert Mueller recently indicted 12 Russian military officers who meddled with the 2016 U.S. Presidential election. These officers had hacked into the Democratic National Committee, the Democratic Congressional Campaign Committee, and the Clinton campaign. Mueller's indictment states that the Russians did it using Guccifer 2.0 and DCLeaks, following which Twitter decided to suspend both accounts on its platform.

According to the San Diego Union-Tribune, Twitter found the handles of both Guccifer 2.0 and DCLeaks dormant for more than a year and a half. It also verified that the accounts had disseminated emails stolen from both Clinton's camp and the Democratic Party's organization, and it subsequently suspended both accounts.

Guccifer 2.0 is an online persona created by the conspirators and falsely claimed to be a Romanian hacker in order to obscure evidence of Russian involvement in the 2016 U.S. presidential election. The persona is also associated with the leaks of documents from the Democratic National Committee (DNC) through WikiLeaks. DCLeaks published the stolen data on its website, dcleaks.com. The DCLeaks site is known to be a front for the Russian cyberespionage group Fancy Bear. Mueller, in his indictment, stated that both Guccifer 2.0 and DCLeaks have ties with GRU (Russian military intelligence, the Main Intelligence Directorate) hackers.

How hackers social engineered their way into the Clinton campaign

The attackers are said to have used the technique known as spear phishing, and the malware used in the process is X-Agent: a tool that can collect documents, keystrokes, and other information from computers and smartphones through encrypted channels back to servers owned by the hackers. As per Mueller's indictment report, the conspirators created an email account in the name of a known member of the Clinton campaign. The email ID had a one-letter deviation and looked almost identical to the original. Spear-phishing emails were sent from this fake account to the work accounts of more than thirty different Clinton campaign employees. The embedded link within these spear-phishing emails appeared to direct the recipient to a document titled 'hillary-clinton-favorable-rating.xlsx', but in reality the recipients were being directed to a GRU-created website.

X-Agent uses an encrypted tunneling tool known as X-Tunnel, which connects to known GRU-associated servers. Using the X-Agent malware, the GRU agents had targeted more than 300 individuals within the Clinton campaign, the Democratic National Committee, and the Democratic Congressional Campaign Committee by March 2016. At the same time, the hackers stole nearly 50,000 emails from the Clinton campaign, and by June 2016 they had gained control over 33 DNC computers and infected them with the malware. The indictment further explains that although the attackers tried to hide their tracks by erasing activity logs, the Linux-based version of X-Agent, programmed with the GRU-registered domain "linuxkrnl.net", was discovered on these networks.

Was the Trump campaign impervious to such an attack?

Roger Stone, a former Trump campaign adviser, was said to be in contact with Guccifer 2.0 during the presidential campaign. As per Mueller's indictment, Guccifer 2.0 sent Stone this message: "Please tell me if i can help u anyhow...it would be a great pleasure for me...What do u think about the info on the turnout model for the democrats entire presidential campaign." "Pretty standard" was the reply given to Guccifer 2.0. However, Stone said that his conversations with the person behind the account were "innocuous." Stone further added, "This exchange is entirely public and provides no evidence of collaboration or collusion with Guccifer 2.0 or anyone else in the alleged hacking of the DNC emails." Stone also stated that he never discussed such innocuous communication with Trump or his presidential campaign.

U.S. Deputy Attorney General Rod Rosenstein has indicated that this Russian attack was part of the Kremlin's multifaceted approach to boost Trump's 2016 campaign while undermining Clinton's campaign. Twitter stated that both accounts were suspended for being connected to a network of accounts previously suspended for operating in violation of its rules. However, Hillary Clinton's supporters expressed their anger, stating that Twitter's response was too slow and too little.

With the midterm elections only months away, one question on everyone's mind is: how are the intelligence community and the Department of Justice going to ensure the elections are fair and secure? There is a high chance of such attacks recurring. When asked a similar question at a cybersecurity conference, Homeland Security Secretary Kirstjen Nielsen responded, "Today I can say with confidence that we know whom to contact in every state to share threat information. That ability did not exist in 2016."

Following is Rod Rosenstein's interview with PBS NewsHour.

Social engineering attacks – things to watch out while online!
YouTube has a $25 million plan to counter fake news and misinformation
Twitter allegedly deleted 70 million fake accounts in an attempt to curb fake news
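As a defensive aside (not part of the indictment), the lookalike-address trick described above, a sender address one character away from a real colleague's, can be flagged mechanically. Here is a minimal Python sketch; the addresses are hypothetical examples, not ones from the case.

```python
# Minimal sketch of flagging lookalike sender addresses (one-character deviation),
# the spoofing trick described above. All addresses here are hypothetical.

def within_one_edit(a: str, b: str) -> bool:
    """True if a and b differ by at most one insertion, deletion, or substitution."""
    if a == b:
        return True
    if abs(len(a) - len(b)) > 1:
        return False
    if len(a) > len(b):
        a, b = b, a                 # ensure a is the shorter string
    i = j = 0
    edited = False
    while i < len(a) and j < len(b):
        if a[i] != b[j]:
            if edited:
                return False        # second mismatch: more than one edit
            edited = True
            if len(a) == len(b):
                i += 1              # substitution
            j += 1                  # insertion/deletion
        else:
            i += 1
            j += 1
    return True


KNOWN_SENDERS = {"jane.doe@example-campaign.org"}   # hypothetical trusted address book

def looks_spoofed(sender: str) -> bool:
    """Flag addresses suspiciously close to, but not equal to, a known sender."""
    return any(sender != known and within_one_edit(sender, known) for known in KNOWN_SENDERS)

print(looks_spoofed("jane.doe@examp1e-campaign.org"))   # True: one-letter deviation
```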

Timehop suffers data breach; 21 million users' data compromised

Richard Gall
09 Jul 2018
3 min read
Timehop, the social media application that brings old posts back into your feed, experienced a data breach on July 4. In a post published yesterday (July 8), the team explained that "an access credential to our cloud computing enterprise was compromised." Timehop believes 21 million users have been affected by the breach. However, it was keen to state that "we have no evidence that any accounts were accessed without authorization."

Timehop has already acted to make the necessary changes. Certain application features have been temporarily disabled, and users have been logged out of the app. Users will also have to re-authenticate Timehop with their social media accounts, as the team has deactivated the keys that allow the app to read and show users' social media posts in their feeds. Timehop explained that the gap between the incident and the public statement was due to the need to "contact with a large number of partners"; the investigation needed to be thorough in order for the response to be clear and coordinated.

How did the Timehop data breach happen?

For transparency, Timehop published a detailed technical report on how it believes the hack happened. An unauthorized user first accessed Timehop's cloud computing environment using an authorized user's credentials, created a new administrative account, and then conducted 'reconnaissance activities'. This user logged in to the account on numerous occasions in March and June 2018. It was only on July 4 that the attacker attempted to access the production database. Timehop states that the attacker then "conducted a specific action that triggered an alarm," which allowed engineers to act quickly to stop the attack from continuing. Once this was done, there was a detailed and thorough investigation, which included analyzing the attacker's activity on the network and auditing all security permissions and processes.

A measured response to a potential crisis

It's worth noting just how methodical Timehop's response has been. Yes, there will be question marks over the delay, but it does make a lot of sense. Timehop revealed that the news was provided to some journalists "under embargo in order to determine the most effective ways to communicate what had happened while neither causing panic nor resorting to bland euphemism." The incident demonstrates that effective cybersecurity is as much about a robust communication strategy as it is about secure software.

Read next:
Did Facebook just have another security scare?
What security and systems specialists are planning to learn in 2018

How to secure ElastiCache in AWS

Savia Lobo
11 May 2018
5 min read
AWS offers services to handle the cache management process. Previously, we would run Memcached or Redis installed on VMs, which made availability, patching, scalability, and security complex and difficult to manage.

This article is an excerpt taken from the book 'Cloud Security Automation'. In this book, you'll learn the basics of why cloud security is important and how automation can be the most effective way of controlling cloud security.

On AWS, this service is available as ElastiCache. It gives you the option to use either engine (Redis or Memcached) to manage your cache on a scalable platform that is managed by AWS in the backend. ElastiCache provides a scalable and high-performance caching solution and removes the complexity associated with creating and managing distributed cache clusters using Memcached or Redis. Now, let's look at how to secure ElastiCache.

Secure ElastiCache in AWS

For enhanced security, we deploy ElastiCache clusters inside a VPC. When they are deployed inside a VPC, we can use a security group and NACL to add network-level security on the communication ports. Apart from this, there are multiple ways to enable security for ElastiCache.

VPC-level security

When we deploy AWS ElastiCache in a VPC, it gets associated with a subnet, a security group, and the routing policy of that VPC. Here, we define a rule to communicate with the ElastiCache cluster on a specific port. ElastiCache clusters can also be accessed from on-premise applications using VPN and Direct Connect.

Authentication and access control

We use IAM to implement authentication and access control on ElastiCache. For authentication, you can have the following identity types:

- Root user: a superuser that is created while setting up an AWS account. It has super administrator privileges for all AWS services. However, it's not recommended to use the root user to access any of the services.
- IAM user: a user identity in your AWS account that has a specific set of permissions for accessing the ElastiCache service.
- IAM role: a role with a specific set of permissions that we associate with the services that need to access ElastiCache. It essentially generates temporary access keys to use ElastiCache. Apart from this, we can also use federated access, where an IAM role provides temporary credentials for accessing the service.

To access ElastiCache, users or services must have a specific set of permissions, such as creating, modifying, and rebooting the cluster. For this, we define an IAM policy and associate it with users or roles.
Let's see an example of an IAM policy where users will have permission to perform system administration activities on an ElastiCache cluster:

    {
      "Version": "2012-10-17",
      "Statement": [{
        "Sid": "ECAllowSpecific",
        "Effect": "Allow",
        "Action": [
          "elasticache:ModifyCacheCluster",
          "elasticache:RebootCacheCluster",
          "elasticache:DescribeCacheClusters",
          "elasticache:DescribeEvents",
          "elasticache:ModifyCacheParameterGroup",
          "elasticache:DescribeCacheParameterGroups",
          "elasticache:DescribeCacheParameters",
          "elasticache:ResetCacheParameterGroup",
          "elasticache:DescribeEngineDefaultParameters"
        ],
        "Resource": "*"
      }]
    }

Authenticating with Redis authentication

AWS ElastiCache also adds an additional layer of security with the Redis authentication command, which asks users to enter a password before they are granted permission to execute Redis commands on a password-protected Redis server. When we use Redis authentication with ElastiCache, there are a few constraints for the authentication token:

- Passwords must have at least 16 and at most 128 characters
- Characters such as @, ", and / cannot be used in passwords
- Authentication can only be enabled when creating clusters with the in-transit encryption option enabled
- The password defined during cluster creation cannot be changed

To make the policy harder to break, the following rules define the strength of a password:

- A password must include at least three of the following character types: uppercase characters, lowercase characters, digits, and non-alphanumeric characters (!, &, #, $, ^, <, >, -)
- A password must not contain any commonly used word
- A password must be unique; it should not be similar to previous passwords

Data encryption

AWS ElastiCache and EC2 instances have mechanisms to protect against unauthorized access to your data on the server. ElastiCache for Redis also provides encryption for the data held in Redis clusters, with both data-in-transit and data-at-rest encryption methods.

Data-in-transit encryption

ElastiCache ensures the encryption of data while it is in transit from one location to another. ElastiCache in-transit encryption implements the following features:

- Encrypted connections: SSL-based encryption is enabled for server and client communication
- Encrypted replication: any data moving between the primary node and the replication nodes is encrypted
- Server authentication: the client can check the authenticity of a connection, that is, whether it is connected to the right server
- Client authentication: the server can check the authenticity of the client using the Redis authentication feature

Data-at-rest encryption

ElastiCache for Redis at-rest encryption is an optional feature that increases data security by encrypting data stored on disk during sync and backup or snapshot operations. However, there are a few constraints for data-at-rest encryption:

- It is supported only on replication groups running Redis version 3.2.6
- It is not supported on clusters running Memcached
- It is supported only for replication groups running inside a VPC
- It is supported for replication groups running on any node type
- It can only be defined during the creation of the replication group; once enabled, it cannot be disabled
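Pulling these controls together, here is a minimal boto3 sketch (the identifiers, subnet group, security group, and token are placeholders, and the call should be verified against the current ElastiCache API before use) showing roughly how a Redis replication group can be created inside a VPC with in-transit encryption, at-rest encryption, and a Redis AUTH token:

```python
import boto3

elasticache = boto3.client("elasticache", region_name="us-east-1")

# Placeholder names: replace the subnet group, security group, and token
# with values from your own VPC before running this.
response = elasticache.create_replication_group(
    ReplicationGroupId="secure-redis-demo",
    ReplicationGroupDescription="Redis with AUTH and encryption enabled",
    Engine="redis",
    CacheNodeType="cache.t3.micro",
    NumCacheClusters=2,
    CacheSubnetGroupName="my-private-subnet-group",  # subnets inside the VPC
    SecurityGroupIds=["sg-0123456789abcdef0"],       # limits access to the Redis port
    TransitEncryptionEnabled=True,                   # required before AUTH can be used
    AtRestEncryptionEnabled=True,
    AuthToken="Str0ng-Auth-Token-Example-2018!",     # 16-128 chars, no @, ", or /
)
print(response["ReplicationGroup"]["Status"])
```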
To summarize, we learned how to secure ElastiCache and ensure security for PaaS services, such as database and analytics services. If you've enjoyed reading this article, do check out 'Cloud Security Automation' for hands-on experience of automating your cloud security and governance.

How to start using AWS
AWS Sydney Summit 2018 is all about IoT
AWS Fargate makes Container infrastructure management a piece of cake

How to secure a private cloud using IAM

Savia Lobo
10 May 2018
11 min read
In this article, we look at securing the private cloud using IAM. For IAM, OpenStack uses the Keystone project. Keystone provides the identity, token, catalog, and policy services that are used by the other OpenStack services. It is organized as a group of internal services exposed on one or many endpoints. For example, an authentication call validates the user and project credentials with the identity service.

This article is an excerpt from the book 'Cloud Security Automation'. In this book, you'll learn how to work with OpenStack security modules and how private cloud security functions can be automated for better time and cost-effectiveness.

Authentication

Authentication is an integral part of an OpenStack deployment, so we must be careful about the system design. Authentication is the process of confirming a user's identity, that is, confirming that a user is actually who they claim to be — for example, by providing a username and a password when logging into a system. Keystone supports authentication using a username and password, LDAP, and external authentication methods. After successful authentication, the identity service provides the user with an authorization token, which is used for subsequent service requests.

Transport Layer Security (TLS) provides authentication between services and users using X.509 certificates. The default mode for TLS is server-side-only authentication, but we can also use certificates for client authentication.

There is also the case where an attacker tries to access the console by guessing a username and password. If we have no policy in place to handle this, it can be disastrous. For this, we can use a failed login policy, which allows a maximum number of failed login attempts; after that, the account is blocked for a certain number of hours and the user is notified. However, the identity service provided in Keystone does not offer a method to limit access to accounts after repeated unsuccessful login attempts. For this, we need to rely on an external authentication system that locks out an account after a configured number of failed login attempts. The account might then only be unlocked with further side-channel intervention, on request, or after a certain duration.

Detection techniques are most useful when they are paired with prevention methods that limit the damage. In the detection process, we frequently review the access control logs to identify unauthorized attempts to access accounts. If, during this review, we find hints of a brute-force attack (where a user tries to guess the username and password to log in to the system), we can enforce a strong username and password or block the source IP of the attack through firewall rules. Defining firewall rules on the Keystone node restricts connections, which helps to reduce the attack surface. Reviewing access control logs also helps to examine account activity for unusual logins and suspicious actions, so that we can take corrective actions such as disabling the account.

To increase the level of security, we can also utilize MFA for network access to privileged user accounts. Keystone supports external authentication services through the Apache web server that can provide this functionality.
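To make the username-and-password flow described above concrete, the following sketch assumes the keystoneauth1 and python-keystoneclient libraries and uses placeholder endpoint, credential, and domain values; it authenticates against Keystone v3 and obtains the token that subsequent service requests reuse:

```python
from keystoneauth1.identity import v3
from keystoneauth1 import session
from keystoneclient.v3 import client

# Placeholder values: point these at your own Keystone endpoint and project.
auth = v3.Password(
    auth_url="https://keystone.example.com:5000/v3",
    username="demo-user",
    password="demo-password",
    project_name="demo-project",
    user_domain_id="default",
    project_domain_id="default",
)

# The session verifies the TLS certificate and caches/renews the token.
sess = session.Session(auth=auth, verify="/etc/ssl/certs/ca-bundle.crt")
keystone = client.Client(session=sess)

# After successful authentication the session holds a scoped token that
# other OpenStack clients (Nova, Cinder, Glance, ...) can reuse.
print("Token issued:", sess.get_token())
```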
Servers can also enforce client-side authentication using certificates. This helps to mitigate brute-force and phishing attacks that may compromise administrator passwords.

Authentication methods – internal and external

Keystone stores user credentials in a database or may use an LDAP-compliant directory server. The Keystone identity database can be kept separate from the databases used by other OpenStack services to reduce the risk of a compromise of the stored credentials. When we use the username and password to authenticate, identity does not apply policies for password strength, expiration, or failed authentication attempts. For this, we need to implement external authentication services.

To integrate an external authentication system, or to use an existing directory service for user account management, we can use LDAP, which simplifies the integration process. In OpenStack, the authentication and authorization policy may be delegated to another service. For example, consider an organization that is going to deploy a private cloud and already has a database of employees and users in an LDAP system. Using this LDAP as an authentication authority, requests to the identity service (Keystone) are forwarded to the LDAP system, which allows or denies requests based on its policies. After successful authentication, the identity service generates a token for access to the authorized services. Now, if the LDAP has already defined attributes for the user, such as the admin, finance, and HR departments, these must be mapped into roles and groups within identity for use by the various OpenStack services. We define this mapping in the Keystone configuration file stored at /etc/keystone/keystone.conf.

Keystone must not be allowed to write to the LDAP used for authentication outside of the OpenStack scope, as a sufficiently privileged Keystone user could otherwise make changes to the LDAP directory, which is not desirable from a security point of view and can also lead to unauthorized access to other information and resources. So, if we have another authentication provider such as LDAP or Active Directory, user provisioning always happens in that authentication provider's system.

For external authentication, we have the following methods:

- MFA: The MFA service requires the user to provide additional layers of information for authentication, such as a one-time password token or an X.509 certificate (called an MFA token). Once MFA is implemented, the user has to enter the MFA token after entering the user ID and password for a successful login.
- Password policy enforcement: Once the external authentication service is in place, we can define the strength of user passwords to conform to minimum standards for length, diversity of characters, expiration, or failed login attempts (see the sketch after this list).

Keystone also supports TLS-based client authentication. TLS client authentication provides an additional authentication factor, apart from the username and password, which gives greater reliability in user identification. It reduces the risk of unauthorized access when usernames and passwords are compromised. However, TLS-based authentication is not cost-effective, as we need to have a certificate for each of the clients.
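As a small illustration of the kind of password policy an external provider might enforce, here is a standard-library Python sketch; the minimum length and the three-of-four character-class rule are assumptions chosen for the example, not values mandated by Keystone:

```python
import re

def password_is_acceptable(password: str, min_length: int = 12) -> bool:
    """Check length and require at least three of four character classes."""
    if len(password) < min_length:
        return False
    classes = [
        re.search(r"[a-z]", password),          # lowercase
        re.search(r"[A-Z]", password),          # uppercase
        re.search(r"[0-9]", password),          # digits
        re.search(r"[^a-zA-Z0-9]", password),   # non-alphanumeric
    ]
    return sum(1 for c in classes if c) >= 3

if __name__ == "__main__":
    print(password_is_acceptable("weakpass"))           # False
    print(password_is_acceptable("Str0nger-Passw0rd"))  # True
```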
Authorization

Keystone also provides the option of groups and roles. Users belong to groups, and a group has a list of roles. All of the OpenStack services, such as Cinder, Glance, Nova, and Horizon, reference the roles of the user attempting to access the service. OpenStack policy enforcers always consider the policy rule associated with each resource and use the user's group or role, and their association, to determine whether to allow or deny access to the service.

Before configuring roles, groups, and users, we should document the required access control policies for the OpenStack installation. The policies must comply with the regulatory or legal requirements of the organization, and any additional changes to the access control configuration should be made according to these formal policies. The policies must include the conditions and processes for creating, deleting, disabling, and enabling accounts, and for assigning privileges to the accounts. These policies need to be reviewed from time to time to ensure that the configuration remains in compliance with the approved policies.

For user creation and administration, a user with the admin role must be created in Keystone for each OpenStack service. This account provides the service with the authorization to authenticate users. Nova (compute) and Swift (object storage) can be configured to use the identity service to store authentication information. For a test environment, we can use tempAuth, which records user credentials in a text file, but it is not recommended for a production environment.

The OpenStack administrator must protect sensitive configuration files from unauthorized modification with access control frameworks such as SELinux (mandatory access control) or DAC. We also need to protect the Keystone configuration files, which are stored at /etc/keystone/keystone.conf, as well as the X.509 certificates.

It is recommended that cloud admin users authenticate using the identity service (Keystone) together with an external authentication service that supports two-factor authentication. Authenticating with two-factor authentication reduces the risk of compromised passwords. This is also recommended in the NIST guideline NIST 800-53 IA-2(1), which defines MFA for network access to privileged accounts, where one factor is provided by a device separate from the system being accessed.

Policy, tokens, and domains

In OpenStack, every service defines the access policies for its resources in a policy file, where a resource can be, for example, API access, the ability to create and attach a Cinder volume, or the ability to create an instance. The policy rules are defined in JSON format in a file called policy.json. Only administrators can modify the service-based policy.json file to control access to the various resources. However, one also has to ensure that any changes to the access control policies do not unintentionally breach, or create an option to breach, the security of any resource. Any changes made to policy.json are applied immediately and do not require a service restart.

After a user is authenticated, a token is generated for authorization and access to an OpenStack environment. A token can have a variable lifespan, but the default value is 1 hour. It is recommended to lower the lifespan of the token to a level at which the internal services can still complete their tasks within the specified timeframe; if the token expires before task completion, the system can become unresponsive. Keystone also supports token revocation, using an API to revoke a token and to list revoked tokens.

In the OpenStack Newton release, there were four supported token types: UUID, PKI, PKIZ, and fernet. Since the OpenStack Ocata release, there are two supported token types: UUID and fernet.
We'll see each of these token types in detail here:

- UUID: These are persistent tokens. UUID tokens are 32 bytes in length and must be persisted in the backend. They are stored in the Keystone backend, along with the metadata for authentication. All clients must pass their UUID token to Keystone (the identity service) in order to validate it.
- PKI and PKIZ: These are signed documents that contain the authentication content as well as the service catalog. The difference between them is that PKIZ tokens are compressed to help mitigate the size issues of PKI (PKI tokens can become very long). Both of these token types became obsolete after the Ocata release. The length of PKI and PKIZ tokens typically exceeds 1,600 bytes. The identity service uses public and private key pairs and certificates in order to create and validate these tokens.
- Fernet: These tokens are the default token provider as of the OpenStack Pike release. Fernet is a secure messaging format explicitly designed for use in API tokens. The tokens are non-persistent, lightweight (in the range of 180 to 240 bytes), and reduce operational overhead. Authentication and authorization metadata is neatly bundled into a message-packed payload, which is then encrypted and signed as a fernet token.

In OpenStack, the Keystone service domain is a high-level container for projects, users, and groups. Domains are used to centrally manage all Keystone-based identity components. Compute, storage, and other resources can be logically grouped into multiple projects, which can further be grouped under a master account. Users of different domains can be represented in different authentication backends and have different attributes that must be mapped to a single set of roles and privileges in the policy definitions to access the various service resources. Domain-specific authentication drivers allow the identity service to be configured for multiple domains using domain-specific configuration files referenced from keystone.conf.

Federated identity

Federated identity enables you to establish trust between identity providers and the cloud environment (here, the OpenStack cloud). It gives you secure access to cloud resources using your existing identity, so you do not need to remember multiple credentials to access your applications. Why use federated identity?

- It enables your security team to manage all users (cloud or non-cloud) from a single identity application.
- It enables you to set up different identity providers on a per-application basis, though doing so without federation would create additional workload for the security team and introduce security risk.
- It makes life easier for users by providing them with a single credential for all of their apps, so they can save the time they would otherwise spend on the forgot-password page.

Federated identity enables you to have a single sign-on mechanism. We can implement it using SAML 2.0; to do this, you need to run the identity service provider under Apache.

We learned about securing your private cloud and the authentication process therein. If you've enjoyed this article, do check out 'Cloud Security Automation' for a hands-on experience of automating your cloud security and governance.

Top 5 cloud security threats to look out for in 2018
Cloud Security Tips: Locking Your Account Down with AWS Identity Access Manager (IAM)

Amazon S3 Security access and policies

Savia Lobo
03 May 2018
7 min read
In this article, you will get to know about Amazon S3 and the security access and policies associated with it. AWS provides S3 as an object storage service, where you can store object files from 1 KB to 5 TB in size at a low cost. It's highly secure, durable, and scalable, has unlimited capacity, and allows concurrent read/write access to objects by separate clients and applications. You can store any type of file in AWS S3 storage.

This article is an excerpt taken from the book 'Cloud Security Automation', written by Prashant Priyam.

AWS keeps multiple copies of all the data stored in the standard S3 storage, replicated across devices in the region to ensure a durability of 99.999999999%. S3 cannot be used as block storage.

AWS S3 storage is further categorized into three different classes:

- S3 Standard: suitable when we need durable storage for files with frequent access.
- Reduced Redundancy Storage: suitable when we have less critical data that is persistent in nature.
- Infrequent Access (IA): suitable when you have durable data with infrequent access. You could opt for Glacier instead, but Glacier has a very long retrieval time, so S3 IA becomes the suitable option; it provides the same performance as S3 Standard storage.

AWS S3 has inbuilt error correction and fault tolerance capabilities. Apart from this, in S3 you have the option to enable versioning and cross-region replication (CRR). If you enable versioning on an existing bucket, versioning is applied only to new objects in that bucket, not to existing objects. The same applies to cross-region replication: once enabled, it replicates only new objects.

Security in Amazon S3

S3 is highly secure storage, in which we can enable fine-grained access policies for resource access and encryption. To enable access-level security, you can use the following:

- S3 bucket policy
- IAM access policy
- MFA for object deletion

The S3 bucket policy is a JSON document that defines what will be accessed by whom and at what level:

    {
      "Version": "2008-10-17",
      "Statement": [
        {
          "Sid": "AllowPublicRead",
          "Effect": "Allow",
          "Principal": { "AWS": "*" },
          "Action": ["s3:GetObject"],
          "Resource": ["arn:aws:s3:::prashantpriyam/*"]
        }
      ]
    }

In the preceding JSON code, we have allowed read-only access to all the objects (as defined in the Action section) for an S3 bucket named prashantpriyam (defined in the Resource section).

Similar to the S3 bucket policy, we can also define an IAM policy for S3 bucket access:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [
            "s3:GetBucketLocation",
            "s3:ListAllMyBuckets"
          ],
          "Resource": "arn:aws:s3:::*"
        },
        {
          "Effect": "Allow",
          "Action": ["s3:ListBucket"],
          "Resource": ["arn:aws:s3:::prashantpriyam"]
        },
        {
          "Effect": "Allow",
          "Action": [
            "s3:PutObject",
            "s3:GetObject",
            "s3:DeleteObject"
          ],
          "Resource": ["arn:aws:s3:::prashantpriyam/*"]
        }
      ]
    }

In the preceding policy, we want to give the user full permissions on the S3 bucket from the AWS console as well.
In the following section of the policy (JSON code), we have granted the user permission to get the bucket location and list all of the buckets for traversal, but not to perform other operations, such as getting object details from the bucket:

    {
      "Effect": "Allow",
      "Action": [
        "s3:GetBucketLocation",
        "s3:ListAllMyBuckets"
      ],
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::prashantpriyam"]
    },

In the second section of the policy (specified as follows), we have given the user permission to traverse into the prashantpriyam bucket and perform PUT, GET, and DELETE operations on its objects:

    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": ["arn:aws:s3:::prashantpriyam/*"]
    }

MFA adds additional security to your account: after password-based authentication, it asks you to provide the temporary code generated by an MFA device (you can also use a virtual MFA such as Google Authenticator). Amazon S3 supports MFA-protected API access, which helps to enforce an MFA-based access policy on an S3 bucket.

Let's look at an example where we give users read-only access to a bucket while all other operations require an MFA token that is no more than 600 seconds old:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "",
          "Effect": "Deny",
          "Principal": "*",
          "Action": "s3:*",
          "Resource": "arn:aws:s3:::prashantpriyam/priyam/*",
          "Condition": { "Null": { "aws:MultiFactorAuthAge": true } }
        },
        {
          "Sid": "",
          "Effect": "Deny",
          "Principal": "*",
          "Action": "s3:*",
          "Resource": "arn:aws:s3:::prashantpriyam/priyam/*",
          "Condition": { "NumericGreaterThan": { "aws:MultiFactorAuthAge": 600 } }
        },
        {
          "Sid": "",
          "Effect": "Allow",
          "Principal": "*",
          "Action": ["s3:GetObject"],
          "Resource": "arn:aws:s3:::prashantpriyam/*"
        }
      ]
    }

In the preceding policy, requests against the priyam/ prefix are denied unless they carry an MFA token, and denied again if that token is older than 600 seconds, while read-only GetObject access to the bucket remains allowed.

Apart from MFA, we can enable versioning so that S3 automatically keeps multiple versions of an object, eliminating the risk of unwanted modification of data. Versioning can be enabled from the AWS Console (it is also exposed through the S3 API). You can also enable cross-region replication so that the S3 bucket content is replicated to another selected region; this option is mostly used when you want to deliver static content in two different regions, but it also gives you redundancy. For infrequently accessed data, you can enable a lifecycle policy, which helps you transfer objects to the low-cost archival storage called Glacier.
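Before moving on to the console walkthrough, here is a hedged boto3 sketch of the same controls applied programmatically — attaching the read-only bucket policy, enabling versioning, and turning on default AES-256 encryption. The bucket name mirrors the examples above, and the calls should be checked against the current S3 API before use:

```python
import json

import boto3

s3 = boto3.client("s3")
bucket = "prashantpriyam"  # bucket name reused from the examples above

# Read-only bucket policy equivalent to the AllowPublicRead example.
policy = {
    "Version": "2008-10-17",
    "Statement": [{
        "Sid": "AllowPublicRead",
        "Effect": "Allow",
        "Principal": {"AWS": "*"},
        "Action": ["s3:GetObject"],
        "Resource": [f"arn:aws:s3:::{bucket}/*"],
    }],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))

# Versioning keeps prior object versions to guard against unwanted changes.
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)

# Default encryption: new objects are stored with SSE-S3 (AES-256).
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
    },
)
```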
Let's see how to secure the S3 bucket using the AWS Console. To do this, we need to log in to the AWS Console and search for S3, then click on the bucket we want to secure. In the screenshot, we have selected the bucket called velocis-manali-trip-112017 and, in the bucket properties, we can see that we have not enabled the security options that we have learned so far. Let's implement the security.

Now, we need to click on the bucket and then on the Properties tab. From here, we can enable Versioning, Default encryption, Server access logging, and Object-level logging. To enable Server access logging, you need to specify the name of the target bucket and a prefix for the logs. To enable encryption, you need to specify whether you want to use AES-256 or AWS KMS based encryption.

Now, click on the Permissions tab. From here, you will be able to define the Access Control List, Bucket Policy, and CORS configuration. In the Access Control List, you define who will access what and to what extent; in the Bucket Policy, you define resource-based permissions on the bucket (as we saw in the bucket policy example); and in the CORS configuration, we define the rules for CORS. Let's look at a sample CORS file:

    <!-- Sample policy -->
    <CORSConfiguration>
      <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <AllowedHeader>Authorization</AllowedHeader>
      </CORSRule>
    </CORSConfiguration>

It's an XML document that allows read-only (GET) access to all origins; instead of a URL, the origin is the wildcard *, which means anyone.

Now, click on the Management section. From here, we define lifecycle rules, replication, and so on. In the lifecycle rule shown, an S3 object is transitioned to the Standard-IA tier after 30 days and to Glacier after 60 days. This is how we define security on the S3 bucket.

To summarize, we learned about security access and policies in Amazon S3. If you've enjoyed reading this, do check out this book, 'Cloud Security Automation', to know how private cloud security functions can be automated for better time and cost-effectiveness.

Creating and deploying an Amazon Redshift cluster
Amazon Sagemaker makes machine learning on the cloud easy
How cybersecurity can help us secure cyberspace

Getting Started with Metasploit

Packt
22 Jun 2017
10 min read
In this article by Nipun Jaswal, the author of the book Metasploit Bootcamp, we will be covering the following topics:

- Fundamentals of Metasploit
- Benefits of using Metasploit

Penetration testing is the art of performing a deliberate attack on a network, web application, server, or any device that requires a thorough check-up from the security perspective. The idea of a penetration test is to uncover flaws while simulating real-world threats. A penetration test is performed to figure out vulnerabilities and weaknesses in systems so that vulnerable systems can stay immune to threats and malicious activities. Achieving success in a penetration test largely depends on using the right set of tools and techniques. A penetration tester must choose the right set of tools and methodologies in order to complete a test. While talking about the best tools for penetration testing, the first one that comes to mind is Metasploit. It is considered one of the most practical tools to carry out penetration testing today. Metasploit offers a wide variety of exploits, a great exploit development environment, information gathering and web testing capabilities, and much more.

The fundamentals of Metasploit

Now that we have completed the setup of Kali Linux, let us talk about the big picture: Metasploit. Metasploit is a security project that provides exploits and tons of reconnaissance features to aid a penetration tester. Metasploit was created by H.D. Moore back in 2003, and since then its rapid development has led it to be recognized as one of the most popular penetration testing tools. Metasploit is an entirely Ruby-driven project and offers a great deal of exploits, payloads, encoding techniques, and loads of post-exploitation features.

Metasploit comes in various editions, as follows:

- Metasploit Pro: This commercial edition offers tons of great features, such as web application scanning and exploitation and automated exploitation, and is quite suitable for professional penetration testers and IT security teams. The Pro edition is used for advanced penetration tests and enterprise security programs.
- Metasploit Express: The Express edition is used for baseline penetration tests. Features in this version of Metasploit include smart exploitation, automated brute forcing of credentials, and much more. This version is quite suitable for IT security teams at small to medium-sized companies.
- Metasploit Community: This is a free version with reduced functionality compared to the Express edition. However, for students and small businesses, this edition is a favorable choice.
- Metasploit Framework: This is the command-line version, where all tasks such as manual exploitation and third-party imports are performed manually. This release is entirely suitable for developers and security researchers.

You can download Metasploit from the following link: https://www.rapid7.com/products/metasploit/download/editions/

We will be using the Metasploit Community and Framework versions. Metasploit also offers various types of user interfaces, as follows:

- The graphical user interface (GUI): This has all the options available at the click of a button. It offers a user-friendly interface that helps to provide cleaner vulnerability management.
- The console interface: This is the most preferred and most popular interface. It provides an all-in-one approach to all the options offered by Metasploit and is also considered one of the most stable interfaces.
- The command-line interface: This is a more potent interface that supports everything from launching exploits to activities such as payload generation. However, remembering each and every command while using the command-line interface is a difficult job.
- Armitage: Armitage by Raphael Mudge adds a neat hacker-style GUI to Metasploit. Armitage offers easy vulnerability management, built-in Nmap scans, exploit recommendations, and the ability to automate features using the Cortana scripting language.

Basics of the Metasploit framework

Before we put our hands onto the Metasploit framework, let us understand the basic terminology used in Metasploit. The following are not just terms but modules that are the heart and soul of the Metasploit project:

- Exploit: A piece of code which, when executed, will trigger the vulnerability at the target.
- Payload: A piece of code that runs at the target after successful exploitation. It defines the type of access and actions we need to gain on the target system.
- Auxiliary: Modules that provide additional functionality such as scanning, fuzzing, sniffing, and much more.
- Encoder: Encoders are used to obfuscate modules to avoid detection by a protection mechanism such as an antivirus or a firewall.
- Meterpreter: A payload that uses in-memory stagers based on DLL injection. It provides a variety of functions to perform at the target, which makes it a popular choice.

Architecture of Metasploit

Metasploit comprises various components such as extensive libraries, modules, plugins, and tools. A diagrammatic view of the structure of Metasploit is as follows. Let's see what these components are and how they work, starting with the libraries that act as the heart of Metasploit:

- REX: Handles almost all core functions, such as setting up sockets, connections, formatting, and all other raw functions.
- MSF CORE: Provides the underlying API and the actual core that describes the framework.
- MSF BASE: Provides friendly API support to modules.

We have many types of modules in Metasploit, and they differ in functionality. We have payload modules for creating access channels to exploited systems, and auxiliary modules to carry out operations such as information gathering, fingerprinting, fuzzing an application, and logging into various services. Let's examine the basic functionality of these modules:

- Payloads: Payloads are used to carry out operations such as connecting to or from the target system after exploitation, or performing a particular task such as installing a service. Payload execution is the next step after the system is exploited successfully.
- Auxiliary: Auxiliary modules are a special kind of module that performs specific tasks such as information gathering, database fingerprinting, scanning the network to find a particular service, enumeration, and so on.
- Encoders: Encoders are used to encode payloads and attack vectors to (or intending to) evade detection by antivirus solutions or firewalls.
- NOPs: NOP generators are used for alignment, which results in making exploits stable.
- Exploits: The actual code that triggers a vulnerability.

Metasploit framework console and commands

Having gathered knowledge of the architecture of Metasploit, let us now run Metasploit to get hands-on knowledge of the commands and the different modules. To start Metasploit, we first need to establish a database connection so that everything we do can be logged into the database. Using a database also speeds up Metasploit's load time by making use of caching and indexes for all modules. Therefore, let us start the postgresql service by typing in the following command at the terminal:

    root@beast:~# service postgresql start

Now, to initialize Metasploit's database, let us initialize msfdb as shown in the following screenshot. It is clearly visible in the preceding screenshot that we have successfully created the initial database schema for Metasploit. Let us now start Metasploit's database using the following command:

    root@beast:~# msfdb start

We are now ready to launch Metasploit. Let us issue msfconsole in the terminal to start Metasploit, as shown in the following screenshot.

Once we are at the Metasploit console, let us run the help command to see what other commands are available to us. The commands shown in the preceding screenshot are core Metasploit commands, which are used to set/get variables, load plugins, route traffic, unset variables, print the version, find the history of commands issued, and much more. These commands are pretty general. Let's see the module-based commands next.

Everything related to a particular module in Metasploit comes under the module controls section of the help menu. Using these commands, we can select a particular module, load modules from a particular path, get information about a module, show core and advanced options related to a module, and even edit a module inline.
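Since everything done at the msfconsole prompt can also be scripted, here is an illustrative Python wrapper around a Metasploit resource script, run non-interactively with msfconsole -r. The target address is a placeholder for a lab host you own, the module commands (use, set, run) are covered in detail in the next section, and the scanner options shown should be verified with show options before relying on them:

```python
import subprocess
import tempfile

# Commands mirror the console workflow covered in this article.
# 192.168.10.112 is a placeholder lab address; scan only systems you own.
RESOURCE_SCRIPT = """\
use auxiliary/scanner/portscan/tcp
set RHOSTS 192.168.10.112
set PORTS 1-1024
run
exit
"""

def run_msf_resource(script_text: str) -> str:
    """Write a Metasploit resource script and execute it with msfconsole."""
    with tempfile.NamedTemporaryFile("w", suffix=".rc", delete=False) as rc:
        rc.write(script_text)
        rc_path = rc.name
    # -q suppresses the banner, -r runs the resource script.
    result = subprocess.run(
        ["msfconsole", "-q", "-r", rc_path],
        capture_output=True, text=True, check=False,
    )
    return result.stdout

if __name__ == "__main__":
    print(run_msf_resource(RESOURCE_SCRIPT))
```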
Let us learn some basic commands in Metasploit and familiarize ourselves with their syntax and semantics:

- use [auxiliary/exploit/payload/encoder]: selects a particular module. Examples: msf > use exploit/unix/ftp/vsftpd_234_backdoor, msf > use auxiliary/scanner/portscan/tcp
- show [exploits/payloads/encoder/auxiliary/options]: shows the list of available modules of a particular type. Examples: msf > show payloads, msf > show options
- set [options/payload]: sets a value for a particular object. Examples: msf > set payload windows/meterpreter/reverse_tcp, msf > set LHOST 192.168.10.118, msf > set RHOST 192.168.10.112, msf > set LPORT 4444, msf > set RPORT 8080
- setg [options/payload]: assigns a value to a particular object globally, so the value does not change when a module is switched. Example: msf > setg RHOST 192.168.10.112
- run: launches an auxiliary module after all the required options are set. Example: msf > run
- exploit: launches an exploit. Example: msf > exploit
- back: unselects a module and moves back. Example: msf(ms08_067_netapi) > back
- info: lists the information related to a particular exploit/module/auxiliary. Examples: msf > info exploit/windows/smb/ms08_067_netapi, msf(ms08_067_netapi) > info
- search: finds a particular module. Example: msf > search hfs
- check: checks whether a particular target is vulnerable to the exploit or not. Example: msf > check
- sessions: lists the available sessions. Example: msf > sessions [session number]

The following are commonly used Meterpreter commands:

- sysinfo: lists system information of the compromised host
- ifconfig: lists the network interfaces on the compromised host (ipconfig on Windows)
- arp: lists the IP and MAC addresses of hosts connected to the target
- background: sends an active session to the background
- shell: drops a cmd shell on the target
- getuid: gets the current user details
- getsystem: escalates privileges and gains system access
- getpid: gets the process ID of the Meterpreter access
- ps: lists all the processes running on the target

If you are using Metasploit for the very first time, refer to http://www.offensive-security.com/metasploit-unleashed/Msfconsole_Commands for more information on basic commands.

Benefits of using Metasploit

Metasploit is an excellent choice when compared to traditional manual techniques because of certain factors, which are listed as follows:

- The Metasploit framework is open source
- Metasploit supports large testing networks by making use of CIDR identifiers
- Metasploit offers quick generation of payloads, which can be changed or switched on the fly
- Metasploit leaves the target system stable in most cases
- The GUI environment provides a fast and user-friendly way to conduct penetration testing

Summary

Throughout this article, we learned the basics of Metasploit. We learned about the syntax and semantics of various Metasploit commands. We also learned the benefits of using Metasploit.

Resources for Article:

Further resources on this subject:
Approaching a Penetration Test Using Metasploit [article]
Metasploit Custom Modules and Meterpreter Scripting [article]
So, what is Metasploit? [article]