How-To Tutorials - Cybersecurity

90 Articles

Securing vCloud Using the vCloud Networking and Security App Firewall

Packt
12 Nov 2013
6 min read
Creating a vCloud Networking and Security App firewall rule

In this article, we will create a VMware vCloud Networking and Security App firewall rule that restricts inbound HTTP traffic destined for a web server:

1. Open the vCloud Networking and Security Manager URL in a supported browser; it can also be accessed from the vCenter client.
2. Log in to vCloud Networking and Security as admin.
3. In the vCloud Networking and Security Manager inventory pane, go to Datacenters | Your Datacenter.
4. In the right-hand pane, click on the App Firewall tab.
5. Click on the Networks link.
6. On the General tab, click on the + link.
7. Point to the new rule Name cell and click on the + icon. In the rule Name panel, type Deny HTTP in the textbox and click on OK.
8. Point to the Destination cell and click on the + icon. In the input panel, perform the following actions:
   1. Go to IP Addresses from the drop-down menu.
   2. At the bottom of the panel, click on the New IP Addresses link.
   3. In the Add IP Addresses panel, configure an address set that includes the web server.
   4. Click on OK.
9. Point to the Service cell and click on the + icon. In the input panel, perform the following actions:
   1. Sort the Available list by name.
   2. Scroll down and go to the HTTP service checkbox.
   3. Click on the blue right-arrow to move the HTTP service from the Available list to the Selected list.
   4. Click on OK.
10. Go to the Action cell and click on the + icon. In the input panel, click on Block and Log, then click on OK.
11. Click on the Publish Changes button, located above the rules list, on the green bar.

In general, create firewall rules that meet your business needs. In addition, you might consider the following guidelines:

- Where possible, when identifying the source and destination, take advantage of vSphere groupings in your vCenter Server inventory, such as the datacenter, cluster, and vApp. By writing rules in terms of these groupings, the number of firewall rules is reduced, which makes the rules easier to track and less prone to configuration errors.
- If a vSphere grouping does not suit your needs because you need to create a more specialized group, take advantage of security groups. Like vSphere groupings, security groups reduce the number of rules that you need to create, making the rules easier to track and maintain.
- Finally, set the action on the default firewall rules based on your business policy. For example, as a security best practice, you might deny all traffic by default. If all traffic is denied, vCloud Networking and Security App drops all incoming and outgoing traffic. Allowing all traffic by default makes your datacenter very accessible, but also insecure.

vCloud Networking and Security App – flow monitoring

Flow monitoring is a traffic analysis tool that provides a detailed view of the traffic on your virtual network that has passed through a vCloud Networking and Security App. The flow monitoring output shows which machines are exchanging data and over which applications. This data includes the number of sessions, packets, and bytes transmitted per session. Session details include sources, destinations, direction of sessions, applications, and ports used, and they can be used to create firewall rules to allow or block traffic. You can use flow monitoring as a forensic tool to detect rogue services and examine outbound sessions.
The main advantages of flow monitoring are:

- You can easily analyze inter-VM traffic.
- Dynamic rules can be created right from the flow monitoring console.
- You can use it for debugging network-related problems, as you can enable logging for every individual virtual machine on an as-needed basis.

You can view traffic sessions inspected by a vCloud Networking and Security App within a specified time span. The last 24 hours of data are displayed by default; the minimum time span is 1 hour and the maximum is 2 weeks. The bar at the top of the page shows the percentage of allowed traffic in green and blocked traffic in red.

Examining flow monitoring statistics

Let us examine the statistics for the Top Flows, Top Destinations, and Top Sources categories:

1. Open the vCloud Networking and Security Manager URL in a supported browser.
2. Log in to vCloud Networking and Security as admin.
3. In the vCloud Networking and Security Manager inventory pane, go to Datacenters | Your Datacenter.
4. In the right-hand pane, click on the Network Virtualization link.
5. Click on the Networks link.
6. In the networks list, click on the network where you want to monitor the flow.
7. Click on the Flow Monitoring button.
8. Verify that Flow Monitoring | Summary is selected.
9. On the far right side of the page, across from the Summary and Details links, click on the Time Interval Change link.
10. On the Time Interval panel, select the Last 1 week radio button and click on Update.
11. Verify that the Top Flows button is selected.
12. Use the Top Flows table to determine which flow has the highest volume of bytes and which flow has the highest volume of packets.
13. Use the mouse wheel or the vertical scroll bar to view the graph. Point to the apex of three different colored lines and determine which network protocol is reported.
14. Scroll to the top of the form and click on the Top Destinations button.
15. Use the Top Destinations table to determine which destination has the highest volume of incoming bytes and which destination has the highest volume of packets.
16. Use the mouse wheel or the vertical scroll bar to view the graph.
17. Scroll to the top of the form and click on the Top Sources button.
18. Use the Top Sources table to determine which source has the highest volume of bytes and which source has the highest volume of packets.
19. Use the mouse wheel or the vertical scroll bar to view the graph.

Summary

In this article, we learned how to create access control policies based on logical constructs, such as VMware vCenter Server containers and VMware vCloud Networking and Security security groups, rather than just physical constructs such as IP addresses.


General Considerations

Packt
25 Oct 2013
9 min read
Building secure Node.js applications requires an understanding of the many different layers that they are built upon. Starting from the bottom, we have the language specification, which defines what JavaScript consists of. Next, the virtual machine executes your code and may differ from the specification. Following that, the Node.js platform and its API have details in their operation that affect your applications. Lastly, third-party modules interact with our own code and need to be audited for secure programming practices.

First, JavaScript's official name is ECMAScript. Ecma International (originally the European Computer Manufacturers Association, ECMA) first standardized the language as ECMAScript in 1997. This ECMA-262 specification defines what comprises JavaScript as a language, including its features and even some of its bugs. Some of its general quirkiness has remained unchanged in the specification to maintain backward compatibility. While I won't say the specification itself is required reading, I will say that it is worth considering.

Second, Node.js uses Google's V8 virtual machine to interpret and execute your source code. While developing for the browser, you have to consider all the other virtual machines (not to mention versions) when it comes to available features. In a Node.js application, your code only runs on the server, so you have much more freedom, and you can use all the features available to you in V8. Additionally, you can optimize for the V8 engine exclusively.

Next, Node.js handles setting up the event loop; your code registers callbacks for events, and Node.js executes them accordingly. There are some important details regarding how Node.js responds to exceptions and other errors that you will need to be aware of while developing your applications.

Atop Node.js is the developer API. This API is written mostly in JavaScript, which allows you, as a JavaScript developer, to read it for yourself and understand how it works. There are many provided modules that you will likely end up using, and it's important for you to know how they work, so you can code defensively.

Last, but not least, the third-party modules that npm gives you access to are in great abundance, which can be a double-edged sword. On one hand, you have many options to pick from that suit your needs. On the other hand, third-party code is a potential security liability, as you will be expected to support and audit each of these modules (in addition to their own dependencies) for security vulnerabilities.

JavaScript security

One of the biggest security risks in JavaScript itself, both on the client and now on the server, is the use of the eval() function. This function, and others like it, takes a string argument, which can represent an expression, statement, or series of statements, and executes it as any other JavaScript source code. This is demonstrated in the following code:

```js
// these variables are available to eval()'d code
// assume these variables are user input from a POST request
var a = req.body.a; // => 1
var b = req.body.b; // => 2
var sum = eval(a + "+" + b); // same as '1 + 2'
```

This code has full access to the current scope and can even affect the global object, giving it an alarming amount of control. Let's look at the same code, but imagine if someone malicious sent arbitrary JavaScript code instead of a simple number.
The result is shown in the following code:

```js
var a = req.body.a; // => 1
var b = req.body.b; // => 2; console.log("corrupted");
var sum = eval(a + "+" + b); // same as '1 + 2; console.log("corrupted");'
```

Due to how eval() is exploited here, we are witnessing a "remote code execution" attack! When executed directly on the server, an attacker could gain access to server files and databases. There are a few cases where eval() can be useful, but if user input is involved in any step of the process, it should be avoided at all costs!

There are other features of JavaScript that are functional equivalents to eval(), and they should likewise be avoided unless absolutely necessary. First is the Function constructor, which allows you to create a callable function from strings, as shown in the following code:

```js
// creates a function that returns the sum of 2 arguments
var adder = new Function("a", "b", "return a + b");
adder(1, 2); // => 3
```

While very similar to the eval() function, it is not exactly the same, because it does not have access to the current scope. However, it does still have access to the global object and should be avoided whenever user input is involved.

If you find yourself in a situation where there is an absolute need to execute arbitrary code that involves user input, you do have one safer option: the Node.js platform's API includes a vm module that is meant to give you the ability to compile and run code in a sandbox, preventing manipulation of the global object and even the current scope. It should be noted that the vm module has many known issues and edge cases; you should read the documentation and understand all the implications of what you are doing to make sure you don't get caught off-guard.
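The text mentions the vm module without showing it in action, so here is a minimal sketch (mine, not from the original article) of running the earlier arithmetic example in a fresh context; the hardcoded values stand in for the req.body input used above:

```js
// Minimal sandboxing sketch using Node's built-in vm module.
var vm = require("vm");

var a = "1"; // imagine user input, as in the eval() examples above
var b = "2";

// The evaluated code sees only what we place in this object,
// not our current scope and not the real global object.
var sandbox = {};

var sum = vm.runInNewContext(a + "+" + b, sandbox);
console.log(sum); // => 3
```

Note that injected code still executes inside the sandbox, so this is containment rather than validation, and the caveats above about the vm module's known issues and edge cases still apply.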
ES5 features

ECMAScript 5 included an extensive batch of changes to JavaScript, including the following:

- Strict mode, for removing unsafe features from the language
- Property descriptors, which give you control over object and property access
- Functions for changing object mutability

Strict mode

Strict mode changes the way JavaScript code runs in select cases. First, it causes errors to be thrown in cases that were silent before. Second, it removes and/or changes features that made optimizations for JavaScript engines either difficult or impossible. Lastly, it prohibits some syntax that is likely to show up in future versions of JavaScript. Strict mode is opt-in only, and can be applied either globally or to an individual function scope.

For Node.js applications, to enable strict mode globally, add the --use_strict command-line flag when executing your program. When dealing with third-party modules that may or may not be using strict mode, this can potentially have negative side effects on your overall application. With that said, you could make strict mode compliance a requirement for any audits on third-party modules.

Strict mode can be enabled by adding the "use strict" pragma at the beginning of a function, before any other expressions, as shown in the following code:

```js
function sayHello(name) {
    "use strict"; // enables strict mode for this function scope
    console.log("hello", name);
}
```

In Node.js, all required files are wrapped in a function expression that handles the CommonJS module API. As a result, you can enable strict mode for an entire file by simply putting the directive at the top of the file. This will not enable strict mode globally, as it would in an environment like the browser.

Strict mode makes many changes to the syntax and runtime behavior, but for the sake of brevity we will only discuss the changes relevant to application security.

First, scripts run via eval() in strict mode cannot introduce new variables to the enclosing scope. This prevents leaking new and possibly conflicting variables into your code when you run eval(), as shown in the following code:

```js
"use strict";
eval("var a = true");
console.log(a); // ReferenceError thrown – a does not exist
```

In addition, code run via eval() is not given access to the global object through its context. This is similar, if not related, to other changes for function scope, which will be explained shortly, but it is specifically important for eval(), as it can no longer use the global object to perform additional black magic.

It turns out that the eval() function can be overridden in JavaScript: simply create a new global variable called eval and assign something else, possibly malicious, to it. Strict mode prohibits this type of operation. eval is treated more like a language keyword than a variable, and attempting to modify it will result in a syntax error, as shown in the following code:

```js
// all of the examples below are syntax errors
"use strict";
eval = 1;
++eval;
var eval;
function eval() { }
```

Next, function objects are more tightly secured. Some common extensions to ECMAScript add the function.caller and function.arguments references to each function after it is invoked. Effectively, you can "walk" the call stack for a specific function by traversing these special references, which potentially exposes information that would normally appear to be out of scope. Strict mode simply makes these properties throw a TypeError when attempting to read or write them, as shown in the following code:

```js
"use strict";
function restricted() {
    restricted.caller;    // TypeError thrown
    restricted.arguments; // TypeError thrown
}
```

Next, arguments.callee is removed in strict mode (like function.caller and function.arguments shown previously). Normally, arguments.callee refers to the current function, but this magic reference also exposes a way to "walk" the call stack and possibly reveal information that previously would have been hidden or out of scope. In addition, this object makes certain optimizations difficult or impossible for JavaScript engines. Thus, it also throws a TypeError exception when access is attempted, as shown in the following code:

```js
"use strict";
function fun() {
    arguments.callee; // TypeError thrown
}
```

Lastly, functions executed with null or undefined as the context no longer coerce the global object as the context. This applies to eval(), as we saw earlier, but it goes further to prevent arbitrary access to the global object in other function invocations, as shown in the following code:

```js
"use strict";
(function () {
    console.log(this); // => null
}).call(null);
```

Strict mode can help make code far more secure than before, but ECMAScript 5 also includes access control through the property descriptor APIs. A JavaScript engine has always had the capability to define property access, but ES5 includes these APIs to give that same power to application developers.

Summary

In this article, we examined the security features that apply generally to the JavaScript language itself, including how to use static code analysis to check for many of the aforementioned pitfalls.
We also looked at some of the inner workings of a Node.js application and how it differs from typical browser development when it comes to security.


Mobile and Social - the Threats You Should Know About

Packt
17 Sep 2013
8 min read
A prediction of the future (and the lottery numbers for next week) scams

Security threats, such as malware, are starting to appear on mobile devices, as we are learning that mobile devices are not immune to viruses, malware, and other attacks. As PCs are increasingly being replaced by mobile devices, the incidence of new attacks against mobile devices is growing, and users have to take precautions to protect their mobile devices just as they would protect their PCs.

One major type of mobile cybercrime is the unsolicited text message that captures personal details. Another type involves an infected phone that sends out SMS messages that result in excess connectivity charges.

Mobile threats are on the rise: according to the Symantec report of 2012, 31 percent of all mobile users have received an SMS from someone that they didn't know. An example is where the user receives an SMS message that includes a link or phone number. This technique is used to install malware onto your mobile device, and to hoax you into disclosing personal or private data.

In 2012, Symantec released a new cybercrime report. They concluded that countries like Russia, China, and South Africa have the highest cybercrime incidence, with exploitation rates ranging from 80 to 92 percent. You can find this report at http://now-static.norton.com/now/en/pu/images/Promotions/2012/cybercrimeReport/2012_Norton_Cybercrime_Report_Master_FINAL_050912.pdf.

Malware

The most common type of threat is known as malware, which is short for malicious software. Malware is used or created by attackers to disrupt many types of computer operations, collect sensitive information, or gain access to a private mobile device or computer. It includes worms, Trojan horses, computer viruses, spyware, keyloggers, rootkits, and other malicious programs.

As mobile malware is increasing at a rapid pace, the U.S. government wants users to be aware of the dangers, so in October 2012 the FBI issued a warning about mobile malware (http://www.fbi.gov/sandiego/press-releases/2012/smartphone-users-should-be-aware-of-malware-targeting-mobile-devices-and-the-safety-measures-to-help-avoid-compromise).

The IC3 has been made aware of various malware attacking Android operating systems for mobile devices. Some of the latest known versions of this type of malware are Loozfon and FinFisher. Loozfon hooks its victims by emailing the user promising links, such as a profitable payday just for sending out email. It plants itself onto the phone when the user clicks on such a link, then attaches itself to the device and starts to collect information from it, including:

- Contact information
- E-mail addresses
- Phone numbers
- The phone number of the compromised device

On the other hand, a spyware called FinFisher can take over various components of a smartphone. According to the IC3, this malware infects the device through a text message or via a phony e-mail link. FinFisher attacks not only Android devices, but also devices running BlackBerry, iOS, and Windows.

Various security reports have shown that mobile malware is on the rise, and cybercriminals tend to target Android mobile devices. As a result, Android users are seeing an increasing amount of destructive Trojans, mobile botnets, SMS-sending malware, and spyware.
Some of these reports include:

- http://www.symantec.com/security_response/publications/threatreport.jsp
- http://pewinternet.org/Commentary/2012/February/Pew-Internet-Mobile.aspx
- https://www.lookout.com/resources/reports/mobile-threat-report

As stated recently in a Pew survey, more than fifty percent of U.S. mobile users are suspicious or concerned about their personal information, and have either refused to install apps for this reason or have uninstalled apps. In other words, the IC3 says: use the same precautions on your mobile devices as you would on your computer when using the Internet.

Toll fraud

Since the 1970s and 1980s, hackers have been using a process known as phreaking, a trick that provides a tone telling the phone that a control mechanism is being used to manage long-distance calls. Today, hackers use a technique known as toll fraud: malware that sends premium-rate SMSes from your device, incurring charges on your phone bill. Some toll fraud malware may trick you into agreeing to murky Terms of Service, while others can send premium text messages without any noticeable indicators. This is also known as premium-rate SMS malware or premium service abuser. A figure from Lookout Mobile Security in the original article portrays how toll fraud works.

According to VentureBeat, malware developers are after money, and the money is in toll fraud malware. Here is an example from http://venturebeat.com/2012/09/06/toll-fraud-lookout-mobile/. Remember commercials that say, "Text 666666 to get a new ringtone every day!"? The normal process includes the following steps:

1. The customer texts the number, alerting a collector (working for the ringtone provider) that he/she wants to order daily ringtones.
2. Through the collector, the ringtone provider sends a confirmation text message (or sometimes two, depending on that country's regulations) to the customer.
3. The customer approves the charges and starts getting ringtones.
4. The customer is billed through the wireless carrier.
5. The wireless carrier receives payment and sends the ringtone payment to the provider.

Now, let's look at the steps when your device is infected with the malware known as FakeInst:

1. The end user downloads a malware application that sends out an SMS message to that same ringtone provider.
2. As normal, the ringtone provider sends the confirmation message. In this case, instead of reaching the smartphone owner, the malware blocks this message and sends a fake confirmation message before the user ever knows.
3. The malware now places itself between the wireless carrier and the ringtone provider.
4. Pretending to be the collector, the malware extracts the money that was paid through the user's bill.

FakeInst is known to get around antivirus software by identifying itself as new or unique software.

Overall, Android devices are known to be impacted more by malware than iOS. One big reason for this is that Android devices can download applications from almost any location on the Internet, while Apple limits its users to downloading applications from the Apple App Store.

SMS spoofing

The third most common type of scam is called SMS spoofing. SMS spoofing allows a person to change the original mobile phone number or the name (sender ID) that a text message appears to come from. It is a fairly new technology that uses SMS on mobile phones. Spoofing can be used in both lawful and unlawful ways; impersonating a company, another person, or a product is an illegal use of spoofing.
Some nations have banned it due to concerns about the potential for fraud and abuse, while others may allow it. An example of how SMS spoofing is implemented is as follows: SMS spoofing occurs when the message sender's address information has been manipulated. This is often done to impersonate a cell phone user who is roaming on a foreign network and sending messages to a home area network. Often, these messages are addressed to users who are outside the home network, which is essentially being "hijacked" to send messages to other networks. The impacts of this activity include the following:

- The customer's network can receive termination charges caused by the valid delivery of these "bad" messages to interlink partners.
- Customers may complain about being spammed, or their message content may be sensitive.
- Interlink partners can cut off the home network unless a correction of these errors is implemented; once that happens, the phone service may be unable to send messages to these networks.
- There is a great risk that these messages will look like real messages, and real users can be billed for invalid roaming messages that they did not send.

There is a flaw within iPhone that allows SMS spoofing; it is vulnerable to text message spoofing even with the latest beta version, iOS 6. The problem with iPhone is that when the sender specifies a reply-to number this way, the recipient doesn't see the original phone number in the text message, so there's no way to know whether a text message has been spoofed or not. This opens up the user to other types of spoofing manipulation where the recipient thinks he/she is receiving a message from a trusted source. According to pod2g (http://www.pod2g.org/2012/08/never-trust-sms-ios-text-spoofing.html):

"In a good implementation of this feature, the receiver would see the original phone number and the reply-to one. On iPhone, when you see the message, it seems to come from the reply-to number, and you lose track of the origin."


vCloud Networks

Packt
13 Sep 2013
14 min read
Basics

Network virtualization is what makes vCloud Director such an awesome tool. However, before we go all out in the next article, we need to set up network virtualization, and this is what we will focus on here.

When we talk about isolated networks, we are talking about vCloud Director making use of different methods of Layer 2 in Layer 3 network encapsulation (OSI model). Basically, it is the same concept that was introduced with VLANs: VLANs split up the network communication in physical network cables into different, totally isolated communication streams. vCloud makes use of these isolated networks to create isolated Org and vApp networks.

vCloud Director has three different network items:

- An external network is a network that exists outside the vCloud, for example, a production network. It is basically a port group in vSphere that is used in vCloud to connect to the outside world. An external network can be connected to multiple organization networks. External networks are not virtualized and are based on existing port groups on a vSwitch or Distributed vSwitch.
- An organization network (Org Net) is a network that exists only inside one organization. You can have multiple Org Nets in an organization. Organization networks come in three different shapes:
  - Isolated: An isolated Org Net exists only in this organization and is not connected to an external network; however, it can be connected to vApp networks or VMs. This network type uses network virtualization and its own network settings.
  - Routed network (Edge Gateway): An Org Net that connects to an existing Edge device. An Edge Gateway allows you to define firewall and NAT rules, as well as VPN connections and load balancing functionality. Routed gateways connect external networks to vApp networks and/or VMs. This network uses virtualized networks and its own network settings.
  - Directly connected: These Org Nets are an extension of an external network into the organization. They directly connect external networks to vApp networks or VMs. These networks do NOT use network virtualization, and they make use of the network settings of an external network.
- A vApp network is a virtualized network that only exists inside a vApp. You can have multiple vApp networks inside one vApp. A vApp network can connect to VMs and to Org networks, and it has its own network settings. When connecting a vApp network to an Org network, you can create a router between the vApp and the Org network that lets you define DHCP, firewall, and NAT rules, as well as static routing.

To create isolated networks, vCloud Director uses network pools. Network pools are collections of VLANs, port groups, and networks that use L2-in-L3 encapsulation. The content of these pools can be used by Org and vApp networks for network virtualization.

Network pools

There are four kinds of network pools that can be created:

- VXLAN: VXLAN networks are Layer 2 networks that are encapsulated in Layer 3 packets. VMware calls this Software-Defined Networking (SDN). VXLANs are automatically created by vCD; however, they don't work out of the box and require some extra configuration in vCloud Networking and Security (see later).
- Network isolation-backed: These are basically the same as VXLANs; however, they work out of the box and use MAC-in-MAC encapsulation. The difference is that VXLANs can transcend routers, while network isolation-backed networks can't.
- vSphere port group-backed: vCD will use pre-created port groups to build the vApp or organization networks. You need to pre-provision one port group for every vApp/Org network you would like to use.
- VLAN-backed: vCD will use a pool of VLAN numbers to automatically provision port groups on demand; however, you still need to configure the VLAN trunking. You will need to reserve one VLAN for every vApp/Org network you would like to use.

VXLANs and network isolation networks solve the problems of pre-provisioning and reserving a multitude of VLANs, which makes them extremely important. However, using port group or VLAN network pools can have additional benefits that we will explore later.

Types of vCloud networks

vCloud Director has basically three different network items. An external network is basically a port group in vSphere that is imported into vCloud. An Org network is an isolated network that exists only in an organization. The same is true for vApp networks; they exist only in vApps. The original article includes a diagram showing all possible connections between these network types. Let's play through the scenarios and see how one can use them.

Isolated vApp network

An isolated vApp network exists only inside a vApp. It is useful if one needs to test how VMs behave in a network, or to test using an IP range that is already in use (for example, production). The downside is that it is isolated, meaning it is hard to get information or software in and out. Have a look at the recipe for RDP (or SSH) forwarding into an isolated vApp to find some answers to this problem.

VMs directly connected to an external network

VMs inside a vApp are connected to a direct Org network, meaning they will be able to get IPs from the external network pool. Typically, these VMs are used for production, meaning that customers choose vCloud for fast provisioning of predefined templates. As vCloud manages the IPs for a given IP range, it can be quite easy to fast-provision a VM.

vApp network connected via vApp router to an external network

VMs are connected to a vApp network that has a vApp router defined as its gateway. The gateway connects to a direct Org network, meaning that the gateway will automatically be given an IP from the external network pool. These configurations come in handy to reduce the amount of "physical" networking that has to be done. The vApp router can act as a router with defined firewall rules, and it can do SNAT and DNAT as well as define static routing. So instead of using up a "physical" VLAN or subnet, one can hide away applications this way. As an added benefit, these applications can be used as templates for fast deployment.

VMs directly connected to an isolated Org network

VMs are connected directly to an isolated Org network. Connecting VMs directly to an isolated network normally only makes sense if there is more than one vApp/VM connected to the Org network. This is an extension of the isolated vApp concept. Say you need to repeatedly test complex applications that require certain infrastructure, such as Active Directory, DHCP, DNS, database, or Exchange servers. Instead of deploying large isolated vApps that contain these, you could deploy them in one vApp and connect them via an isolated Org network directly to the vApp that contains your testing VMs. This makes it possible to reuse this base infrastructure. By using sharing, you can even hide the infrastructure vApp from your users.

vApp connected via vApp router to an isolated Org network

VMs are connected to a vApp network that has a vApp router as its gateway. The vApp router automatically gets its IP from the Org network pool. This is basically a variant of the previous idea.
A test vApp or an infrastructure vApp can be packaged this way and made ready for fast deployment.

VMs connected directly to an Edge

VMs are directly connected to the Edge Org network and get their IPs from the Org network pool. Their gateway is the Edge device that connects them to the external networks through the Edge firewall. A very typical setup is using the Edge load balancing feature to load balance VMs out of a vApp via the Edge. Another is that the organization is secured using the Edge Gateway against other organizations that use the same external network. This is mostly the case if the external network is the Internet and each organization is an external customer.

vApp connected to an Edge via vApp router

VMs are connected to a vApp network that has the vApp router as its gateway. The vApp router will automatically get an IP from the Org network, which has the Edge as its gateway. This is a more complicated variant of the above scenario, allowing customers to package their VMs, secure them against other vApps or VMs, or subdivide their allocated networks.

IP management

Let's have a look at IP management in vCloud. vCloud knows about three different settings for IP management of VMs:

- DHCP: You need to provide a DHCP server; vCloud doesn't automatically create one. However, a vApp router or an Edge can create one.
- Static – IP Pool: The IP for the VM comes from the static IP pool of the network it is connected to. In addition, the DNS and domain suffix settings will be written to the VM.
- Static – Manual: The IP can be defined on the spot; however, it must be in the network defined by the gateway and the network mask of the network the VM is connected to. In addition, the DNS and domain suffix settings will be written to the VM.

All these settings require Guest Customization to be effective. If no Guest Customization is selected, they don't work, and whatever the VM was configured with as a template will be used.

vSphere and vCloud vApps

One thing that needs to be said about vApps is that they actually come in two completely different versions: the vCenter vApp and the vCloud vApp.

The vSphere vApp concept was introduced in vSphere 4.0 as a container for VMs. In vSphere, a vApp is essentially a resource pool with some extras, such as a starting and stopping order and (if you configured it) a network IP allocation method. The idea is to have an entity of VMs that build one unit. Such a vApp can then be exported or imported using the OVF format. A very good example of a vApp is VMware Operations Manager. It comes as a vApp in OVF format and contains not only the VMs but also the start-up sequence, as well as some setup scripts. When the vApp is deployed the first time, additional information, such as network settings, is requested and then implemented. As a vSphere vApp is a resource pool, it can be configured so that it will only demand the resources that it is using; on the other hand, resource pool configuration is something that most people struggle with. A vSphere vApp is ONLY a resource pool; it is not automatically a folder in the Folder and Template view of vSphere, but is shown there again as a vApp.

The vCloud vApp is a very different concept; first of all, it is not a resource pool. The VMs of a vCloud vApp live in the OvDC resource pool. However, the vCloud vApp is automatically a folder in the Folder and Template view of vSphere. It is a construct that is created by vCloud, and it consists of VMs, a start and stop sequence, and networks. The network part is one of the major differences (next to the resource pool).
In vSphere, only network information, such as how IPs get assigned, and settings such as gateway and DNS, are given to the vApp; a vCloud vApp actually encapsulates networks. The vCloud vApp networks are full networks, meaning they contain the full information for a given network, including network settings and IP pools (for more details, see the last article). This information is kept when importing and exporting vCloud vApps. When I talk about vApps in this book, I will always mean vCloud vApps; where the vCenter vApp feature is meant, it will be written as vCenter vApp.

Datastores, profiles, and clusters

I probably don't have to explain what a datastore is, but here is a short intro just in case. A datastore is a VMware object that exists in ESXi. This object can be a hard disk that is attached to an ESXi server, an NFS or iSCSI mount on an ESXi host, or a Fibre Channel disk that is attached to an HBA on the ESXi server.

A storage profile is a container that holds one or more datastores. A storage profile doesn't have any intelligence implemented; it just groups the storage. However, it is extremely beneficial in vCloud: if you run out of storage on a datastore, you can just add another datastore to the same storage profile and you're back in business.

Datastore clusters are again containers for datastores, but now there is intelligence included. A datastore cluster can use Storage DRS, which allows VMs to automatically use Storage vMotion to move from one datastore to another if the I/O latency is high or the storage is low. Depending on your storage backend system, this can be extremely useful.

vCloud Director doesn't know the difference between a storage profile and a datastore cluster. If you add a datastore cluster, vCloud will pick it up as a storage profile, but that's OK because it's not a problem at all.

Be aware that storage profiles are part of vSphere Enterprise Plus licensing. If you don't have Enterprise Plus, you won't get storage profiles, and the only thing you can do in vCloud is use the storage profile ANY, which doesn't contribute to productivity.

Thin provisioning

Thin provisioning means that the file that contains the virtual hard disk (.vmdk) is only as big as the amount of data written to the virtual hard disk. As an example, if you have a 40 GB hard disk attached to a Windows VM and have just installed Windows on it, you are using around 2 GB of the 40 GB disk. When using thin provisioning, only 2 GB will be written to the datastore, not 40 GB. If you don't use thin provisioning, the .vmdk file will be 40 GB big. If your storage vendor's Storage APIs are integrated into your ESXi servers, thin provisioning may be offloaded to your storage backend, making it even faster.

Fast provisioning

Fast provisioning is similar to the linked clones that you may know from Lab Manager or VMware View. However, in vCloud they are a bit more intelligent than in the other products: in the other products, linked clones can NOT be deployed across different datastores, but in vCloud they can.

Let's talk about how linked clones work. If you have a VM with a hard disk of 40 GB and you clone that VM, you would normally have to spend another 40 GB (not using thin provisioning). Using linked clones, you will not need another 40 GB, but less. What happens, in layman's terms, is that vCloud creates two snapshots of the original VM's hard disk. A snapshot contains only the differences between the original and the snapshot.
The original hard disk (.vmdk file) is set to read-only; the first snapshot is connected to the original VM, so that one can still work with the original VM, and the second snapshot is used to create the new VM. Using snapshots makes deploying a VM with fast provisioning not only fast, but also saves a lot of disk space.

The problem with this is that a snapshot must be on the same datastore as its source. So if you have a VM on one datastore, its linked clone cannot be on another. vCloud has solved that problem by deploying a shadow VM. When you deploy a VM with fast provisioning onto a different datastore than its source, vCloud creates a full clone (a normal full copy) of the VM onto the new datastore and then creates a linked clone from the shadow VM. If your storage vendor's Storage APIs are integrated into your ESXi servers, fast provisioning may be offloaded to your storage backend, making it faster. See also the recipe "Making NFS based datastores faster".

Summary

In this article, we covered vCloud networks; vSphere and vCloud vApps; and datastores, profiles, and clusters.


Features of CloudFlare

Packt
10 Sep 2013
5 min read
Top 5 features you need to know about

Here we will go over the various security, performance, and monitoring features CloudFlare has to offer.

Malicious traffic

Any website is susceptible to attacks from malicious traffic. Some attacks might try to take down a targeted website, while others may try to inject their own spam. Worse attacks might even try to trick your users into providing information or compromise user accounts. CloudFlare has tools available to mitigate various types of attacks.

Distributed denial of service

A common attack on the Internet is the denial-of-service attack, which involves producing so many requests for a service that it cannot fulfill them and crumbles under the load. A common way this is achieved in practice is by having the attacker make a server request but never listen for the response. Typically, the client sends a response notifying the server that it received data; if a client does not acknowledge, the server will keep trying for quite a while. A single client could send thousands of these requests per second, and the server would not be able to handle many at once. Another twist on these attacks is the distributed denial-of-service (DDoS) attack, which is spread across many machines, making it difficult to tell where the attacks are coming from.

CloudFlare can help with this because it can detect when users are attempting an attack and reject access, or require a captcha challenge to gain access. It also monitors all of its customers for this, so if there is an attack happening on another CloudFlare site, it can protect yours from the traffic attacking that site as well. It is a difficult problem to solve; sometimes traffic just spikes when a big news article runs, and it is hard to tell when it's legitimate traffic and when it is an attack. For this, CloudFlare offers multiple levels of DoS protection, configured on the Security tab of the CloudFlare settings. The basic settings are rolled into the Basic protection level setting.

SQL injection

SQL injection is a more involved attack. On a web page, you may have a field such as a username/password field, which will probably be checked against a database for validity. The database queries that do this are simple text strings. This means that if the query is not written in a way that explicitly prevents it, an attacker can start writing their own queries. A site that is not equipped to handle these cases would be susceptible to hackers destroying data, gaining access by pretending to be other users, or accessing data they otherwise would not have access to. It is a difficult problem to check for when building software; even big companies have had issues.
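To make the mechanism concrete, here is a hypothetical illustration (not CloudFlare's code; the table, field, and db names are invented) of how a query built by string concatenation goes wrong, next to the parameterized form that keeps input out of the SQL text:

```js
// Attacker-controlled input for a "username" field.
var username = "alice' OR '1'='1";

// Vulnerable: the input is spliced into the SQL text itself.
var query = "SELECT * FROM users WHERE name = '" + username + "'";
// => SELECT * FROM users WHERE name = 'alice' OR '1'='1'
// The WHERE clause is now always true, matching every user.

// Safer: a parameterized query; placeholder syntax varies by driver.
// db.query("SELECT * FROM users WHERE name = ?", [username], callback);
```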
CloudFlare mitigates this by looking for requests containing things that look like database queries. Almost no websites take in raw database commands as normal queries, so CloudFlare can search for suspicious traffic and prevent it from accessing your page.

Cross-site scripting

Cross-site scripting is similar to SQL injection, except that it deals with JavaScript rather than database SQL. If you have a site that has comments, for example, an unprotected site might allow a hacker to put their own JavaScript on it. Any other user of the site could then execute that JavaScript, which could do things like sniff for passwords or even credit card information. CloudFlare prevents this in a similar fashion, by looking for requests that contain JavaScript and blocking them.

Open ports

Often, services can be available on a server without the sysadmin knowing about it. If Telnet is allowed, for example, an attacker could simply log in to the system and start checking out source code, looking into the database, or taking down the website. CloudFlare acts as a firewall to ensure that such ports are blocked even if the server has them open.

Challenge page

When CloudFlare receives a request from a suspect user, it will usually show a challenge page asking the user to fill out a captcha to access the site. The options for customizing these settings are on the Security Settings tab, and you can configure how the challenge page looks by clicking on Customize.

E-mail address obfuscation

E-mail address obfuscation scrambles any e-mail addresses on your page, then runs some JavaScript to decode them so that the text ends up being readable. This is nice for keeping spam out of your users' inboxes, but the downside is that a user who has JavaScript disabled will not be able to read the e-mail addresses.

Summary

In this article, we have looked at the various security features provided by CloudFlare: protection against malicious traffic and distributed denial-of-service attacks, e-mail address obfuscation, and so on. CloudFlare is therefore one of the better options available today for protecting and speeding up a website.


Understanding the big picture

Packt
04 Sep 2013
7 min read
So we've got this thing for authentication and authorization. Let's see who is responsible for what.

There is an AccessDecisionManager which, as the name suggests, is responsible for deciding whether we can access something or not; if not, an AccessDeniedException or InsufficientAuthenticationException is thrown. AuthenticationManager is another crucial interface, responsible for confirming who we are. Both are just interfaces, so we can swap in our own implementations if we like.

In a web application, the job of talking with these two components and the user is handled by a web filter called DelegatingFilterProxy, which is decomposed into several small filters. Each one is responsible for a different thing, so we can turn them on or off, or put our own filters in between and mess with them any way we like. These filters are quite important, and we will dig into them later. For the big picture, all we need to know is that they take care of all the talking, redirect the user to the login page (or an access-denied page), and save the current user details in an HTTP session. Well, the last part, while true, is a bit misleading: user details are kept in a SecurityContext object, which we can get hold of by calling SecurityContextHolder.getContext(), and which in the end is stored in the HTTP session by our filters. But we had promised a big picture, not the gory details. Quite simple, right?

If we have an authentication protocol without login and password, it works in a similar way; we just switch one of the filters, or the authentication manager, to a different implementation. If we don't have a web application, we just need to do the talking ourselves.

But this is all for web resources (URLs). What is much more interesting and useful is securing calls to methods. It looks, for example, like this:

```java
@PreAuthorize("isAuthenticated() and hasRole('ROLE_ADMIN')")
public void somethingOnlyAdminCanDo() {}
```

Here, we decided that somethingOnlyAdminCanDo will be protected by our AccessDecisionManager and that the user must be authenticated (not anonymous) and has to have an admin role. Can a user be anonymous and have an admin role at the same time? In theory, yes, but it would not make any sense, because it's much cheaper to check whether he is authenticated and stop right there. We see a bit of optimization here: we could drop the isAuthenticated() call and the behavior wouldn't change.

We can put this kind of annotation on any Java method, but our configuration and the mechanism that fires up the security will depend on the type of objects we are trying to protect. For objects declared as Spring beans (which is a short name for anything defined in our Inversion of Control (IoC) configuration, either via XML or annotations), we don't need to do much: Spring will just create proxies (dynamic classes) that take over calls to our secured methods and fire up the AccessDecisionManager before passing the call to the object we really wanted to call. For objects outside of the IoC container (anything created with new, or just code not defined in the Spring context), we can use the power of Aspect Oriented Programming (AOP) to get the same effect. If you don't know what AOP is, don't worry; it's just a bit of magic at the classloader and bytecode level. For now, the only important thing is that it works basically in the same way. We can do much more than this, as we'll see next, but these are the basics.
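To make the proxying concrete, here is a minimal sketch of a secured Spring bean (the class name is illustrative; the method is the example above). Because AdminService is a Spring bean, callers actually receive a proxy that consults the AccessDecisionManager before the method body ever runs:

```java
import org.springframework.security.access.prepost.PreAuthorize;
import org.springframework.stereotype.Service;

@Service
public class AdminService {

    // Spring's generated proxy intercepts this call and asks the
    // AccessDecisionManager for a verdict before the body executes.
    @PreAuthorize("isAuthenticated() and hasRole('ROLE_ADMIN')")
    public void somethingOnlyAdminCanDo() {
        // business logic runs only if the security check passes
    }
}
```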
So, how does the AccessDecisionManager decide whether we can access something or not? Imagine a council of very old Jedi masters sitting around a fire. They decide whether or not you are permitted to call a secured method or access a web resource. Each of these masters makes a decision or abstains, and each of them can consult additional information (not only who you are and what you want to do, but every aspect of the situation). In Spring Security, those wise people are called AccessDecisionVoters, and each of them has one vote.

The council can be organized in many different ways. It may make its decision based on a majority of votes. It may be veto-based, where everything is allowed unless someone disagrees. Or it may need everyone to agree to grant access, otherwise access is denied. The council is the AccessDecisionManager, and the three organizations just described are available out of the box. We can also decide who's in the council and who is not. This is probably the most important decision we can make, because it determines the security model that we will use in our application. Let's talk about the most popular counselors (implementations of AccessDecisionVoter):

- Model based on roles (RoleVoter): This voter makes its decision based on the role of the user and the role required for the resource or method. So if we write @PreAuthorize("hasRole('ROLE_ADMIN')"), you had better be a damn admin or you'll get a no-no from this voter.
- Model based on entity access control permissions (AclEntryVoter): This voter doesn't worry about roles; it is much more than that. Acl stands for Access Control List, which represents a list of permissions. Every user has a list of permissions, possibly for every domain object (usually an object in the database) that you want to secure. So, for example, if we have a bank application, the supervisor can give Frank access to a single specific customer (say, ACME, A Company that Makes Everything), which is represented as an entity in the database and as an object in our system. No other employee will be able to do anything to that customer unless the supervisor grants that person the same permission as Frank. This is probably the most scrutinous voter we would ever use, and our customers can have a very detailed configuration with it. On the other hand, it is also the most cumbersome, as we need to create a usable graphical interface to set permissions for every user and every domain object. While we have done this a few times, most of our customers wanted a simpler approach, and even those who started with a graphical user interface to configure everything asked for a simplified version based on business rules at the end of the project. If your customer describes his security needs in terms of rules such as "Frank can edit every customer he has created, but he cannot do anything other than view other customers", it means it's time for PreInvocationAuthorizationAdviceVoter.
- Business rules model (PreInvocationAuthorizationAdviceVoter): This is usually used when you want to implement static business rules in the application, such as "if I've written a blog post, I can change it later, but others can only comment" and "if a friend asked me to help him write the blog post, I can do that, because I'm his friend". Most of these things are also possible to implement with ACLs, but would be very cumbersome. This is our favorite voter. With it, it's very easy to write, test, and change the security restrictions, because instead of writing every possible relation in the database (as with the ACL voter) or having only dumb roles, we write our security logic in plain old Java classes. Great stuff and most useful, once you see how it works.
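As a rough sketch of what "security logic in plain old Java classes" can look like, and a preview of writing voters yourself (which comes up below), here is a hypothetical custom voter against the Spring Security 3.x API. The Customer class and its getCreatedBy() accessor are invented for illustration:

```java
import java.util.Collection;

import org.springframework.security.access.AccessDecisionVoter;
import org.springframework.security.access.ConfigAttribute;
import org.springframework.security.core.Authentication;

// Business rule: only the user who created a customer may act on it.
public class CustomerOwnerVoter implements AccessDecisionVoter<Customer> {

    public boolean supports(ConfigAttribute attribute) {
        return true; // a real voter would usually filter attributes here
    }

    public boolean supports(Class<?> clazz) {
        return Customer.class.isAssignableFrom(clazz);
    }

    public int vote(Authentication authentication, Customer customer,
                    Collection<ConfigAttribute> attributes) {
        if (customer.getCreatedBy().equals(authentication.getName())) {
            return ACCESS_GRANTED;
        }
        return ACCESS_ABSTAIN; // let other voters, such as RoleVoter, decide
    }
}
```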
Did we mention that this is a council? Yes, we did. The result is that we can mix any voters we want and choose any council organization we like. We can have all three voters previously mentioned and allow access if any of them says "yes". There are even more voters, and we can write new ones ourselves, as the sketch above illustrates. Do you feel the power of the Jedi council already?

Summary

This article provided an overview of authentication and authorization, which are the principles of Spring Security.

Defining the Application's Policy File

Packt
23 Aug 2013
21 min read
The AndroidManifest.xml file

All Android applications need to have a manifest file. This file has to be named AndroidManifest.xml and has to be placed in the application's root directory. The manifest file is the application's policy file: it declares the application components, their visibility, access rules, libraries, features, and the minimum Android version that the application runs against. The Android system uses the manifest file for component resolution. Thus, the AndroidManifest.xml file is by far the most important file in the entire application, and special care is required when defining it to tighten up the application's security.

The manifest file is not extensible, so applications cannot add their own attributes or tags. The complete list of tags, with how these tags can be nested, is as follows:

```xml
<?xml version="1.0" encoding="utf-8"?>
<manifest>
    <uses-permission />
    <permission />
    <permission-tree />
    <permission-group />
    <instrumentation />
    <uses-sdk />
    <uses-configuration />
    <uses-feature />
    <supports-screens />
    <compatible-screens />
    <supports-gl-texture />
    <application>
        <activity>
            <intent-filter>
                <action />
                <category />
                <data />
            </intent-filter>
            <meta-data />
        </activity>
        <activity-alias>
            <intent-filter>
            </intent-filter>
            <meta-data />
        </activity-alias>
        <service>
            <intent-filter>
            </intent-filter>
            <meta-data />
        </service>
        <receiver>
            <intent-filter>
            </intent-filter>
            <meta-data />
        </receiver>
        <provider>
            <grant-uri-permission />
            <meta-data />
            <path-permission />
        </provider>
        <uses-library />
    </application>
</manifest>
```

Only two tags, <manifest> and <application>, are required. There is no specific order in which to declare components.

The <manifest> tag declares application-specific attributes. It is declared as follows:

```xml
<manifest package="string"
    android:sharedUserId="string"
    android:sharedUserLabel="string resource"
    android:versionCode="integer"
    android:versionName="string"
    android:installLocation=["auto" | "internalOnly" | "preferExternal"] >
</manifest>
```

An example of the <manifest> tag is shown in the following code snippet. In this example, the package is named com.android.example, the internal version is 10, and the user sees this version as 2.7.0. The install location is decided by the Android system based on where it has room to store the application.

```xml
<manifest package="com.android.example"
    android:versionCode="10"
    android:versionName="2.7.0"
    android:installLocation="auto" >
```

The attributes of the <manifest> tag are as follows:

- package: The name of the package. This is the Java-style namespace of your application, for example, com.android.example, and it is the unique ID of your application. If you change the name of a published application, it is considered a new application and auto updates will not work.
- android:sharedUserId: This attribute is used if two or more applications share the same Linux ID. It is discussed in detail in a later section.
- android:sharedUserLabel: The user-readable name of the shared user ID, which only makes sense if android:sharedUserId is set. It has to be a string resource.
- android:versionCode: The version code used internally by the application to track revisions. This code is referred to when updating an application with a more recent version.
- android:versionName: The version of the application shown to the user. It can be set as a raw string or as a reference, and is only used for display to users.
The <application> tag is defined as follows:

<application android:allowTaskReparenting=["true" | "false"]
    android:backupAgent="string"
    android:debuggable=["true" | "false"]
    android:description="string resource"
    android:enabled=["true" | "false"]
    android:hasCode=["true" | "false"]
    android:hardwareAccelerated=["true" | "false"]
    android:icon="drawable resource"
    android:killAfterRestore=["true" | "false"]
    android:largeHeap=["true" | "false"]
    android:label="string resource"
    android:logo="drawable resource"
    android:manageSpaceActivity="string"
    android:name="string"
    android:permission="string"
    android:persistent=["true" | "false"]
    android:process="string"
    android:restoreAnyVersion=["true" | "false"]
    android:supportsRtl=["true" | "false"]
    android:taskAffinity="string"
    android:theme="resource or theme"
    android:uiOptions=["none" | "splitActionBarWhenNarrow"] >
</application>

An example of the <application> tag is shown in the following code snippet. In this example, the application name, description, icon, and label are set. The application is not debuggable, and the Android system can instantiate its components.

<application android:label="@string/app_name"
    android:description="@string/app_desc"
    android:icon="@drawable/example_icon"
    android:enabled="true"
    android:debuggable="false">
</application>

Many attributes of the <application> tag serve as default values for the components declared within the application, including permission, process, icon, and label. Other attributes, such as debuggable and enabled, are set for the entire application. The attributes of the <application> tag are discussed as follows:

android:allowTaskReparenting: This value can be overridden by the <activity> element. It allows an Activity to be re-parented to the Activity it has affinity with when that task is brought to the foreground.

android:backupAgent: This attribute contains the name of the backup agent for the application.

android:debuggable: When set to true, this attribute allows an application to be debugged. It should always be set to false before releasing the app to the market.

android:description: This is the user-readable description of the application, set as a reference to a string resource.

android:enabled: If this attribute is set to true, the Android system can instantiate application components. This value can be overridden by components.

android:hasCode: If this attribute is set to true, the application will try to load some code when launching components.

android:hardwareAccelerated: When set to true, this attribute allows an application to use hardware-accelerated rendering. It was introduced at API level 11.

android:icon: This is the application icon, as a reference to a drawable resource.

android:killAfterRestore: If this attribute is set to true, the application will be terminated once its settings are restored during a full-system restore.

android:largeHeap: This attribute lets the Android system create a large Dalvik heap for the application. It increases the memory footprint of the application, so it should be used sparingly.

android:label: This is the user-readable label for the application.

android:logo: This is the logo for the application.

android:manageSpaceActivity: This value is the name of the Activity that manages the memory for the application.

android:name: This attribute contains the fully qualified name of the subclass that will be instantiated before any other component is started.

android:permission: This attribute can be overridden by a component; it sets the permission that a client must have to interact with the application.

android:persistent: Usually used by system applications, this attribute allows the application to be running at all times.

android:process: This is the name of the process in which a component will run. It can be overridden by any component's android:process attribute.

android:restoreAnyVersion: This attribute lets the backup agent attempt a restore even if the currently stored backup was made by a newer version of the application than the one attempting the restore.

android:supportsRtl: When set to true, this attribute enables support for right-to-left layouts. It was added at API level 17.

android:taskAffinity: This attribute gives all activities an affinity with the package name, unless an Activity sets it explicitly.

android:theme: This is a reference to the style resource for the application.

android:uiOptions: If this attribute is set to none, there are no extra UI options; if set to splitActionBarWhenNarrow, a bar is set at the bottom when the screen is constrained.
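Since android:debuggable is one of the most security-sensitive attributes listed above, it can be worth checking at runtime that a build was not shipped with it enabled. The following is a minimal sketch, not from the original article, that reads the flag from ApplicationInfo:

import android.content.Context;
import android.content.pm.ApplicationInfo;
import android.util.Log;

public final class DebugCheck {
    public static boolean isDebuggable(Context context) {
        ApplicationInfo info = context.getApplicationInfo();
        // FLAG_DEBUGGABLE reflects the manifest's android:debuggable value
        boolean debuggable = (info.flags & ApplicationInfo.FLAG_DEBUGGABLE) != 0;
        if (debuggable) {
            Log.w("DebugCheck", "Application is debuggable; disable before release");
        }
        return debuggable;
    }
}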
In the following sections we will discuss how to handle specific requirements using the policy file.

Application policy use cases

This section discusses how to define application policies using the manifest file. Each use case below is followed by how to implement it in the policy file.

Declaring application permissions

An application on the Android platform has to declare the resources it intends to use for its proper functioning. These are the permissions that are displayed to the user when they download the application. Application permissions should be descriptive so that users can understand them. Also, as is the general rule with security, it is important to request the minimum permissions required. Application permissions are declared in the manifest file using the <uses-permission> tag. An example of a location-based manifest file that uses the GPS for retrieving location is shown in the following code snippet:

<uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
<uses-permission android:name="android.permission.ACCESS_LOCATION_EXTRA_COMMANDS" />
<uses-permission android:name="android.permission.ACCESS_MOCK_LOCATION" />
<uses-permission android:name="android.permission.INTERNET" />

These permissions are displayed to users when they install the app, and can always be checked under Application in the settings menu. These permissions are seen in the following screenshot:

Declaring permissions for external applications

The manifest file also declares the permissions that an external application (one that does not run with the same Linux ID) needs in order to access the application's components. These can be declared in one of two places in the policy file: in the <application> tag, or along with the component in the <activity>, <provider>, <receiver>, and <service> tags. If there are permissions that all components of an application require, it is easiest to specify them in the <application> tag. If a component requires some specific permission, those can be defined in the specific component tag. Remember, only one permission can be declared in any of these tags.

If a component is protected by a permission, the component permission overrides the permission declared in the <application> tag. The following is an example of an application that requires external applications to have android.permission.ACCESS_COARSE_LOCATION to access its components and resources:

<application android:allowBackup="true"
    android:icon="@drawable/ic_launcher"
    android:label="@string/app_name"
    android:permission="android.permission.ACCESS_COARSE_LOCATION">

If a Service requires that any application component that accesses it should have access to external storage, it can be defined as follows:

<service android:enabled="true"
    android:name=".MyService"
    android:permission="android.permission.WRITE_EXTERNAL_STORAGE">
</service>

If a policy file contains both of the preceding tags, then when an external component makes a request to this Service, it must hold android.permission.WRITE_EXTERNAL_STORAGE, as this permission overrides the permission declared in the application tag.
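As a defense-in-depth complement to these manifest declarations, a component can also verify at runtime that its caller actually holds the required permission. The following is a minimal sketch, not from the original article, using the standard Context API:

import android.content.Context;
import android.content.pm.PackageManager;

public final class PermissionGuard {
    public static void requireCallerPermission(Context context, String permission) {
        // checkCallingOrSelfPermission returns PERMISSION_GRANTED or PERMISSION_DENIED
        if (context.checkCallingOrSelfPermission(permission)
                != PackageManager.PERMISSION_GRANTED) {
            throw new SecurityException("Caller lacks required permission: " + permission);
        }
    }
}

Usage, for the Service above: requireCallerPermission(context, "android.permission.WRITE_EXTERNAL_STORAGE").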
Applications running with the same Linux ID

Sharing data between applications is always tricky. It is not easy to maintain data confidentiality and integrity. Proper access control mechanisms have to be put in place, based on who has access to how much data. In this section, we will discuss how to share application data with internal applications (applications signed by the same developer key).

Android is a layered architecture with application isolation enforced by the operating system itself. Whenever an application is installed on an Android device, the Android system gives it a unique user ID defined by the system. Notice that the two applications, example1 and example2, in the following screenshot run as separate user IDs, app_49 and app_50.

However, an application can request a user ID of its choice from the system. Another application can then request the same user ID as well. This creates tight coupling and does not require components to be made visible to the other application, or shared content providers to be created. This kind of tight coupling is configured in the manifest tags of all applications that want to run in the same process. The following is a snippet of the manifest files of two applications, com.example.example1 and com.example.example2, that use the same user ID:

<manifest package="com.example.example1"
    android:versionCode="1"
    android:versionName="1.0"
    android:sharedUserId="com.sharedID.example">

<manifest package="com.example.example2"
    android:versionCode="1"
    android:versionName="1.0"
    android:sharedUserId="com.sharedID.example">

The following screenshot is displayed when these two applications are running on the device. Notice that the applications com.example.example1 and com.example.example2 now both have the app ID app_113.

You will notice that the shared UID follows a certain format akin to a package name. Any other naming convention will result in an installation error such as INSTALL_PARSE_FAILED_BAD_SHARED_USER_ID. All applications that share the same UID must be signed with the same certificate.
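Before trusting the shared-UID relationship, an application can confirm programmatically that another package really is signed with the same certificate. The following is a minimal sketch, not from the original article; the package name passed in is a placeholder:

import android.content.Context;
import android.content.pm.PackageManager;

public final class SignatureCheck {
    public static boolean sameSigner(Context context, String otherPackage) {
        PackageManager pm = context.getPackageManager();
        // SIGNATURE_MATCH means both packages are signed with the same certificate
        return pm.checkSignatures(context.getPackageName(), otherPackage)
                == PackageManager.SIGNATURE_MATCH;
    }
}

Usage: sameSigner(context, "com.example.example2").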
External storage

Starting with API level 8, Android provides support for storing Android applications (APK files) on external devices, such as an SD card. This helps to free up internal phone memory. Once the APK is moved to external storage, the only memory taken up by the app is the private data of the application stored on internal memory. It is important to note that even for SD card resident APKs, the DEX (Dalvik Executable) files, private data directories, and native shared libraries remain on internal storage.

This feature is enabled by adding the optional attribute android:installLocation to the <manifest> element of the application's manifest file. The application info screen for such an application has either a Move to SD card or a Move to phone button, depending on the current storage location of the APK. The user then has the option to move the APK file accordingly.

If the external device is unmounted, or the USB mode is set to Mass Storage (where the device is used as a disk drive), all running activities and services hosted on that external device are immediately killed.

The attribute android:installLocation can have the following three values:

internalOnly: The Android system will install the application on internal storage only. In case of insufficient internal memory, storage errors are returned.

preferExternal: The Android system will try to install the application on external storage. If there is not enough external storage, the application will be installed on internal storage. The user will have the ability to move the app between external and internal storage as desired.

auto: This option lets the Android system decide the best install location for the application. The default system policy is to install the application on internal storage first. If the system is running low on internal memory, the application is then installed on external storage. The user will have the ability to move the application between external and internal storage as desired.

Note that on devices running a version of Android lower than 2.2, the system ignores this attribute and the APK is installed on internal memory only. The following is a code snippet from an application's manifest file with this option:

<manifest package="com.example.android"
    android:versionCode="10"
    android:versionName="2.7.0"
    android:installLocation="auto" >

The following is a screenshot of an application with the manifest file as specified previously. You will notice that Move to SD card is enabled in this case. In another application, where android:installLocation is not set, Move to SD card is disabled, as shown in the second screenshot.
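Because components hosted on external storage are killed when the media disappears, an app that depends on the SD card can check the media state before relying on it. The following is a minimal sketch, not from the original article, using the standard Environment API:

import android.os.Environment;
import android.util.Log;

public final class StorageCheck {
    public static boolean isExternalStorageAvailable() {
        String state = Environment.getExternalStorageState();
        // MEDIA_MOUNTED means the media is present and writable
        boolean available = Environment.MEDIA_MOUNTED.equals(state);
        if (!available) {
            Log.w("StorageCheck", "External storage unavailable, state=" + state);
        }
        return available;
    }
}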
Setting component visibility

Any of the application components, namely activities, services, providers, and receivers, can be made discoverable to external applications. This section discusses the nuances of such scenarios.

Any Activity or Service can be made private by setting android:exported="false". This is also the default value for an Activity. See the following two examples of a private Activity:

<activity android:name=".Activity1" android:exported="false" />
<activity android:name=".Activity2" />

However, if you add an Intent Filter to the Activity, the Activity becomes discoverable for the Intent declared in the Intent Filter. Thus, an Intent Filter should never be relied upon as a security boundary. See the following examples of Intent Filter declarations:

<activity android:name=".Activity1" android:label="@string/app_name" >
    <intent-filter>
        <action android:name="android.intent.action.MAIN" />
        <category android:name="android.intent.category.LAUNCHER" />
    </intent-filter>
</activity>

<activity android:name=".Activity2">
    <intent-filter>
        <action android:name="com.example.android.intent.START_ACTIVITY2" />
    </intent-filter>
</activity>

Both activities and services can also be secured by an access permission that the external component must hold. This is done using the android:permission attribute of the component tag.

A Content Provider can be set up for private access by using android:exported="false". This is also the default value for a provider. In this case, only an application with the same user ID can access the provider. This access can be limited even further by setting the android:permission attribute of the provider tag.

A Broadcast Receiver can be made private by using android:exported="false". This is the default value of the receiver if it does not contain any Intent Filters; in this case, only components with the same user ID can send a broadcast to the receiver. If the receiver contains Intent Filters, it becomes discoverable, and the default value of android:exported is true.
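Note that keeping a component private only restricts external callers; within the same application, a private component remains reachable through an explicit Intent that names the target class directly. The following is a minimal sketch, not part of the original sample, that launches the private Activity2 from the example above:

import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;

public class MainActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Explicit Intents bypass intent resolution, so no Intent Filter is needed
        startActivity(new Intent(this, Activity2.class));
    }
}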
Debugging

During the development of an application, we usually set the application to debug mode. This lets developers see verbose logs and get inside the application to check for errors. Debug mode is enabled in the <application> tag by setting android:debuggable to true. To avoid security leaks, it is very important to set this attribute to false before releasing the application. Examples of sensitive information that I have seen exposed this way include usernames and passwords, memory dumps, internal server errors, and even some funny personal notes about the state of a server and a developer's opinion of a piece of code. The default value of android:debuggable is false.

Backup

Starting with API level 8, an application can choose a backup agent to back up its data to the cloud or to a server. This is set up in the manifest file in the <application> tag by setting android:allowBackup to true and then setting android:backupAgent to a class name. The default value of android:allowBackup is true, and an application can set it to false if it wants to opt out of backup. There is no default value for android:backupAgent, so a class name must be specified. The security implications of such a backup are debatable, as the services used to back up the data differ, and sensitive data such as usernames and passwords can be compromised.

Putting it all together

The following example puts all the learning we have done so far into analyzing the AndroidManifest.xml file provided with the Android SDK sample RandomMusicPlayer. The manifest file specifies that this is version 1 of the application com.example.android.musicplayer. It runs on SDK 14 but supports versions back to SDK 7. The application uses two permissions, namely android.permission.INTERNET and android.permission.WAKE_LOCK. The application has one Activity that is the entry point for the application, called MainActivity; one Service, called MusicService; and one receiver, called MusicIntentReceiver. MusicService defines the custom actions PLAY, REWIND, PAUSE, SKIP, STOP, and TOGGLE_PLAYBACK. The receiver uses the action intents android.media.AUDIO_BECOMING_NOISY and android.media.MEDIA_BUTTON defined by the Android system. None of the components are protected with permissions. An example AndroidManifest.xml file is shown in the following screenshot:

Example checklist

In this section, I have put together an example checklist that I suggest you refer to whenever you are ready to release a version of your application. This is a very general version, and you should adapt it according to your own use case and components. When creating a checklist, think about issues that relate to the entire application, issues that are specific to a component, and issues that might come up from setting the component and application specifications together.

Application level

In this section, I have listed some questions that you should be asking yourself as you define the application-specific preferences. They may affect how your application is viewed, stored, and perceived by users. Some application-level questions that you may like to ask are as follows:

Do you want to share resources with other applications that you have developed? Did you specify the unique user ID? Did you define this unique ID for another application, either intentionally or unintentionally?

Does your application require capabilities such as camera, Bluetooth, or SMS?

Does your application need all the permissions it declares? Is there another permission that is more restrictive than the one you have defined? Remember the principle of least privilege. Do all the components of your application need this permission, or only a few?

Check the spelling of all permissions once again. The application may compile and work even if a permission's spelling is incorrect.

If you have defined a permission, is it the correct one that you need?

At what API level does the application work? What is the minimum API level that your application can support?

Are there any external libraries that your application needs?

Did you remember to turn off the debug attribute before release?

If you are using a backup agent, remember to declare it here.

Did you remember to set a version number? This will help you during application upgrades. Do you want to set an auto upgrade?

Did you remember to sign the application with your release key?

Sometimes setting a particular screen orientation will not allow your application to be visible on certain devices. For example, if your application only supports portrait mode, it might not appear for devices with landscape mode only.

Where do you want to install the APK?

Are there any services that might cease to work if an intent is not received in time?

Do you want some other application-level settings, such as the ability of the system to restore components?

If defining a new permission, think twice about whether you really need it. Chances are there is already an existing permission that covers your use case.
Component level

Some component-level questions that you should think about in the policy are listed here. Ask these questions for each component:

Did you define all components?

If you are using third-party libraries in your application, did you define all the components that you will use? Is there a particular setting that the third-party library expects from your application?

Do you want this component to be visible to other applications? Do you need to add Intent Filters? If the component is not supposed to be visible, did you add Intent Filters anyway? Remember, as soon as you add Intent Filters, your component becomes visible.

Do other components require some special permission to trigger this component? Verify the spelling of the permission name.

Does your application require capabilities such as camera, Bluetooth, or SMS?

Summary

In this article, we learned how to define an application's policy file. The manifest file is the most important artifact of an application and should be defined with utmost care. This manifest file declares the permissions requested by an application and the permissions that external applications need in order to access its components. In the policy file we also define the storage location of the APK and the minimum SDK version against which the application will run. The policy file should expose only those components that are not security sensitive. At the end of this article, we discussed some sample issues that a developer should be aware of when writing a manifest file.

Resources for Article: Further resources on this subject: So, what is Spring for Android? [Article] Animating Properties and Tweening Pages in Android 3-0 [Article] New Connectivity APIs – Android Beam [Article]

Getting Started with Spring Security

Packt
14 Mar 2013
14 min read
(For more resources related to this topic, see here.)

Hello Spring Security

Although Spring Security can be extremely difficult to configure, the creators of the product have been thoughtful and have provided us with a very simple mechanism to enable much of the software's functionality with a strong baseline. From this baseline, additional configuration will allow a fine level of detailed control over the security behavior of our application.

We'll start with an unsecured calendar application and turn it into a site that's secured with rudimentary username and password authentication. This authentication serves merely to illustrate the steps involved in enabling Spring Security for our web application; you'll see that there are some obvious flaws in this approach that will lead us to make further configuration refinements.

Updating your dependencies

The first step is to update the project's dependencies to include the necessary Spring Security .jar files. Update the Maven pom.xml file from the sample application you imported previously, to include the Spring Security .jar files that we will use in the following few sections.

Remember that Maven will download the transitive dependencies for each listed dependency. So, if you are using another mechanism to manage dependencies, ensure that you also include the transitive dependencies. When managing the dependencies manually, it is useful to know that the Spring Security reference includes a list of its transitive dependencies.

pom.xml

<dependency>
    <groupId>org.springframework.security</groupId>
    <artifactId>spring-security-config</artifactId>
    <version>3.1.0.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework.security</groupId>
    <artifactId>spring-security-core</artifactId>
    <version>3.1.0.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework.security</groupId>
    <artifactId>spring-security-web</artifactId>
    <version>3.1.0.RELEASE</version>
</dependency>

Downloading the example code

You can download the example code files for all Packt books you have purchased from your account at https://www.packtpub.com. If you purchased this book elsewhere, you can visit https://www.packtpub.com/books/content/support and register to have the files e-mailed directly to you.

Using Spring 3.1 and Spring Security 3.1

It is important to ensure that all of the Spring dependency versions match and all the Spring Security versions match; this includes transitive versions. Since Spring Security 3.1 builds against Spring 3.0, Maven will attempt to bring in Spring 3.0 dependencies. This means that, in order to use Spring 3.1, you must ensure that you explicitly list the Spring 3.1 dependencies, or use Maven's dependency management features to ensure that Spring 3.1 is used consistently. Our sample applications provide an example of the former option, which means that no additional work is required by you.

In the following code, we present an example fragment of what is added to the Maven pom.xml file to utilize Maven's dependency management feature, to ensure that Spring 3.1 is used throughout the entire application:

<project ...>
    ...
    <dependencyManagement>
        <dependencies>
            <dependency>
                <groupId>org.springframework</groupId>
                <artifactId>spring-aop</artifactId>
                <version>3.1.0.RELEASE</version>
            </dependency>
            … list all Spring dependencies (a list can be found in our sample application's pom.xml) ...
            <dependency>
                <groupId>org.springframework</groupId>
                <artifactId>spring-web</artifactId>
                <version>3.1.0.RELEASE</version>
            </dependency>
        </dependencies>
    </dependencyManagement>
</project>
If you are using Spring Tool Suite, any time you update the pom.xml file, ensure you right-click on the project, navigate to Maven | Update Project…, and select OK to update all the dependencies.

Implementing a Spring Security XML configuration file

The next step in the configuration process is to create an XML configuration file representing all the Spring Security components required to cover standard web requests. Create a new XML file in the src/main/webapp/WEB-INF/spring/ directory with the name security.xml and the following contents. Among other things, the following file demonstrates how to require a user to log in for every page in our application, provide a login page, authenticate the user, and require the logged-in user to be associated with ROLE_USER for every URL:

src/main/webapp/WEB-INF/spring/security.xml

<?xml version="1.0" encoding="UTF-8"?>
<bean:beans xsi:schemaLocation="http://www.springframework.org/schema/beans
    http://www.springframework.org/schema/beans/spring-beans-3.1.xsd
    http://www.springframework.org/schema/security
    http://www.springframework.org/schema/security/spring-security-3.1.xsd">
    <http auto-config="true">
        <intercept-url pattern="/**" access="ROLE_USER"/>
    </http>
    <authentication-manager>
        <authentication-provider>
            <user-service>
                <user name="user1@example.com" password="user1" authorities="ROLE_USER"/>
            </user-service>
        </authentication-provider>
    </authentication-manager>
</bean:beans>

If you are using Spring Tool Suite, you can easily create Spring configuration files by using File | New Spring Bean Configuration File. This wizard allows you to select the XML namespaces you wish to use, making configuration easier by not requiring the developer to remember the namespace locations, and helping prevent typographical errors. You will need to manually change the schema definitions as illustrated in the preceding code.

This is the only Spring Security configuration required to get our web application secured with a minimal standard configuration. This style of configuration, using a Spring Security-specific XML dialect, is known as the security namespace style, named after the XML namespace (http://www.springframework.org/schema/security) associated with the XML configuration elements.

Let's take a minute to break this configuration apart, so we can get a high-level idea of what is happening. The <http> element creates a servlet filter, which ensures that the currently logged-in user is associated with the appropriate role. In this instance, the filter will ensure that the user is associated with ROLE_USER. It is important to understand that the name of the role is arbitrary. Later, we will create a user with ROLE_ADMIN and will allow this user to have access to additional URLs that our current user does not have access to.

The <authentication-manager> element is how Spring Security authenticates the user. In this instance, we utilize an in-memory data store to compare a username and password. Our example, and our explanation of what is happening, are a bit contrived: an in-memory authentication store would not work for a production environment. However, it allows us to get up and running quickly. We will incrementally improve our understanding of Spring Security as we update our application to use production-quality security.
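Once this configuration is active, application code can look up the authenticated user through Spring Security's thread-bound security context. The following is a minimal sketch, not part of the book's sample application, using the standard SecurityContextHolder API:

import org.springframework.security.core.Authentication;
import org.springframework.security.core.context.SecurityContextHolder;

public final class CurrentUser {
    public static String currentUsername() {
        Authentication auth = SecurityContextHolder.getContext().getAuthentication();
        // Before login (or for anonymous access), the authentication may be null
        return auth == null ? null : auth.getName();
    }
}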
Users who dislike Spring's XML configuration will be disappointed to learn that there isn't an alternative annotation-based or Java-based configuration mechanism for Spring Security, as there is with the Spring Framework. There is an experimental approach that uses Scala to configure Spring Security, but at the time of this writing, there are no known plans to release it. If you like, you can learn more about it at https://github.com/tekul/scalasec/. Still, perhaps in the future, we'll see the ability to easily configure Spring Security in other ways. Although annotations are not prevalent in Spring Security, certain aspects of Spring Security that apply security elements to classes or methods are, as you'd expect, available via annotations.

Updating your web.xml file

The next steps involve a series of updates to the web.xml file. Some of the steps have already been performed because the application was already using Spring MVC. However, we will go over these requirements to ensure that these more fundamental Spring requirements are understood, in the event that you are using Spring Security in an application that is not Spring-enabled.

ContextLoaderListener

The first step of updating the web.xml file is to ensure that it contains the o.s.w.context.ContextLoaderListener listener, which is in charge of starting and stopping the Spring root ApplicationContext interface. ContextLoaderListener determines which configurations are to be used by looking at the <context-param> tag for contextConfigLocation. It is also important to specify where to read the Spring configurations from. Our application already has ContextLoaderListener added, so we only need to add the newly created security.xml configuration file, as shown in the following code snippet:

src/main/webapp/WEB-INF/web.xml

<context-param>
    <param-name>contextConfigLocation</param-name>
    <param-value>
        /WEB-INF/spring/services.xml
        /WEB-INF/spring/i18n.xml
        /WEB-INF/spring/security.xml
    </param-value>
</context-param>
<listener>
    <listener-class>
        org.springframework.web.context.ContextLoaderListener
    </listener-class>
</listener>

The updated configuration will now load the security.xml file from the /WEB-INF/spring/ directory of the WAR. As an alternative, we could have used /WEB-INF/spring/*.xml to load all the XML files found in /WEB-INF/spring/. We chose not to use the *.xml notation in order to have more control over which files are loaded.

ContextLoaderListener versus DispatcherServlet

You may have noticed that o.s.web.servlet.DispatcherServlet specifies a contextConfigLocation component of its own:

src/main/webapp/WEB-INF/web.xml

<servlet>
    <servlet-name>Spring MVC Dispatcher Servlet</servlet-name>
    <servlet-class>
        org.springframework.web.servlet.DispatcherServlet
    </servlet-class>
    <init-param>
        <param-name>contextConfigLocation</param-name>
        <param-value>
            /WEB-INF/mvc-config.xml
        </param-value>
    </init-param>
    <load-on-startup>1</load-on-startup>
</servlet>

DispatcherServlet creates an o.s.context.ApplicationContext that is a child of the root ApplicationContext interface. Typically, Spring MVC-specific components are initialized in the ApplicationContext interface of DispatcherServlet, while the rest are loaded by ContextLoaderListener. It is important to know that beans in a child ApplicationContext (such as those created by DispatcherServlet) can reference beans of the parent ApplicationContext (such as those created by ContextLoaderListener). However, the parent ApplicationContext cannot refer to beans of the child ApplicationContext.
This is illustrated in the following diagram, where childBean can refer to rootBean, but rootBean cannot refer to childBean.

As with most usage of Spring Security, we do not need Spring Security to refer to any of the MVC-declared beans. Therefore, we have decided to have ContextLoaderListener initialize all of Spring Security's configuration.

springSecurityFilterChain

The next step is to configure springSecurityFilterChain to intercept all requests by updating web.xml. Servlet <filter-mapping> elements are considered in the order that they are declared. Therefore, it is critical for springSecurityFilterChain to be declared first, to ensure the request is secured prior to any other logic being invoked. Update your web.xml file with the following configuration:

src/main/webapp/WEB-INF/web.xml

</listener>
<filter>
    <filter-name>springSecurityFilterChain</filter-name>
    <filter-class>
        org.springframework.web.filter.DelegatingFilterProxy
    </filter-class>
</filter>
<filter-mapping>
    <filter-name>springSecurityFilterChain</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>
<servlet>

Not only is it important for Spring Security to be declared as the first <filter-mapping> element, but we should also be aware that, with the example configuration, Spring Security will not intercept forwards, includes, or errors. Often, it is not necessary to intercept other types of requests, but if you need to do this, the dispatcher element for each type of request should be included in <filter-mapping>. We will not perform these steps for our application, but you can see an example in the following code snippet:

src/main/webapp/WEB-INF/web.xml

<filter-mapping>
    <filter-name>springSecurityFilterChain</filter-name>
    <url-pattern>/*</url-pattern>
    <dispatcher>REQUEST</dispatcher>
    <dispatcher>ERROR</dispatcher>
    ...
</filter-mapping>

DelegatingFilterProxy

The o.s.web.filter.DelegatingFilterProxy class is a servlet filter provided by Spring Web that delegates all work to a Spring bean from the root ApplicationContext, which must implement javax.servlet.Filter. Since, by default, the bean is looked up by name using the value of <filter-name>, we must ensure that we use springSecurityFilterChain as the value of <filter-name>. Pseudo-code for how o.s.web.filter.DelegatingFilterProxy works for our web.xml file can be found in the following code snippet:

public class DelegatingFilterProxy implements Filter {
    void doFilter(request, response, filterChain) {
        Filter delegate = applicationContext.getBean("springSecurityFilterChain");
        delegate.doFilter(request, response, filterChain);
    }
}

FilterChainProxy

When working in conjunction with Spring Security, o.s.web.filter.DelegatingFilterProxy will delegate to Spring Security's o.s.s.web.FilterChainProxy, which was created in our minimal security.xml file. FilterChainProxy allows Spring Security to conditionally apply any number of servlet filters to the servlet request. We will learn more about each of the Spring Security filters, and their role in ensuring that our application is properly secured, throughout the rest of the book.
The pseudo-code for how FilterChainProxy works is as follows:

public class FilterChainProxy implements Filter {
    void doFilter(request, response, filterChain) {
        // lookup all the Filters for this request
        List<Filter> delegates = lookupDelegates(request, response)
        // invoke each filter unless the delegate decided to stop
        for delegate in delegates {
            if continue processing
                delegate.doFilter(request, response, filterChain)
        }
        // if all the filters decide it is ok, allow the
        // rest of the application to run
        if continue processing
            filterChain.doFilter(request, response)
    }
}

Because both DelegatingFilterProxy and FilterChainProxy are the front door to Spring Security when used in a web application, it is here that you would add a debug point when trying to figure out what is happening.

Running a secured application

If you have not already done so, restart the application and visit http://localhost:8080/calendar/, and you will be presented with a login screen. Great job! We've implemented a basic layer of security in our application using Spring Security. At this point, you should be able to log in using user1@example.com as the username and user1 as the password (user1@example.com/user1). You'll see the calendar welcome page, which describes at a high level what to expect from the application in terms of security.

Common problems

Many users have trouble with the initial implementation of Spring Security in their application. A few common issues and suggestions are listed next. We want to ensure that you can run the example application and follow along!

Make sure you can build and deploy the application before putting Spring Security in place. Review some introductory samples and documentation on your servlet container if needed. It's usually easiest to use an IDE, such as Eclipse, to run your servlet container. Not only is deployment typically seamless, but the console log is also readily available to review for errors. You can also set breakpoints at strategic locations, to be triggered on exceptions, to better diagnose errors.

If your XML configuration file is incorrect, you will get an error like this (or something similar): org.xml.sax.SAXParseException: cvc-elt.1: Cannot find the declaration of element 'beans'. It's quite common for users to get confused by the various XML namespace references required to properly configure Spring Security. Review the samples again, paying attention to avoid line wrapping in the schema declarations, and use an XML validator to verify that you don't have any malformed XML.

If you get an error stating "BeanDefinitionParsingException: Configuration problem: Unable to locate Spring NamespaceHandler for XML schema namespace [http://www.springframework.org/schema/security] ...", ensure that the spring-security-config-3.1.0.RELEASE.jar file is on your classpath. Also ensure the version matches the other Spring Security JARs and the XML declaration in your Spring configuration file.

Make sure the versions of Spring and Spring Security that you're using match, and that there aren't any unexpected Spring JARs remaining as part of your application. As previously mentioned, when using Maven, it can be a good idea to declare the Spring dependencies in the dependency management section.
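When diagnosing these problems, it can also help to raise Spring Security's log level so the filter chain's decisions become visible at startup and per request. Assuming the application logs through Log4j (the exact logging setup is an assumption, not something shown in this article), a fragment like the following would do it:

# Hypothetical log4j.properties fragment (not from the sample application):
# raise Spring Security's log level to see filter chain decisions
log4j.logger.org.springframework.security=DEBUG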

Wireshark: Working with Packet Streams

Packt
11 Mar 2013
3 min read
(For more resources related to this topic, see here.)

Working with Packet Streams

While working on a network capture, there can be multiple network activities going on at once. Consider a small example where you are simultaneously browsing multiple websites through your browser. Several TCP data packets will be flowing across your network for all of these websites, so it becomes tedious to track the data packets belonging to a particular stream or session. This is where Follow TCP Stream comes into action.

When you are visiting multiple websites, each site maintains its own stream of data packets. By using the Follow TCP Stream option, we can apply a filter that locates packets specific to a particular stream. To view the complete stream, select your preferred TCP packet (for example, a GET or POST request). Right-clicking on it will bring up the Follow TCP Stream option.

Once you click on Follow TCP Stream, you will notice that a new filter rule is applied to Wireshark and the main capture window reflects all the data packets that belong to that stream. This can be helpful in figuring out what different requests/responses have been generated through a particular session of network interaction. If you take a closer look at the filter rule applied once you follow a stream, you will see a rule similar to tcp.stream eq <Number>. Here, Number is the stream number that has to be followed to get the various data packets.

An additional operation that can be carried out here is to save the data packets belonging to a particular stream. Once you have followed a particular stream, go to File | Save As, then select Displayed to save only the packets belonging to the viewed stream.

Similar to following a TCP stream, we also have the option to follow UDP and SSL streams. The two options can be reached by selecting a packet of the particular protocol type (UDP or SSL) and right-clicking on it; the appropriate follow option will be highlighted according to the selected protocol.

The Wireshark menu icons also provide some quick navigation options to move through the captured packets. These icons include:

Go back in packet history (1): This option traces you back to the last analyzed/selected packet. Clicking on it multiple times keeps pushing you back through your selection history.
Go forward in packet history (2): This option pushes you forward in the series of packet analysis.
Go to packet (3): This option is useful for going directly to a specific packet number.
Go to the first packet (4): This option takes you to the first packet in your current display of the capture window.
Go to the last packet (5): This option jumps your selection to the last packet in your capture window.

Summary

In this article, we learned how to work with packet streams.

Resources for Article: Further resources on this subject: BackTrack 5: Advanced WLAN Attacks [Article] BackTrack 5: Attacking the Client [Article] Debugging REST Web Services [Article]
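The same stream filter can also be applied outside the GUI using tshark, Wireshark's command-line companion. The following is a sketch; note that the display-filter flag depends on your tshark version (-Y on newer builds, -R on older ones):

# Print only the packets belonging to TCP stream 5 from a saved capture
tshark -r capture.pcap -Y "tcp.stream eq 5"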

The DPM Feature Set

Packt
29 Jul 2011
3 min read
(For more resources on this subject, see here.)

DPM has a robust set of features and capabilities. The following are some of the most valuable ones:

Disk-based data protection and recovery
Continuous backup
Tape-based archiving and backup
Built-in monitoring
Cloud-based backup and recovery
Built-in reports and notifications
Integration with Microsoft System Center Operations Manager
Windows PowerShell integration for scripting
Remote administration
Tight integration with other Microsoft products
Protection of clustered servers
Protection of application-specific servers
Backing up the system state
Backing up client computers

New features of DPM 2010

Microsoft has done a great job of updating Data Protection Manager 2010 with great new features and some much-needed ones. There were some issues with Data Protection Manager 2007 that would force an administrator to perform routine maintenance on it. Most of these issues have been resolved in Data Protection Manager 2010. The following are the most exciting new features of DPM:

DPM 2007 to DPM 2010 in-place upgrade
Auto-Rerun and Auto-CC (Consistency Check), which automatically fix Replica Inconsistent errors
Auto-Grow, which automatically grows volumes as needed
The ability to shrink volumes as needed
Bare metal restore
A backup SLA report that can be configured and e-mailed to you daily
A self-restore service of SQL backups for SQL database administrators
When backing up SharePoint 2010, no recovery farm is required for item-level recoveries, for example, recovering SharePoint list items or items in a SharePoint farm that uses host headers. This is an improvement in SharePoint that DPM takes advantage of.
Better backup for mobile or disconnected employees (this requires VPN or DirectAccess)
End users of protected clients are able to recover their data without an administrator doing anything.
DPM is Live Migration aware. We already know DPM can protect VMs on Hyper-V. Now DPM will automatically continue protection of a VM even after it has been migrated to a different Hyper-V server. The Hyper-V server has to be a Windows Server 2008 R2 clustered server.
DPM2DPM4DR (DPM to DPM for Disaster Recovery) allows you to back up your DPM server to a second DPM server. This feature was available in 2007, and it can now be set up via the GUI. You can also perform chained DPM backups, so you could have DPM A, DPM B, and DPM C. Previously, you could only have a secondary DPM server backing up a primary DPM server.

With the 2010 release, a single DPM server's scalability has been increased over the previous 2007 release:

DPM can handle 80 TB per server
DPM can back up up to 100 servers
DPM can back up up to 1000 clients
DPM can back up up to 2000 SQL databases

As you can see from the previous list, there are many enhancements in DPM 2010 that will benefit administrators as well as end users.

Summary

In this article we took a look at the existing as well as new features of DPM.

Further resources on this subject: Installing Data Protection Manager 2010 [article] Overview of Data Protection Manager 2010 [article] Debatching Bulk Data on Microsoft Platform [article]

Installing Data Protection Manager 2010

Packt
01 Jun 2011
6 min read
With the DPM upgrade you will face some of the same issues as with the installation, such as: what are the prerequisites? Is your operating system patched? Is DPM 2007 fully patched and ready for the upgrade? This article walks you through each step of the DPM installation process in the first half, and the DPM 2007 to DPM 2010 upgrade in the second half. After reading this article you should know what to look for when working through the prerequisites and requirements. The goal is to ensure that your install or upgrade goes smoothly.

Prerequisites

In this section we will jump right into the prerequisites for DPM and how to install them, as well as the two different ways to install DPM. We will also go through the DPM 2007 to DPM 2010 upgrade process. We will first visit the hardware and software requirements, and a pre-install step that is needed before you are able to actually install DPM 2010.

Hardware requirements

DPM 2010 requires a 1 GHz processor (dual-core or faster) and 4 GB of RAM or higher; the page file should be set to 1.5 or 2 times the amount of RAM on the computer. The DPM disk space requirements are as follows:

The DPM installation location needs 3 GB free
The database file drive needs 900 MB free
The system drive needs 1 GB free
Disk space for protected data should be 2.5 to 3 times the size of the protected data

DPM also needs to be on a dedicated, single-purpose computer.

Software requirements

DPM has requirements for both the operating system and the software that needs to be on the server before DPM can be installed. Let's take a look at what these requirements are.

Operating system

DPM 2007 can be installed on both 32-bit and 64-bit operating systems. However, DPM 2010 is only supported on 64-bit operating systems. DPM can be installed on Windows Server 2008 and Windows Server 2008 R2; it is recommended that you install DPM 2010 on Windows Server 2008 R2. DPM can be deployed in a Windows Server 2008, Windows Server 2008 R2, or Windows Server 2003 Active Directory domain.

Be sure to launch Windows Update and completely patch the server before you start the DPM installation, no matter which operating system you decide to use. If you end up using Windows Server 2008 for your DPM deployment, you will need to install the following hotfixes before you start the DPM installation:

FIX: You are prompted to format the volume when a formatted volume is mounted on an NTFS folder that is located on a computer that is running Windows Server 2008 or Windows Vista (KB971254).
Dynamic disks are marked as "Invalid" on a computer that is running Windows Server 2008 or Windows Vista when you bring the disks online, take the disks offline, or restart the computer, if Data Protection Manager is installed (KB962975).
An application or service that uses a file system filter driver may experience function failure on a computer that is running Windows Vista, Windows Server 2003, or Windows Server 2008 (KB975759).

Software

By default, DPM will install any software prerequisite automatically if it is not enabled or installed. Sometimes these software prerequisites might fail during the DPM setup. If they do, you can install them manually. Visit the Microsoft TechNet site for detailed information on installing the software prerequisites.
The following is a list of the software that DPM requires before it can be installed:

Microsoft .NET Framework 3.5 with Service Pack 1 (SP1)
Microsoft Visual C++ 2008 Redistributable
Windows PowerShell 2.0
Windows Installer 4.5 or later versions
Windows Single Instance Store (SIS)
Microsoft Application Error Reporting

NOTE: It is recommended that you manually install Single Instance Store on your server before you even begin the DPM 2010 installation. We shall see a step-by-step installation, along with a detailed explanation of Single Instance Store, shortly.

User privilege requirement

The server that you plan to install DPM on must be joined to a domain before you install the DPM software. In order to join the server to the domain you need at least domain administrative privileges. You also need administrative privileges on the local server to install the DPM software.

Restrictions

DPM has to be installed on a dedicated server. It is best to make sure that DPM is the only server role running on the server you use for it. You will run into issues if you try to install DPM on a server with other roles on it. The following are the restrictions you need to pay attention to when installing DPM:

DPM should not be installed on a domain controller (not recommended)
DPM cannot be installed on an Exchange server
DPM cannot be installed on a server with System Center Operations Manager installed on it
The server you install on cannot be a node in a cluster

There is one exception: you can install DPM on a domain controller and make it work, but this is not supported by Microsoft.

Single Instance Store

Before you install DPM on your server, it is important to install a technology called Single Instance Store (SIS). SIS will ensure you get the maximum performance out of your disk space and reduce bandwidth needs on DPM. SIS is a technology that keeps the overhead of handling duplicate files low. This is often referred to as de-duplication. SIS eliminates data duplication by storing only one copy of files on backup storage media. SIS is used in storage, mail, and backup solutions such as DPM. It helps to lower bandwidth costs when copying data across a network, as well as the amount of storage space needed. Microsoft has used a single instance store in Exchange since version 4.0.

SIS searches a hard disk and identifies duplicate files. SIS then saves only one copy of the files to a central location, such as a DPM storage pool, and replaces the other copies of the files with pointers that direct you to the copy the SIS repository already has stored.

Installing Single Instance Store (SIS)

The following procedure walks you through the steps required to install SIS:

Click on Start and then click on Run. In the Run dialog box, type CMD.exe and press OK.

At the command prompt, type the following and then press Enter:

start /wait ocsetup.exe SIS-Limited /quiet /norestart

Restart the server.

To ensure the installation of SIS went okay, check for the existence of the SIS registry key. Click on Start, then click on Run. In the Run dialog box, type regedit and press OK. Navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\SIS.

If the SIS key is shown in the registry, it means that Single Instance Store (SIS) is installed properly, and you can be sure to get the maximum performance out of your disk space on the DPM server.
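If you would rather verify the key from the same command prompt instead of opening Registry Editor, a query along the following lines should work; this is a sketch using the built-in reg tool against the key path given above:

reg query "HKLM\SYSTEM\CurrentControlSet\Services\SIS"

If the command prints the key and its values, SIS is installed; if it reports that the specified registry key could not be found, rerun the ocsetup step.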

Overview of Data Protection Manager 2010

Packt
30 May 2011
9 min read
(For more resources on Microsoft, see here.)

DPM structure

In this section we will look at the DPM file structure in order to have a better understanding of where DPM stores its components. We will also look at important processes that DPM runs and what they are used for. There will be some hints and tips that will be useful when administering DPM.

DPM file locations

It is important to know not only how DPM operates, but also the structure that is underneath the application. Understanding where the DPM components live will help you with administering and troubleshooting DPM if the need arises. The following are some important locations:

The DPM database backups are stored in the following location. The backup shadow copies you make for the replicas are also stored in this directory. You would make backup shadow copies of your replicas if you were archiving them using a third-party backup solution:
C:\Program Files\Microsoft DPM\DPM\Volumes\ShadowCopy\Database Backups

The following directory is where DPM is installed:
C:\Program Files\Microsoft DPM

The following directory contains PowerShell scripts that come with DPM. There are many scripts that can be used for performing common DPM tasks:
C:\Program Files\Microsoft DPM\DPM\bin

The following folder contains the database and files for SQL Server Reporting Services:
C:\Program Files\Microsoft DPM\SQL

The following directory contains the SQL DPM database (.MDF and .LDF files):
C:\Program Files\Microsoft DPM\DPM\DPMDB

The following directory stores shadow copy volumes that are recovery points for a data source. These are essentially the changed blocks of VSS (Volume Shadow Copy Service) shadow copies:
C:\Program Files\Microsoft DPM\DPM\Volumes\DiffArea

The following folder contains mounted replica volumes. Mounted replica volumes are essentially pointers, one for every protected data object, that point to the partition in a DPM storage pool. Think of these mounted replica points as a map from DPM to the protected data on the hard drives where the actual protected data lives:
C:\Program Files\Microsoft DPM\DPM\Volumes\Replica
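Because the DPM\bin directory above ships with PowerShell scripts and the DPM Management Shell, routine checks can be scripted rather than clicked through. The following is a minimal sketch, not from the book: it assumes the DPM 2010 Management Shell is loaded, and the server name DPM01 is a placeholder.

# Sketch for the DPM Management Shell; "DPM01" is a placeholder server name.
# Lists the protection groups configured on the DPM server.
$groups = Get-ProtectionGroup -DPMServerName "DPM01"
foreach ($group in $groups) {
    # FriendlyName is the name shown in the DPM Administrator Console
    Write-Output $group.FriendlyName
}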
DPM processes

We are now going to explore the DPM processes. The executable files for these are all located in C:\Program Files\Microsoft DPM\DPM\bin. You can view these processes in Windows Task Manager, and they show up in Windows Services as well. The following screenshot shows the DPM services as they appear in Windows Services. We will look at what each of these processes is and what it does, and then at the processes that have an impact on the performance of your DPM server. The processes are as follows:

DPMAMService.exe: In Windows Services this is listed as the DPM AccessManager Service. This manages access to DPM.

DpmWriter.exe: This is a service as well, so you will see it in the services list. This service is used for archiving. It manages the backup shadow copies or replicas, backups of report databases, as well as DPM backups.

Msdpm.exe: The DPM service is the core component of DPM. The DPM service manages all core DPM operations, including replica creation, synchronization, and recovery point creation. This service implements and manages synchronization and shadow copy creation for protected file servers.

DPMLA.exe: This is the DPM Library Agent Service.

DPMRA.exe: This is the DPM Replication Agent. It helps to back up and recover file and application data to DPM.

Dpmac.exe: This is known as the DPM Agent Coordinator Service. It manages the installation, uninstallation, and upgrade of DPM protection agents on the remote computers that you need to protect.

DPM processes that impact DPM performance

The Msdpm.exe, MsDpmProtectionAgent.exe, Microsoft$DPM$Acct.exe, and mmc.exe processes take a toll on DPM performance.

mmc.exe is a standard Windows service. "MMC" stands for Microsoft Management Console, an application used to display various management plug-ins. Not all, but a good number of Microsoft server applications run in the MMC, such as Exchange, ISA, IIS, System Center, and the Microsoft Server Manager. The DPM Administrator Console runs in an MMC as well. mmc.exe can cause high memory usage. The best way to ensure that this process does not overload your memory is to close the DPM Administrator Console when you are not using it.

MsDpmProtectionAgent.exe is the DPM Protection Agent service, and it affects both CPU and memory usage when DPM jobs and consistency checks are run. There is nothing you can do to reduce the usage of this service. You just need to be aware of it and try not to schedule other resource-intensive applications, such as antivirus scans, at the same time as DPM jobs or consistency checks.

Msdpm.exe is the service that runs synchronizations and shadow copy creations, as stated previously. Like MsDpmProtectionAgent.exe, Msdpm.exe also affects CPU and memory usage when running synchronizations and shadow copies, and there is nothing you can do to the service itself to reduce this. Just make sure to keep the system clear of resource-intensive applications when Msdpm.exe is running jobs.

If you are running a local SQL instance for your DPM deployment, you will notice a Microsoft$DPM$Acct.exe process. The SQL Server and SQL Agent services use a Microsoft$DPM$Acct account, and this process normally runs at a high level. It reserves part of your system's memory for cache; if system memory runs low, the Microsoft$DPM$Acct.exe process will release the memory cache it has reserved.
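A quick way to keep an eye on these processes without opening Task Manager is a PowerShell one-liner such as the following sketch; the process names are the ones discussed above, and any that are not currently running are simply skipped:

# Sketch: show CPU time and memory of the DPM-related processes discussed above
Get-Process msdpm, DPMRA, DPMAMService, MsDpmProtectionAgent, mmc -ErrorAction SilentlyContinue |
    Format-Table Name, Id, CPU, WorkingSet -AutoSize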
  A consistency check is performed only when the normal mechanisms for recording changes to protected data, and for applying those changes to replicas, have been interrupted.
- Express full backup: A synchronization operation in which the protection agent transfers a snapshot of all the blocks that have changed since the previous express full backup (or since initial replica creation, for the first express full backup).
- Shadow copy: A point-in-time copy of files and folders that is stored on the DPM server. Shadow copies are sometimes referred to as snapshots.
- Shadow copy client software: Client software that enables an end user to independently recover data by retrieving a shadow copy.
- Replica: A complete copy of the protected data on a single volume, database, or storage group. Each member of a protection group is associated with a replica on the DPM server.
- Replica creation: The process by which a full copy of the data sources selected for inclusion in a protection group is transferred to the DPM storage pool. The replica can be created over the network from data on the protected computer or from a tape backup system. Replica creation is an initialization process that is performed for each data source when the data source is added to a protection group.
- Replica volume: A volume on the DPM server that contains the replica for a protected data source.
- Custom volume: A volume that is not in the DPM storage pool and is specified to store the replica and recovery points for a protection group member.
- Dismount: To remove a removable tape or disc from a drive.
- DPM Alerts log: A log that stores DPM alerts as Windows events so that the alerts can be displayed in Microsoft System Center Operations Manager (SCOM).
- DPMDB.mdf: The filename of the DPM database, the SQL Server database that stores DPM settings and configuration information.
- DPMDBReaders group: A group, created during DPM installation, that contains all accounts that have read-only access to the DPM database. The DPMReport account is a member of this group.
- DPMReport account: The account that the Web and NT services of SQL Server Reporting Services use to access the DPM database. This account is created when an administrator configures DPM reporting.
- MICROSOFT$DPM$: The name that the DPM setup assigns to the SQL Server instance used by DPM.
- Microsoft$DPMWriter$ account: The low-privilege account under which DPM runs the DPM Writer service. This account is created during the DPM installation.
- MSDPMTrustedMachines group: A group that contains the domain accounts for computers that are authorized to communicate with the DPM server. DPM uses this group to ensure that only computers that have the DPM protection agent installed from a specific DPM server can respond to calls from that server.
- Protection configuration: The collection of settings that is common to a protection group; specifically, the protection group name, disk allocations, replica creation method, and on-the-wire compression.
- Protection group: A collection of data sources that share the same protection configuration.
- Protection group member: A data source within a protection group.
- Protected computer: A computer that contains data sources that are protection group members.
- Synchronization: The process by which DPM transfers changes from the protected computer to the DPM server and applies the changes to the replica of the protected volume.
- Recovery goals: The retention range, data loss tolerance, and frequency of recovery points for protected data.
- Recovery collection: The aggregate of all recovery jobs associated with a single recovery operation.
- Recovery point: The date and time of a previous version of a data source that is available for recovery from media that is managed by DPM.
- Report database: The SQL Server database that stores DPM reporting information (ReportServer.mdf).
- ReportServer.mdf: In DPM, the filename of the report database, the SQL Server database that stores reporting information.
- Retention range: The duration of time for which the data should be available for recovery.

Spring Security 3: Tips and Tricks

Packt
28 Feb 2011
6 min read
Spring Security 3
Make your web applications impenetrable. Implement authentication and authorization of users. Integrate Spring Security 3 with common external security providers. Packed full with concrete, simple, and concise examples.

It's a good idea to change the default value of the spring_security_login page URL. Tip: Not only is the resulting URL more user- and search-engine friendly, it also disguises the fact that you're using Spring Security as your security implementation. Obscuring Spring Security in this way could make it harder for malicious hackers to find holes in your site in the unlikely event that a security hole is discovered in Spring Security. Although security through obscurity does not reduce your application's vulnerability, it does make it harder for standardized hacking tools to determine what types of vulnerabilities you may be susceptible to.

Evaluating authorization rules
Tip: For any given URL request, Spring Security evaluates authorization rules in top-to-bottom order, and the first rule matching the URL pattern is applied. Typically, this means that your authorization rules will be ordered from most specific to least specific. It's important to remember this when developing complicated rule sets, as developers can often get confused over which authorization rule takes effect. Just remember the top-to-bottom order, and you can easily find the correct rule in any scenario!

Using the JSTL URL tag to handle relative URLs
Tip: Use the JSTL core library's url tag to ensure that URLs you provide in your JSP pages resolve correctly in the context of your deployed web application. The url tag will resolve URLs provided as relative URLs (starting with a /) to the root of the web application. You may have seen other techniques to do this using JSP expression code (<%=request.getContextPath() %>), but the JSTL url tag allows you to avoid inline code!

Modifying username or password and the remember me feature
Tip: As you might anticipate, if the user changes their username or password, any remember me tokens set will no longer be valid. Make sure that you provide appropriate messaging to users if you allow them to change these bits of their account.

Configuration of remember me session cookies
Tip: If token-validity-seconds is set to -1, the login cookie will be set to a session cookie, which does not persist after the user closes their browser. The token will be valid (assuming the user doesn't close their browser) for a non-configurable length of 2 weeks. Don't confuse this with the cookie that stores your user's session ID; they're two different things with similar names!

Checking full authentication without expressions
Tip: If your application does not use SpEL expressions for access declarations, you can still check whether the user is fully authenticated by using the IS_AUTHENTICATED_FULLY access rule (for example, access="IS_AUTHENTICATED_FULLY"). Be aware, however, that standard role access declarations aren't as expressive as SpEL ones, so you will have trouble handling complex boolean expressions.

Debugging remember me cookies
Tip: There are two difficulties when attempting to debug issues with remember me cookies. The first is getting the cookie value at all! Spring Security doesn't offer any log level that will log the cookie value that was set. We'd suggest a browser-based tool such as Chris Pederick's Web Developer plug-in (http://chrispederick.com/work/web-developer/) for Mozilla Firefox; browser-based development tools typically allow selective examination (and even editing) of cookie values. The second (admittedly minor) difficulty is decoding the cookie value. You can feed the cookie value into an online or offline Base64 decoder (remember to add trailing = signs to make it a valid Base64-encoded string!).
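As a quick sketch of that decoding step, here is a small program using Java 8's java.util.Base64 (in the Spring Security 3 era you would more likely reach for commons-codec, but the idea is identical). The token value is fabricated for the demo by encoding a sample string first, rather than being a real Spring Security cookie:

import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class RememberMeCookieDecoder {
    public static void main(String[] args) {
        // Stand-in for a value copied out of the browser; fabricated here so
        // the example is self-contained and verifiable.
        String cookieValue = Base64.getEncoder().withoutPadding()
                .encodeToString("james:1300000000000:5d41402a".getBytes(StandardCharsets.UTF_8));

        // Re-add the trailing '=' padding until the length is a multiple of 4,
        // exactly as the tip above suggests.
        StringBuilder padded = new StringBuilder(cookieValue);
        while (padded.length() % 4 != 0) {
            padded.append('=');
        }

        byte[] decoded = Base64.getDecoder().decode(padded.toString());
        // Token-based remember me values decode to username:expiry:signature
        System.out.println(new String(decoded, StandardCharsets.UTF_8));
    }
}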
Making effective use of an in-memory UserDetailsService
Tip: A very common scenario for the use of an in-memory UserDetailsService and hard-coded user lists is the authoring of unit tests for secured components. Unit test authors often code or configure the minimal context needed to test the functionality of the component under test. Using an in-memory UserDetailsService with a well-defined set of users and GrantedAuthority values provides the test author with an easily controlled test environment (a minimal sketch appears at the end of these tips).

Storing sensitive information
Tip: Many guidelines that apply to the storage of passwords apply equally to other types of sensitive information, including social security numbers and credit card information (although, depending on the application, some of these may require the ability to decrypt). It's quite common for databases storing this type of information to represent it in multiple ways; for example, a customer's full 16-digit credit card number would be stored in a highly encrypted form, but the last four digits might be stored in cleartext (for reference, think of any internet commerce site that displays XXXX XXXX XXXX 1234 to help you identify your stored credit cards).

Annotations at the class level
Tip: Be aware that method-level security annotations can also be applied at the class level! Method-level annotations, if supplied, will always override annotations specified at the class level. This can be helpful if your business needs dictate the specification of security policies for an entire class at a time. Take care to use this functionality in conjunction with good comments and coding standards, so that developers are very clear about the security characteristics of a class and its methods.

Authenticating the user against LDAP
Tip: Do not make the very common mistake of configuring an <authentication-provider> with a user-details-service-ref referring to an LdapUserDetailsService if you are intending to authenticate the user against LDAP itself!

Externalize URLs and environment-dependent settings
Tip: Coding URLs into Spring configuration files is a bad idea. Typically, storage of, and consistent reference to, URLs is pulled out into a separate properties file, with placeholders consistent with the Spring PropertyPlaceholderConfigurer. This allows for the reconfiguration of environment-specific settings via externalizable properties files without touching the Spring configuration files, and is generally considered good practice.
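As promised under the in-memory UserDetailsService tip, here is a minimal hand-rolled implementation suitable for unit tests. The user names, passwords, and role names are invented for the example, and SimpleGrantedAuthority is the Spring Security 3.1 class (on 3.0 you would substitute GrantedAuthorityImpl):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.springframework.security.core.GrantedAuthority;
import org.springframework.security.core.authority.SimpleGrantedAuthority;
import org.springframework.security.core.userdetails.User;
import org.springframework.security.core.userdetails.UserDetails;
import org.springframework.security.core.userdetails.UserDetailsService;
import org.springframework.security.core.userdetails.UsernameNotFoundException;

// Hand-rolled in-memory user store for unit-testing secured components.
public class TestUserDetailsService implements UserDetailsService {

    private final Map<String, UserDetails> users = new HashMap<String, UserDetails>();

    public TestUserDetailsService() {
        List<GrantedAuthority> userAuthorities = new ArrayList<GrantedAuthority>();
        userAuthorities.add(new SimpleGrantedAuthority("ROLE_USER"));
        users.put("alice", new User("alice", "secret", true, true, true, true, userAuthorities));

        List<GrantedAuthority> adminAuthorities = new ArrayList<GrantedAuthority>();
        adminAuthorities.add(new SimpleGrantedAuthority("ROLE_USER"));
        adminAuthorities.add(new SimpleGrantedAuthority("ROLE_ADMIN"));
        users.put("bob", new User("bob", "secret", true, true, true, true, adminAuthorities));
    }

    public UserDetails loadUserByUsername(String username) {
        UserDetails user = users.get(username);
        if (user == null) {
            throw new UsernameNotFoundException(username);
        }
        return user;
    }
}

A test can then wire this into whatever authentication manager or secured component it exercises, with no external user store to stand up.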
Summary
In this article we took a look at some of the tips and tricks for Spring Security.

Further resources on this subject:
- Spring Security 3 [Book]
- Migration to Spring Security 3 [Article]
- Opening up to OpenID with Spring Security [Article]
- Spring Security: Configuring Secure Passwords [Article]

Designing Secure Java EE Applications in GlassFish

Packt
31 May 2010
2 min read
Security is an orthogonal concern for an application and we should assess it right from the start by reviewing the analysis we receive from business and functional analysts. Assessing the security requirements results in understanding the functionalities we need to include in our architecture to deliver a secure application covering the necessary requirements. Security requirements can cover a wide range of needs, varying from simple authentication to several sub-systems, including an identity and access management system and transport security, which can involve encrypting data as well.

In this article series, we will develop a secure Java EE application based on Java EE and GlassFish capabilities. In the course of the series, we will cover the following topics:
- Analyzing Java EE application security requirements
- Including security requirements in Java EE application design
- Developing a secure Business layer using EJBs
- Developing a secure Presentation layer using JSP and Servlets
- Configuring deployment descriptors of Java EE applications
- Specifying the security realm for enterprise applications
- Developing a secure application client module
- Configuring the Application Client Container

Developing Secure Java EE Applications in GlassFish is the second part of this article series.

Understanding the sample application
The sample application that we are going to develop converts different length measurement units into each other: meter to centimeter, millimeter, and inch. The application also stores usage statistics for later use cases. Guest users who prefer not to log in can only use meter to centimeter conversion; any company employee can use meter to centimeter and meter to millimeter conversion; and any of the company's managers can access meter to inch conversion in addition to the two other conversion functionalities.

We should show a custom login page to comply with the site-wide look and feel. No encryption is required for general communication between clients and our application, but we need to make sure that no one can intercept and steal the usernames and passwords provided by members. All members' identification information is stored in the company-wide directory server.

The following diagram shows the high-level functionality of the sample application: we have a login action and three conversion actions, some of which users can access only after logging in, while others can be accessed without logging in.
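The Business layer itself is developed in the follow-up article, but these access rules map naturally onto a session bean secured with standard Java EE annotations. The sketch below is only a preview under our own assumptions: the class name, conversion arithmetic, and no-interface view are illustrative, while the role names match the requirements above and the method names follow the servlet listing in the second article.

import javax.annotation.security.DeclareRoles;
import javax.annotation.security.PermitAll;
import javax.annotation.security.RolesAllowed;
import javax.ejb.Stateless;

// Illustrative sketch: method-level annotations mirroring the
// guest/employee/manager rules described above.
@Stateless
@DeclareRoles({"employee_role", "manager_role"})
public class ConversionBean {

    @PermitAll // guests may convert meter to centimeter
    public int toCentimeter(int meter) {
        return meter * 100;
    }

    @RolesAllowed({"employee_role", "manager_role"}) // any employee
    public int toMilimeter(int meter) {
        return meter * 1000;
    }

    @RolesAllowed("manager_role") // managers only
    public int toInch(int meter) {
        return Math.round(meter * 39.37f);
    }
}

When a caller lacking the required role invokes one of these methods, the EJB container raises an access exception; the second article shows how the web tier turns that into an HTTP 401 response.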

Developing Secure Java EE Applications in GlassFish

Packt
31 May 2010
14 min read
In this article series, we will develop a secure Java EE application based on Java EE and GlassFish capabilities. In the course of the series, we will cover the following topics:
- Analyzing Java EE application security requirements
- Including security requirements in Java EE application design
- Developing a secure Business layer using EJBs
- Developing a secure Presentation layer using JSP and Servlets
- Configuring deployment descriptors of Java EE applications
- Specifying the security realm for enterprise applications
- Developing a secure application client module
- Configuring the Application Client Container

Read Designing Secure Java EE Applications in GlassFish here.

Developing the Presentation layer
The Presentation layer is the closest layer to end users when we are developing applications that are meant to be used by humans instead of other applications. In our application, the Presentation layer is a Java EE web application consisting of the elements listed below. As you can see, the different JSP files are categorized into different directories to make the security description easier.

- Index.jsp: The application entry point. It has links to the functional JSP pages like toMilli.jsp and so on.
- auth/login.html: Presents a custom login page to users when they try to access a restricted resource. This file is placed inside the auth directory of the web application.
- auth/logout.jsp: Logs users out of the system after their work is finished.
- auth/loginError.html: Unsuccessful login attempts redirect users to this page. This file is placed inside the auth directory of the web application.
- jsp/toInch.jsp: Converts a given length to inches; it is only available to managers.
- jsp/toMilli.jsp: Converts a given length to millimeters; this page is available to any employee.
- jsp/toCenti.jsp: Converts a given length to centimeters; this functionality is available to everyone.
- Converter servlet: Receives the request, invokes the session bean to perform the conversion, and returns the value to the user.
- auth/accessRestricted.html: An error page for error 401, which happens when authorization fails.
- Deployment descriptors: The descriptors in which we describe the security constraints over the resources we want to protect.

Now that our application's building blocks are identified we can start implementing them to complete the application. Before anything else, let's implement the JSP files that provide the conversion GUI. The directory layout and content of the Web module are shown in the following figure:

Implementing the Conversion GUI
In our application we have an index.jsp file that acts as a gateway to the entire system and is shown in the following listing:

<html>
<head><title>Select A conversion</title></head>
<body><h1>Select A conversion</h1>
<a href="auth/login.html">Login</a> <br/>
<a href="jsp/toCenti.jsp">Convert Meter to Centimeter</a> <br/>
<a href="jsp/toInch.jsp">Convert Meter to Inch</a> <br/>
<a href="jsp/toMilli.jsp">Convert to Millimeter</a><br/>
<a href="auth/logout.jsp">Logout</a>
</body>
</html>

Implementing the Converter servlet
The Converter servlet receives the conversion value and method from the JSP files and calls the corresponding method of a session bean to perform the actual conversion.
The following listing shows the Converter servlet content:

@WebServlet(name="Converter", urlPatterns={"/Converter"})
public class Converter extends HttpServlet {

    @EJB
    private ConversionLocal conversionBean;

    protected void processRequest(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
    }

    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.setContentType("text/html;charset=UTF-8");
        PrintWriter out = response.getWriter();
        try {
            int valueToConvert = Integer.parseInt(request.getParameter("meterValue"));
            String method = request.getParameter("method");
            out.print("<hr/> <center><h2>Conversion Result is: ");
            if (method.equalsIgnoreCase("toMilli")) {
                out.print(conversionBean.toMilimeter(valueToConvert));
            } else if (method.equalsIgnoreCase("toCenti")) {
                out.print(conversionBean.toCentimeter(valueToConvert));
            } else if (method.equalsIgnoreCase("toInch")) {
                out.print(conversionBean.toInch(valueToConvert));
            }
            out.print("</h2></center>");
        } catch (AccessLocalException ale) {
            response.sendError(401);
        } finally {
            out.close();
        }
    }
}

Starting from the beginning, we use annotations to configure the servlet name and mapping instead of using the deployment descriptor. Then we use dependency injection to inject an instance of the Conversion session bean into the servlet, and decide which of its methods to invoke based on the conversion type that the caller JSP sends as a parameter. Finally, we catch javax.ejb.AccessLocalException and send an HTTP 401 error back to inform the client that it does not have the required privileges to perform the requested action. The following figure shows what the result of an invocation could look like:

Each servlet needs some description elements, which can be provided either in the deployment descriptor or, as here, through annotations.

Implementing the conversion JSP files is the last step in implementing the functional pieces. In the following listing you can see the content of the toMilli.jsp file:

<html>
<head><title>Convert To Millimeter</title></head>
<body><h1>Convert To Millimeter</h1>
<form method=POST action="../Converter">Enter Value to Convert:
<input name=meterValue>
<input type="hidden" name="method" value="toMilli">
<input type="submit" value="Submit" />
</form>
</body>
</html>

The toCenti.jsp and toInch.jsp files look the same except for the descriptive content and the value of the hidden method parameter, which will be toCenti and toInch respectively. Now we are finished with the functional parts of the Web layer; we just need to implement the required GUI for security measures.
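One piece the listings rely on but never show is the session bean's local business interface that the servlet injects. A minimal sketch consistent with the calls made above might look like this; the exact signatures are our assumption, and the toMilimeter spelling deliberately follows the servlet listing:

import javax.ejb.Local;

// Hypothetical local business interface matching the Converter servlet's calls.
@Local
public interface ConversionLocal {
    int toCentimeter(int meter);
    int toMilimeter(int meter); // spelling as used in the servlet listing
    int toInch(int meter);
}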
Implementing the authentication frontend
For authentication, we should use a custom login page to have a unified look and feel across the entire web frontend of our application. We can use a custom login page with the FORM authentication method. To implement the FORM authentication method we need a login page and an error page to which users are redirected in case authentication fails. Implementing authentication requires us to go through the following steps:
- Implementing login.html and loginError.html
- Including the security description in web.xml and sun-web.xml or sun-application.xml

Implementing a login page
In FORM authentication we implement our own login form to collect the username and password, and we then pass them to the container for authentication. We should let the container know which field is the username and which is the password by using standard names for these fields. The username field is j_username and the password field is j_password. To pass these fields to the container for authentication we should use j_security_check as the form action. When we post to j_security_check, the servlet container takes action and authenticates the included j_username and j_password against the configured realm. The listing below shows the login.html content:

<form method="POST" action="j_security_check">
Username: <input type="text" name="j_username"><br />
Password: <input type="password" name="j_password"><br />
<br />
<input type="submit" value="Login">
<input type="reset" value="Reset">
</form>

The following figure shows the login page, which is shown when an unauthenticated user tries to access a restricted resource:

Implementing a logout page
A user may need to log out of our system after they are finished using it, so we need to implement a logout page. The following listing shows the logout.jsp file:

<% session.invalidate(); %>
<body>
<center>
<h1>Logout</h1>
You have successfully logged out.
</center>
</body>

Implementing a login error page
Now we should implement loginError.html, an authentication error page to inform the user of an authentication failure:

<html>
<body>
<h2>A Login Error Occurred</h2>
Please click <a href="login.html">here</a> for another try.
</body>
</html>

Implementing an access restricted page
When an authenticated user without the required privileges tries to invoke a session bean method, the EJB container throws a javax.ejb.AccessLocalException. To show a meaningful error page to our users we should either map this exception to an error page, or catch the exception, log the event for audit purposes, and then use the sendError() method of the HttpServletResponse object to send out an error code. We will map the HTTP error code to our custom web pages with meaningful descriptions using the web.xml deployment descriptor; you will see which configuration elements we use to do the mapping. The following snippet shows the AccessRestricted.html file:

<body>
<center>
<p>You need to login to access the requested resource. To login go to <a href="auth/login.html">Login Page</a></p></center>
</body>
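For the catch-and-log variant mentioned above, a small sketch could look like the following. The helper class, logger name, and message wording are our own choices, not part of the sample application; the Converter servlet shown earlier already performs the catch-and-sendError part.

import java.io.IOException;
import java.util.logging.Level;
import java.util.logging.Logger;

import javax.ejb.AccessLocalException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Sketch of "catch, log for audit, then sendError()".
public class AuditedErrorHandling {

    private static final Logger AUDIT = Logger.getLogger("conversion.audit");

    void handle(HttpServletRequest request, HttpServletResponse response,
                AccessLocalException ale) throws IOException {
        // Record who attempted the call and from where, for later auditing.
        AUDIT.log(Level.WARNING,
                "Blocked conversion attempt by user ''{0}'' from {1}",
                new Object[]{request.getRemoteUser(), request.getRemoteAddr()});
        // 401 is mapped to AccessRestricted.html in web.xml.
        response.sendError(HttpServletResponse.SC_UNAUTHORIZED);
    }
}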
Configuring deployment descriptors
So far we have implemented the files required for FORM-based authentication, and we only need to include the required descriptions in the web.xml file. Looking back at the application requirement definitions, we see that anyone can use the meter to centimeter conversion functionality, while every other functionality requires the user to log in. We use three different JSP pages for the different types of conversion. We do not need any constraint on toCenti.jsp, therefore we do not need to include any definition for it. Per the application description, any employee can access the toMilli.jsp page. Defining the security constraint for this page is shown in the following listing:

<security-constraint>
  <display-name>You should be an employee</display-name>
  <web-resource-collection>
    <web-resource-name>all</web-resource-name>
    <description/>
    <url-pattern>/jsp/toMilli.jsp</url-pattern>
    <http-method>GET</http-method>
    <http-method>POST</http-method>
    <http-method>DELETE</http-method>
  </web-resource-collection>
  <auth-constraint>
    <description/>
    <role-name>employee_role</role-name>
  </auth-constraint>
</security-constraint>

We should put enough constraints on the toInch.jsp page so that only managers can access it. The listing below shows the security constraint definition for this page:

<security-constraint>
  <display-name>You should be a manager</display-name>
  <web-resource-collection>
    <web-resource-name>Inch</web-resource-name>
    <description/>
    <url-pattern>/jsp/toInch.jsp</url-pattern>
    <http-method>GET</http-method>
    <http-method>POST</http-method>
  </web-resource-collection>
  <auth-constraint>
    <description/>
    <role-name>manager_role</role-name>
  </auth-constraint>
</security-constraint>

Finally we need to define every role we used in the deployment descriptor. The following snippet shows how we define these roles in web.xml:

<security-role>
  <description/>
  <role-name>manager_role</role-name>
</security-role>
<security-role>
  <description/>
  <role-name>employee_role</role-name>
</security-role>

Looking back at the application requirements, we need to define a data constraint to ensure that the usernames and passwords provided by our users are safe during transmission. The following listing shows how we can define this data constraint on the login.html page:

<security-constraint>
  <display-name>Login page Protection</display-name>
  <web-resource-collection>
    <web-resource-name>Authentication</web-resource-name>
    <description/>
    <url-pattern>/auth/login.html</url-pattern>
    <http-method>GET</http-method>
    <http-method>POST</http-method>
  </web-resource-collection>
  <user-data-constraint>
    <description/>
    <transport-guarantee>CONFIDENTIAL</transport-guarantee>
  </user-data-constraint>
</security-constraint>

One more step and our web.xml file will be complete. In this step we define an error page for the HTTP 401 error code. This error code means that the application server is unable to perform the requested action due to a negative authorization result. The following snippet shows the required elements to define this error page:

<error-page>
  <error-code>401</error-code>
  <location>/AccessRestricted.html</location>
</error-page>

Now that we are finished with declaring the security, we can create the conversion pages, and after creating these pages we can start on the Business layer and its security requirements.
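Before moving on, note that declarative constraints can also be double-checked programmatically. As a hedged sketch (the servlet class below is hypothetical and not part of the sample application; the role name matches the <security-role> declarations above), the standard isUserInRole() call from the Servlet API repeats the manager-only rule inside the code path itself:

import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Defence-in-depth sketch: the same rule the <security-constraint>
// elements declare, repeated programmatically.
public class GuardedConverter extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        if ("toInch".equalsIgnoreCase(request.getParameter("method"))
                && !request.isUserInRole("manager_role")) {
            response.sendError(HttpServletResponse.SC_UNAUTHORIZED);
            return;
        }
        // ...otherwise delegate to the session bean exactly as the
        // Converter servlet shown earlier does...
    }
}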
Specifying the security realm
Up to this point we have defined all the constraints our application requires, but we still need to follow one more step to complete the application's security configuration: specifying the security realm and authentication method. We should specify FORM authentication and, per the application description, authentication must happen against the company-wide LDAP server. Here we are going to use the LDAP security realm LDAPRealm. We need to import a new LDIF file into our LDAP server, which contains the group and user definitions required for this article. To import the file we can use the following command, assuming that you downloaded the source code bundle from https://www.packtpub.com//sites/default/files/downloads/9386_Code.zip and have it extracted:

import-ldif --ldifFile path/to/chapter03/users.ldif --backendID userRoot --clearBackend --hostname 127.0.0.1 --port 4444 --bindDN cn=gf cn=admin --bindPassword admin --trustAll --noPropertiesFile

The following table shows the users and groups that are defined inside the users.ldif file:

Username and password    Group membership
james/james              manager, employee
meera/meera              employee

We used OpenDS for the realm data storage, with two users: one in the employee group and the other in both the manager and employee groups. To configure the authentication realm we need to include the following snippet in the web.xml file:

<login-config>
  <auth-method>FORM</auth-method>
  <realm-name>LDAPRealm</realm-name>
  <form-login-config>
    <form-login-page>/auth/login.html</form-login-page>
    <form-error-page>/auth/loginError.html</form-error-page>
  </form-login-config>
</login-config>

If we look at our Web and EJB modules as separate modules, we must specify the role mappings for each module separately using the GlassFish deployment descriptors, which are sun-web.xml and sun-ejb-jar.xml. But we are going to bundle our modules as an Enterprise Application Archive (EAR) file, so we can use the GlassFish deployment descriptor for enterprise applications to define the role mapping in one place and let all modules use that definition. The following listing shows the role and group mappings in the sun-application.xml file:

<sun-application>
  <security-role-mapping>
    <role-name>manager_role</role-name>
    <group-name>manager</group-name>
  </security-role-mapping>
  <security-role-mapping>
    <role-name>employee_role</role-name>
    <group-name>employee</group-name>
  </security-role-mapping>
  <realm>LDAPRealm</realm>
</sun-application>

The security-role-mapping element we used in sun-application.xml has the same schema as the security-role-mapping element of the sun-web.xml and sun-ejb-jar.xml files. You may have noticed that we have a realm element in addition to the role mapping elements: we can use the realm element of sun-application.xml to specify the default authentication realm for the entire application, instead of specifying it for each module separately.
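As a final sanity check of the FORM login flow, a tiny client can drive the j_security_check exchange described earlier. Everything here is an assumption for illustration: the context path /Conversion, the host and port, and the use of plain HTTP (in practice the CONFIDENTIAL transport guarantee above would redirect the login to HTTPS):

import java.io.IOException;
import java.io.OutputStream;
import java.net.CookieHandler;
import java.net.CookieManager;
import java.net.HttpURLConnection;
import java.net.URL;

// Hypothetical smoke test for container-managed FORM login.
public class FormLoginSmokeTest {
    public static void main(String[] args) throws IOException {
        // Keep the JSESSIONID cookie across requests, as a browser would.
        CookieHandler.setDefault(new CookieManager());

        // 1. Request a protected page so the container saves it and
        //    serves the custom login form.
        URL protectedPage = new URL("http://localhost:8080/Conversion/jsp/toInch.jsp");
        ((HttpURLConnection) protectedPage.openConnection()).getInputStream().close();

        // 2. Post the credentials to the container's built-in login action.
        HttpURLConnection login = (HttpURLConnection)
                new URL("http://localhost:8080/Conversion/j_security_check").openConnection();
        login.setDoOutput(true);
        login.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        OutputStream out = login.getOutputStream();
        out.write("j_username=james&j_password=james".getBytes("UTF-8"));
        out.close();

        // A 2xx/3xx response here means the container accepted the credentials.
        System.out.println("Login response code: " + login.getResponseCode());
    }
}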
Summary
In this article series, we covered the following topics:
- Analyzing Java EE application security requirements
- Including security requirements in Java EE application design
- Developing a secure Business layer using EJBs
- Developing a secure Presentation layer using JSP and Servlets
- Configuring deployment descriptors of Java EE applications
- Specifying the security realm for enterprise applications
- Developing a secure application client module
- Configuring the Application Client Container

We learnt how to develop a secure Java EE application with all the standard modules, including Web, EJB, and application client modules.