While, at the time of writing, there are ongoing arguments in the US courts over who invented the likes of dragging images, there is no doubt that a key feature of iOS is its support for gestures. Put simply, whenever you tap the screen to start an app, or select part of an image to enlarge it, you are using a gesture.
A gesture (in iOS terms) is any touch interaction between the user and the device. iOS 6 makes six gestures available to the user. These gestures, along with brief explanations, are listed in the following table:
| Class | Name and type | Gesture |
| --- | --- | --- |
| UIPanGestureRecognizer | PanGesture; Continuous type | Pan images or over-sized views by dragging across the screen |
| UISwipeGestureRecognizer | SwipeGesture; Discrete type | Similar to panning, except it is a swipe |
| UITapGestureRecognizer | TapGesture; Discrete type | Tap the screen a number of times (configurable) |
| UILongPressGestureRecognizer | LongPressGesture; Continuous type | Hold a finger down on the screen |
| UIPinchGestureRecognizer | PinchGesture; Continuous type | Zoom by pinching an area and moving your fingers in or out |
| UIRotationGestureRecognizer | RotationGesture; Continuous type | Rotate by moving your fingers in opposite directions |
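As a quick illustration of one of these classes, a long press can be recognized in only a few lines of code. This is a sketch, placed inside ViewDidLoad; the one-second MinimumPressDuration and the "pressed" selector name are my own choices for illustration, not values from the examples later in this section:

```csharp
// Inside ViewDidLoad: recognize a press held for at least
// one second anywhere on the view.
UILongPressGestureRecognizer longPress = new UILongPressGestureRecognizer()
{
    MinimumPressDuration = 1.0 // in seconds; a hypothetical value
};
longPress.AddTarget(this, new Selector("pressed"));
View.AddGestureRecognizer(longPress);
```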
Gestures can be added in code or via Xcode. The available gestures are listed in the following screenshot, with the rest of the widgets on the right-hand side of the designer:
To add a gesture, drag the gesture you want to use under the view on the View bar (shown in the following screenshot):
Design the UI as you want, then hold down the Ctrl key and drag from the gesture to the object you want it to recognize. In my example, the gesture should be recognized anywhere on the screen, so the target is the view itself. Once you have connected the gesture to its target, you will see the gesture's configurable options.
The Taps field is the number of taps required before the Recognizer is triggered, and the Touches field is the number of fingers that must be touching the screen for the Recognizer to be triggered.
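The same two options can also be set in code. The following sketch (for a recognizer created programmatically rather than in the designer) configures a tap recognizer that fires on a two-finger double tap:

```csharp
// A double-tap, two-finger tap recognizer, configured in code.
// The property names mirror the Taps and Touches fields in the designer.
UITapGestureRecognizer doubleTap = new UITapGestureRecognizer()
{
    NumberOfTapsRequired = 2,    // the Taps field
    NumberOfTouchesRequired = 2  // the Touches field
};
```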
When you come to connect up the UI to your code, the gesture must be connected as well.
When using Xcode, it is simple to code gestures. The outlet defined in the Xcode design for the tap gesture is called tapGesture and is used in the following code:
private int tapped = 0;

public override void ViewDidLoad()
{
    base.ViewDidLoad();
    tapGesture.AddTarget(this, new Selector("screenTapped"));
    View.AddGestureRecognizer(tapGesture);
}

[Export("screenTapped")]
public void SingleTap(UIGestureRecognizer s)
{
    tapped++;
    lblCounter.Text = tapped.ToString();
}
There is nothing really amazing in this code; it simply displays how many times the screen has been tapped.
The Selector method is called when a tap is detected. The method name itself makes no difference, as long as the name passed to Selector matches the name in the Export attribute.
When the gesture types were described in the earlier table, each was given a type. The type reflects the number of messages sent to the Selector method: a discrete gesture generates a single message, whereas a continuous gesture generates multiple messages. Continuous gestures therefore need a more complex Selector method, which must check the State of the gesture to decide what to do with each message and whether the gesture has completed.
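A discrete gesture, by contrast, needs no state checking at all; a single message arrives once the gesture is recognized. Here is a minimal sketch for a swipe (the "screenSwiped" selector, the SwipeLeft method name, and the leftward direction are my own illustrative choices, not part of the earlier example):

```csharp
public override void ViewDidLoad()
{
    base.ViewDidLoad();
    UISwipeGestureRecognizer swipeGesture = new UISwipeGestureRecognizer()
    {
        Direction = UISwipeGestureRecognizerDirection.Left
    };
    swipeGesture.AddTarget(this, new Selector("screenSwiped"));
    View.AddGestureRecognizer(swipeGesture);
}

[Export("screenSwiped")]
public void SwipeLeft(UIGestureRecognizer s)
{
    // A discrete gesture: called once per recognized swipe,
    // so there is no need to inspect s.State here.
    Console.WriteLine("Swiped left");
}
```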
You do not have to use Xcode to add a gesture. Performing the same task in code as the preceding Xcode example did is easy:
UITapGestureRecognizer tapGesture = new UITapGestureRecognizer()
{
    NumberOfTapsRequired = 1
};
The rest of the code, from AddTarget onwards, can then be used unchanged.
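As an aside, MonoTouch also lets you pass a lambda to the recognizer's constructor instead of wiring up a Selector and an Export attribute. The following sketch shows the same tap counter written in that alternative style (it assumes the same lblCounter label as the earlier example):

```csharp
// An alternative, Selector-free version of the tap counter,
// using the Action-based constructor provided by MonoTouch.
int tapped = 0;
UITapGestureRecognizer tapGesture = new UITapGestureRecognizer(() =>
{
    tapped++;
    lblCounter.Text = tapped.ToString();
}) { NumberOfTapsRequired = 1 };
View.AddGestureRecognizer(tapGesture);
```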
The following code, for a Pinch Recognizer, shows simple rescaling. There are a couple of other states, which I'll explain after the code. The only differences in the designer code are a UIImageView in place of the label and a UIPinchGestureRecognizer class in place of the UITapGestureRecognizer class.
public override void ViewDidLoad()
{
    base.ViewDidLoad();
    uiImageView.Image = UIImage.FromFile("graphics/image.jpg").Scale(new SizeF(160f, 160f));
    pinchGesture.AddTarget(this, new Selector("screenTapped"));
    uiImageView.AddGestureRecognizer(pinchGesture);
}

[Export("screenTapped")]
public void SingleTap(UIGestureRecognizer s)
{
    UIPinchGestureRecognizer pinch = (UIPinchGestureRecognizer)s;
    float scale = 0f;
    PointF location;
    switch (s.State)
    {
        case UIGestureRecognizerState.Began:
            Console.WriteLine("Pinch begun");
            location = s.LocationInView(s.View);
            break;
        case UIGestureRecognizerState.Changed:
            Console.WriteLine("Pinch value changed");
            scale = pinch.Scale;
            uiImageView.Image = UIImage.FromFile("graphics/image.jpg").Scale(new SizeF(160f, 160f), scale);
            break;
        case UIGestureRecognizerState.Cancelled:
            Console.WriteLine("Pinch cancelled");
            uiImageView.Image = UIImage.FromFile("graphics/image.jpg").Scale(new SizeF(160f, 160f));
            scale = 0f;
            break;
        case UIGestureRecognizerState.Recognized:
            Console.WriteLine("Pinch recognized");
            break;
    }
}
The following table lists other Recognizer states, along with two pan-specific values (strictly, Translation and Velocity are properties of the pan gesture rather than states, but they are worth knowing about here):

| State | Description | Notes |
| --- | --- | --- |
| Possible | Default state; the gesture has not yet been recognized | Used by all gestures |
| Failed | The gesture failed | No messages are sent for this state |
| Translation | Direction of the pan | Used in the pan gesture |
| Velocity | Speed of the pan | Used in the pan gesture |
In addition to these, it should be noted that discrete types only use Possible and Recognized states.
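To show Translation and Velocity in use, here is a sketch of a pan handler. The "screenPanned" selector and Panned method name are my own, and the recognizer is assumed to have been wired up with AddTarget and AddGestureRecognizer as in the earlier examples:

```csharp
[Export("screenPanned")]
public void Panned(UIGestureRecognizer s)
{
    UIPanGestureRecognizer pan = (UIPanGestureRecognizer)s;
    if (s.State == UIGestureRecognizerState.Changed)
    {
        // Translation is how far the finger has moved from the start
        // point; Velocity is how fast it is moving, both expressed in
        // the coordinates of the given view.
        PointF translation = pan.TranslationInView(s.View);
        PointF velocity = pan.VelocityInView(s.View);
        Console.WriteLine("Moved {0},{1} at {2},{3} points/sec",
            translation.X, translation.Y, velocity.X, velocity.Y);
    }
}
```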
Gestures can certainly add a lot to your apps. They let the user speed around an image, move about a map, enlarge and reduce, and select areas of anything on a view. This flexibility underpins why iOS devices are recognized as extremely versatile tools for manipulating images, video, and anything else on-screen.