
Deployment and DevOps

  • 16 min read
  • 14 Oct 2016


In this article by Makoto Hashimoto and Nicolas Modrzyk, the authors of the book Clojure Programming Cookbook, we will cover the recipe Clojure on Amazon Web Services.


Clojure on Amazon Web Services

This recipe is a standalone dish where you can learn how to combine the elegance of Clojure with Amazon Web Services (AWS).

AWS started in 2006 and is used by many businesses for its easy-to-use web services. This style of on-demand service is becoming more and more popular: you can use compute resources and software services on demand, without having to prepare hardware or install software yourself.

You will mostly make use of the amazonica library, a comprehensive Clojure client for the entire Amazon AWS set of APIs. This library wraps the Amazon AWS APIs and supports most AWS services, including EC2, S3, Lambda, Kinesis, Elastic Beanstalk, Elastic MapReduce, and Redshift.

This recipe has received a lot of its content and love from Robin Birtle, a leading member of the Clojure Community in Japan.

Getting ready

You need an AWS account and credentials to use AWS, so this recipe starts by showing you how to do the setup and acquire the necessary keys to get started.

Signing up on AWS

You need to sign up for AWS if you don't have an account yet. In that case, go to https://aws.amazon.com, click on Sign In to the Console, and follow the instructions for creating your account:

[Screenshot: AWS account sign-up page]

To complete the sign-up, enter a valid credit card number and a phone number.

Getting access key and secret access key

To call the API, you now need your AWS access key and secret access key. Go to the AWS console, click on your name in the top right corner of the screen, and select Security Credentials, as shown in the following screenshot:

[Screenshot: Security Credentials menu]

Select Access Keys (Access Key ID and Secret Access Key), as shown in the following screenshot:

[Screenshot: Access Keys (Access Key ID and Secret Access Key) section]

Then, the following screen appears; click on New Access Key:

[Screenshot: Create New Access Key button]

You can see your access key and secret access key, as shown in the following screenshot:

[Screenshot: newly created access key and secret access key]

Copy and save these strings for later use.
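Rather than pasting the keys into source code, a common pattern is to read them from environment variables. The following is a minimal sketch, assuming the conventional AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_REGION variable names; the resulting map has the same shape as the credential map used later in this recipe:

```clojure
;; Build a credential map from environment variables instead of
;; hard-coding secrets in source. The AWS_* names follow the usual
;; AWS conventions; :endpoint falls back to a default region.
(defn env-credentials []
  {:access-key (System/getenv "AWS_ACCESS_KEY_ID")
   :secret-key (System/getenv "AWS_SECRET_ACCESS_KEY")
   :endpoint   (or (System/getenv "AWS_REGION") "ap-northeast-1")})
```

You could then apply these values with core/defcredential instead of typing the strings at the REPL.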

Setting up dependencies in your project.clj

Let's add the amazonica library to your project.clj and restart your REPL:

:dependencies [[org.clojure/clojure "1.8.0"]
               [amazonica "0.3.67"]]

How to do it…

From there on, we will go through some sample usage of the core Amazon services, accessed with Clojure, and the amazonica library. The three main ones we will review are as follows:

  • EC2, Amazon's Elastic Compute Cloud, which allows you to run virtual machines on Amazon's cloud
  • S3, Simple Storage Service, which gives you cloud-based storage
  • SQS, Simple Queue Service, which gives you cloud-based message queuing

Let's go through each of these one by one.

Using EC2

Let's assume you have an EC2 micro instance in Tokyo region:

[Screenshot: EC2 micro instance in the Tokyo region]

First of all, we will declare the core and ec2 namespaces of amazonica for use:

(ns aws-examples.ec2-example
  (:require [amazonica.aws.ec2 :as ec2]
            [amazonica.core :as core]))

We will set the access key and secret access key so that the AWS client API can access AWS; core/defcredential does this as follows:

(core/defcredential "Your Access Key" "Your Secret Access Key" "your region")
;;=> {:access-key "Your Access Key", :secret-key "Your Secret Access Key", :endpoint "your region"}

The region you specify will be ap-northeast-1, ap-south-1, us-west-2, and so on. To get the full list of regions, use ec2/describe-regions:

(ec2/describe-regions)
;;=> {:regions [{:region-name "ap-south-1", :endpoint "ec2.ap-south-1.amazonaws.com"}
;;=>                       .....
;;=>            {:region-name "ap-northeast-2", :endpoint "ec2.ap-northeast-2.amazonaws.com"}
;;=>            {:region-name "ap-northeast-1", :endpoint "ec2.ap-northeast-1.amazonaws.com"}
;;=>                       .....           
;;=>            {:region-name "us-west-2", :endpoint "ec2.us-west-2.amazonaws.com"}]}

ec2/describe-instances returns rather long information, as follows:

(ec2/describe-instances)
;;=> {:reservations [{:reservation-id "r-8efe3c2b", :requester-id "226008221399",
;;=>                  :owner-id "182672843130", :group-names [], :groups [], ....

To get only the necessary information about instances, we define the following get-instances-info function:

(defn get-instances-info []
  (let [inst (ec2/describe-instances)]
    (->>
     (mapcat :instances (inst :reservations))
     (map
      #(vector
        [:node-name (->> (:tags %) (filter (fn [x] (= (:key x) "Name"))) first :value)]
        [:status (get-in % [:state :name])]
        [:instance-id (:instance-id %)]
        [:private-dns-name (:private-dns-name %)]
        [:global-ip (-> % :network-interfaces first :private-ip-addresses first :association :public-ip)]
        [:private-ip (-> % :network-interfaces first :private-ip-addresses first :private-ip-address)]))
     (map #(into {} %))
     (sort-by :node-name))))
;;=> #'aws-examples.ec2-example/get-instances-info

Let's try the function:

(get-instances-info)
;;=> ({:node-name "ECS Instance - amazon-ecs-cli-setup-my-cluster",
;;=>   :status "running",
;;=>   :instance-id "i-a1257a3e",
;;=>   :private-dns-name "ip-10-0-0-212.ap-northeast-1.compute.internal",
;;=>   :global-ip "54.199.234.18",
;;=>   :private-ip "10.0.0.212"}
;;=>  {:node-name "EcsInstanceAsg",
;;=>   :status "terminated",
;;=>   :instance-id "i-c5bbef5a",
;;=>   :private-dns-name "",
;;=>   :global-ip nil,
;;=>   :private-ip nil})

As the preceding example function shows, we can obtain the list of instance IDs, so we can start and stop instances using ec2/start-instances and ec2/stop-instances accordingly:

(ec2/start-instances :instance-ids '("i-c5bbef5a"))
;;=> {:starting-instances
;;=>  [{:previous-state {:code 80, :name "stopped"},
;;=>    :current-state {:code 0, :name "pending"},
;;=>    :instance-id "i-c5bbef5a"}]}

(ec2/stop-instances :instance-ids '("i-c5bbef5a"))
;;=> {:stopping-instances
;;=>  [{:previous-state {:code 16, :name "running"},
;;=>    :current-state {:code 64, :name "stopping"},
;;=>    :instance-id "i-c5bbef5a"}]}

Using S3

Amazon S3 is secure, durable, and scalable storage in the AWS cloud. It is easy to use for developers and other users, and it provides high durability and availability at low cost: the durability is 99.999999999% and the availability is 99.99%.

Let's create S3 buckets named makoto-bucket-1, makoto-bucket-2, and makoto-bucket-3. First require the s3 namespace, then create the buckets as follows:

(require '[amazonica.aws.s3 :as s3])

(s3/create-bucket "makoto-bucket-1")
;;=> {:name "makoto-bucket-1"}
(s3/create-bucket "makoto-bucket-2")
;;=> {:name "makoto-bucket-2"}

(s3/create-bucket "makoto-bucket-3")
;;=> {:name "makoto-bucket-3"}

s3/list-buckets returns bucket information:

(s3/list-buckets)
;;=> [{:creation-date #object[org.joda.time.DateTime 0x6a09e119 "2016-08-01T07:01:05.000+09:00"],
;;=>   :owner
;;=>   {:id "3d6e87f691897059c23bcfb88b17da55f0c9aa02cc2a44e461f1594337059d27",
;;=>    :display-name "tokoma1"},
;;=>   :name "makoto-bucket-1"}
;;=>  {:creation-date #object[org.joda.time.DateTime 0x7392252c "2016-08-01T17:35:30.000+09:00"],
;;=>   :owner
;;=>   {:id "3d6e87f691897059c23bcfb88b17da55f0c9aa02cc2a44e461f1594337059d27",
;;=>    :display-name "tokoma1"},
;;=>   :name "makoto-bucket-2"}
;;=>  {:creation-date #object[org.joda.time.DateTime 0x4d59b4cb "2016-08-01T17:38:59.000+09:00"],
;;=>   :owner
;;=>   {:id "3d6e87f691897059c23bcfb88b17da55f0c9aa02cc2a44e461f1594337059d27",
;;=>    :display-name "tokoma1"},
;;=>   :name "makoto-bucket-3"}]

We can see the three buckets in the AWS console, as shown in the following screenshot:

[Screenshot: three buckets listed in the S3 console]

Let's delete two of the three buckets using s3/delete-bucket, and then list the remaining buckets:

(s3/delete-bucket "makoto-bucket-2")
;;=> nil
(s3/delete-bucket "makoto-bucket-3")
;;=> nil

(s3/list-buckets)
;;=> [{:creation-date #object[org.joda.time.DateTime 0x56387509 "2016-08-01T07:01:05.000+09:00"],
;;=> :owner {:id "3d6e87f691897059c23bcfb88b17da55f0c9aa02cc2a44e461f1594337059d27", :display-name "tokoma1"}, :name "makoto-bucket-1"}]

We can see only one bucket now, as shown in the following screenshot:

[Screenshot: the single remaining bucket in the S3 console]

Now we will demonstrate how to send your local data to S3. s3/put-object uploads a file's content to the specified bucket and key. The following code uploads /etc/hosts to makoto-bucket-1:

(s3/put-object
 :bucket-name "makoto-bucket-1"
 :key "test/hosts"
 :file (java.io.File. "/etc/hosts"))
;;=> {:requester-charged? false, :content-md5 "HkBljfktNTl06yScnMRsjA==",
;;=> :etag "1e40658df92d353974eb249c9cc46c8c", :metadata {:content-disposition nil,
;;=> :expiration-time-rule-id nil, :user-metadata nil, :instance-length 0, :version-id nil,
;;=> :server-side-encryption nil, :etag "1e40658df92d353974eb249c9cc46c8c", :last-modified nil,
;;=> :cache-control nil, :http-expires-date nil, :content-length 0, :content-type nil,
;;=> :restore-expiration-time nil, :content-encoding nil, :expiration-time nil, :content-md5 nil,
;;=> :ongoing-restore nil}}

s3/list-objects lists the objects in a bucket as follows:

(s3/list-objects :bucket-name "makoto-bucket-1")
;;=> {:truncated? false, :bucket-name "makoto-bucket-1", :max-keys 1000, :common-prefixes [],
;;=> :object-summaries [{:storage-class "STANDARD", :bucket-name "makoto-bucket-1",
;;=> :etag "1e40658df92d353974eb249c9cc46c8c",
;;=> :last-modified #object[org.joda.time.DateTime 0x1b76029c "2016-08-01T07:01:16.000+09:00"],
;;=> :owner {:id "3d6e87f691897059c23bcfb88b17da55f0c9aa02cc2a44e461f1594337059d27",
;;=> :display-name "tokoma1"}, :key "test/hosts", :size 380}]}

To obtain the contents of objects in buckets, use s3/get-object:

(s3/get-object :bucket-name "makoto-bucket-1" :key "test/hosts")
;;=> {:bucket-name "makoto-bucket-1", :key "test/hosts",
;;=> :input-stream #object[com.amazonaws.services.s3.model.S3ObjectInputStream 0x24f810e9
;;=> ......
;;=> :last-modified #object[org.joda.time.DateTime 0x79ad1ca9 "2016-08-01T07:01:16.000+09:00"],
;;=> :cache-control nil, :http-expires-date nil, :content-length 380, :content-type "application/octet-stream",
;;=> :restore-expiration-time nil, :content-encoding nil, :expiration-time nil, :content-md5 nil,
;;=> :ongoing-restore nil}}

The result is a map; the content is stream data, found under the :object-content key. To get the result as a string, we use slurp as follows:

(slurp (:object-content (s3/get-object :bucket-name "makoto-bucket-1" :key "test/hosts")))
;;=> "127.0.0.1\tlocalhost\n127.0.1.1\tphenix\n\n# The following lines are desirable for IPv6 capable hosts\n::1     ip6-localhost ip6-loopback\nfe00::0 ip6-localnet\nff00::0 ip6-mcastprefix\nff02::1 ip6-allnodes\nff02::2 ip6-allrouters\n\n52.8.30.189 my-cluster01-proxy1 \n52.8.169.10 my-cluster01-master1 \n52.8.198.115 my-cluster01-slave01 \n52.9.12.12 my-cluster01-slave02\n\n52.8.197.100 my-node01\n"
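Once you have the file contents as a string, ordinary Clojure can take over. As an illustration, here is a small helper that parses a hosts-file string like the one above into IP/hostname pairs, skipping comments and blank lines:

```clojure
(require '[clojure.string :as str])

;; Parse a hosts-file string into [ip [hostname ...]] pairs,
;; dropping comments (everything after #) and blank lines.
(defn parse-hosts [s]
  (for [line (str/split-lines s)
        :let [entry (str/trim (first (str/split line #"#" 2)))]
        :when (seq entry)]
    (let [[ip & names] (str/split entry #"\s+")]
      [ip (vec names)])))
```

For example, (parse-hosts (slurp (:object-content ...))) would give you the cluster node addresses as data you can work with.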

Using Amazon SQS

Amazon SQS is a high-performance, highly available, and scalable queue service. We will demonstrate how easy it is to handle messages on SQS queues using Clojure:

(ns aws-examples.sqs-example
  (:require [amazonica.core :as core]
            [amazonica.aws.sqs :as sqs]))

To create a queue, you can use sqs/create-queue as follows:

(sqs/create-queue :queue-name "makoto-queue"
                  :attributes
                  {:VisibilityTimeout 3000
                   :MaximumMessageSize 65536
                   :MessageRetentionPeriod 1209600
                   :ReceiveMessageWaitTimeSeconds 15})
;;=> {:queue-url "https://sqs.ap-northeast-1.amazonaws.com/864062283993/makoto-queue"}

To get information about a queue, use sqs/get-queue-attributes as follows:

(sqs/get-queue-attributes "makoto-queue")
;;=> {:QueueArn "arn:aws:sqs:ap-northeast-1:864062283993:makoto-queue", ...

You can configure a dead letter queue using sqs/assign-dead-letter-queue as follows:

(sqs/create-queue "DLQ")
;;=> {:queue-url "https://sqs.ap-northeast-1.amazonaws.com/864062283993/DLQ"}
(sqs/assign-dead-letter-queue (sqs/find-queue "makoto-queue")
                              (sqs/find-queue "DLQ") 10)
;;=> nil

Let's list the queues we have defined:

(sqs/list-queues)
;;=> {:queue-urls
;;=>  ["https://sqs.ap-northeast-1.amazonaws.com/864062283993/DLQ"
;;=> "https://sqs.ap-northeast-1.amazonaws.com/864062283993/makoto-queue"]}

The following screenshot shows the SQS console:

[Screenshot: SQS console listing the queues]

Let's examine the URLs of the queues:

(sqs/find-queue "makoto-queue")
;;=> "https://sqs.ap-northeast-1.amazonaws.com/864062283993/makoto-queue"
(sqs/find-queue "DLQ")
;;=> "https://sqs.ap-northeast-1.amazonaws.com/864062283993/DLQ"

To send messages, we use sqs/send-message:

(sqs/send-message (sqs/find-queue "makoto-queue") "hello sqs from Clojure")
;;=> {:md5of-message-body "00129c8cc3c7081893765352a2f71f97", :message-id "690ddd68-a2f6-45de-b6f1-164eb3c9370d"}

To receive messages, we use sqs/receive-message:

(sqs/receive-message "makoto-queue")
;;=> {:messages [
;;=>             {:md5of-body "00129c8cc3c7081893765352a2f71f97",
;;=>              :receipt-handle "AQEB.....", :message-id "bd56fea8-4c9f-4946-9521-1d97057f1a06",
;;=>              :body "hello sqs from Clojure"}]}
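Each received message carries a receipt handle, which identifies that particular delivery when you later delete the message. As a small, testable sketch, here is a helper that pulls the body and receipt handle out of a receive-message result map shaped like the one above:

```clojure
;; Extract body and receipt handle from a receive-message result map;
;; the receipt handle is what you would pass back to SQS to delete
;; the message once it has been processed.
(defn message-summaries [result]
  (map #(select-keys % [:body :receipt-handle]) (:messages result)))
```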

To remove all messages in your queues, we use sqs/purge-queue:

(sqs/purge-queue :queue-url (sqs/find-queue "makoto-queue"))
;;=> nil

To delete queues, we use sqs/delete-queue:

(sqs/delete-queue "makoto-queue")
;;=> nil
(sqs/delete-queue "DLQ")
;;=> nil

Serverless Clojure with AWS Lambda

Lambda is an AWS product that allows you to run Clojure code without the hassle and expense of setting up and maintaining a server environment. Behind the scenes, there are still servers involved, but as far as you are concerned, it is a serverless environment: upload a JAR and you are good to go. Code running on Lambda is invoked in response to an event, such as a file being uploaded to S3, or according to a specified schedule. In production environments, Lambda is normally used in a wider AWS deployment that includes standard server environments, to handle discrete computational tasks, particularly those that benefit from Lambda's horizontal scaling, which just happens with no configuration required.

For Clojurians working on personal projects, Lambda is a wonderful combination of power and limitation. Just how far can you hack Lambda given the constraints imposed by AWS?

Clojure namespace helloworld

Start off with a clean, empty project generated using lein new. From there, in your IDE of choice, configure a package and a new Clojure source file. In the following example, the package is com.sakkam and the source file uses the Clojure namespace helloworld. The entry point to your Lambda code is a Clojure function that is exposed as a method of a Java class using Clojure's gen-class.

Similar to use and require, the gen-class function can be included in the Clojure ns definition, as follows, or specified separately. You can use any name you want for the handler function, but the prefix must be a hyphen unless an alternate prefix is specified as part of the :methods definition:

(ns com.sakkam.lambda.helloworld
  (:gen-class
   :methods [^:static [myhandler [String] String]]))

(defn -myhandler [s]
  (str "Hello, " s))

From the command line, use lein uberjar to create a JAR that can be uploaded to AWS Lambda.

Hello World – the AWS part

Getting your Hello World to work is now a matter of creating a new Lambda within AWS, uploading your JAR, and configuring your handler.

Hello Stream

The handler method we used in our Hello World Lambda function was coded directly and could be extended to accept custom Java classes as part of the method signature. However, for more complex Java integrations, implementing one of AWS's standard interfaces for Lambda is both straightforward and feels more like idiomatic Clojure. The following example replaces our own definition of a handler method with an implementation of a standard interface that is provided as part of the aws-lambda-java-core library. First of all, add the dependency [com.amazonaws/aws-lambda-java-core "1.0.0"] into your project.clj. While you are modifying your project.clj, also add in the dependency for [org.clojure/data.json "0.2.6"] since we will be manipulating JSON formatted objects as part of this exercise. Then, either create a new Clojure namespace or modify your existing one so that it looks like the following (the handler function must be named -handleRequest since handleRequest is specified as part of the interface):

(ns aws-examples.lambda-example
  (:gen-class
   :implements [com.amazonaws.services.lambda.runtime.RequestStreamHandler])
  (:require [clojure.java.io :as io]
            [clojure.data.json :as json]))

(defn -handleRequest [this is os context]
  (let [w (io/writer os)
        parameters (json/read (io/reader is) :key-fn keyword)]
    (println "Lambda Hello Stream Output")
    (println "this class: " (class this))
    (println "is class:" (class is))
    (println "os class:" (class os))
    (println "context class:" (class context))
    (println "Parameters are " parameters)
    (.flush w)))

Use lein uberjar again to create a JAR file. Since we have an existing Lambda function in AWS, we can overwrite the JAR used in the Hello World example. Since the handler function name has changed, we must modify our Lambda configuration to match. This time, the default test that provides parameters in JSON format should work as is, and the result appears in the Lambda logs.

We can very easily get a more interesting test of Hello Stream by configuring this Lambda to run whenever a file is uploaded to S3. At the Lambda management page, choose the Event Sources tab, click on Add Event, and choose an S3 bucket to which you can easily add a file. Now, upload a file to the specified S3 bucket and then navigate to the logs of the Hello World Lambda function. You will find that Hello World has been automatically invoked, and a fairly complicated object that represents the uploaded file is supplied as a parameter to our Lambda function.
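The complicated object passed to the Lambda follows the shape of AWS's S3 event notification JSON. Assuming it has been parsed with keywordized keys, as in the handler above, a small sketch for extracting the bucket and key might look like this (the exact event shape is an assumption; check the real payload in your logs):

```clojure
;; Sketch: extract bucket name and object key from a keywordized
;; S3 event notification map (shape assumed from AWS's S3 event JSON).
(defn s3-event->bucket-and-key [event]
  (let [record (first (:Records event))]
    {:bucket (get-in record [:s3 :bucket :name])
     :key    (get-in record [:s3 :object :key])}))
```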

Real-world Lambdas

To graduate from a Hello World Lambda to real-world Lambdas, chances are you are going to need richer integration with other AWS facilities. As a minimum, you will probably want to write a file to an S3 bucket or post a notification to SNS. Amazon provides an SDK that makes this integration straightforward for developers using standard Java. For Clojurians, using the Amazonica wrapper is a very fast and easy way to achieve the same.

How it works…

Here, we will explain how AWS works.

What Is Amazon EC2?

Using EC2, we don't need to buy hardware or install operating systems. Amazon provides various types of instances for customers' use cases. Each instance type has a different combination of CPU, memory, storage, and networking capacity.

Some instance types are given in the following table. You can select appropriate instances according to the characteristics of your application.

Instance type  Description
M4             Designed for general-purpose computing; provides a balance of CPU, memory, and network bandwidth
C4             Designed for CPU-intensive applications; offers the highest CPU performance at the lowest cost
R3             Designed for memory-intensive applications
G2             Has NVIDIA GPUs; used for graphics applications and GPU-computing applications such as deep learning

The following table shows the model variations of the M4 instance type. You can choose the model that best fits your needs.

Model        vCPU  RAM (GiB)  EBS bandwidth (Mbps)
m4.large        2      8        450
m4.xlarge       4     16        750
m4.2xlarge      8     32      1,000
m4.4xlarge     16     64      2,000
m4.10xlarge    40    160      4,000

Amazon S3

Amazon S3 is storage for the cloud. It provides a simple web interface that allows you to store and retrieve data. The S3 API is easy to use yet secure, and the service is scalable, reliable, fast, and inexpensive.

Buckets and Keys

Buckets are containers for objects stored in Amazon S3; every object is stored in a bucket. A bucket name is unique across all regions in the world, so bucket names are the top-level identifiers of S3 and the units of billing and access control.

Keys are the unique identifiers for an object within a bucket. Every object in a bucket has exactly one key. Keys are second-level identifiers and must be unique within a bucket. To identify an object, you use the combination of bucket name and key.
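Since the bucket name plus key uniquely identifies an object, you can derive an object's URL from that pair. Here is a simple sketch using the virtual-hosted-style global endpoint (an assumption; some buckets require a region-specific endpoint):

```clojure
;; Compose a virtual-hosted-style S3 URL from a bucket name and key.
(defn s3-object-url [bucket key]
  (str "https://" bucket ".s3.amazonaws.com/" key))
```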

Objects

Objects are accessed by bucket name and key. Objects consist of data and metadata. Metadata is a set of name-value pairs that describe the characteristics of the object, such as the date last modified and the content type. Objects can have multiple versions of their data.

There's more…

It is clearly impossible to review all the different APIs for all the different services exposed via the Amazonica library, but you have probably gotten a feeling of the tremendous power in your hands right now. (Don't forget to give that credit card back to your boss now…)

Some other examples of Amazon services are as follows:

  • Amazon IoT: This offers a way for connected devices to easily and securely interact with cloud applications and other devices.
  • Amazon Kinesis: This gives you ways of easily loading massive volumes of streaming data into AWS and analyzing them with streaming techniques.

Summary

We hope you enjoyed this appetizer to the book Clojure Programming Cookbook, which presents a set of progressive readings to improve your Clojure skills and make Clojure your de facto everyday language for professional and efficient work.

This book covers different topics of generic programming, always to the point, with some fun, so that each recipe feels less like a classroom and more like an enjoyable read, with challenging exercises left to the reader to gradually build up skills.

See you in the book!
