
Tech Guides - Servers

5 Articles

Is serverless architecture a good choice for app development?

Mehul Rajput
11 Oct 2019
6 min read
App development has evolved rapidly in recent years. With new demands and expectations from businesses and users, trends like cloud have helped developers to be more productive and to build faster, more reliable, and more secure applications. But there's no end to evolution, and serverless is arguably the next step for application development. But is a serverless architecture the right choice?

What is a serverless architecture?

When you hear the word serverless, you might assume that it means no servers. In fact, it refers to eliminating the need to manage servers yourself; that responsibility shifts to your cloud provider. Put simply, the constituent parts of an application are divided between multiple servers, with no need for the application owner or manager to create or manage the infrastructure that supports them.

Instead of running off a server, a serverless application runs off functions. These are essentially actions that are fired to make things happen within the application. This is where the phrase 'function-as-a-service', or FaaS (another way of describing serverless), comes from. A recent report projects that the FaaS market will grow at a compound annual rate of 32.7%, reaching 7.72 billion US dollars by 2021.

Is serverless architecture a good choice for app development?

Now that we've established what serverless actually means, we can get down to business: is serverless architecture the right choice for app development? It can cut both ways, so let's look at the positives first and the negatives afterwards.

Using serverless for app development: the positives

There are several reasons why serverless architecture can be good for app development: decreasing costs, easier servicing, scalability, and third-party services. Each is discussed below.

Decreasing costs

The most effective benefit of a serverless architecture in the app development process is that it reduces costs. It's typically less expensive than a 'traditional' server architecture, because with hardware servers you pay for many things that serverless makes unnecessary: regular maintenance, premises, electricity, and staffing. You can therefore save a considerable amount of money and put it towards app quality instead.

Easier servicing

When the owner or app manager doesn't have to manage the server themselves, keeping the service available becomes far less challenging. First, the job is more comfortable because it doesn't require supervision. Second, you don't have to spend time on it, so you can use that time for productive work such as product development. Third, the service delivered by this technology is reliable, so you can use it without much fear.

Scalability

Another interestingly useful advantage of serverless architecture in app development is scalability. So, what is scalability? It is the ability of a system to handle an extra amount of work by adding resources: the capability of an app or product to continue to work properly, without disturbance, as it is resized to meet users' needs. A serverless architecture acts as the resource that is added to the system to handle whatever work has piled up.
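Scalability is easier to picture with a concrete function in front of you. Here is a minimal sketch of the kind of handler a FaaS platform runs; this is a hypothetical example, modeled loosely on the AWS Lambda Node.js signature, and the field names are assumptions rather than anything from a specific provider:

// Hypothetical sketch of a FaaS handler. The platform invokes it once per
// event and scales the number of concurrent invocations automatically.
exports.handler = function (event, context, callback) {
  // 'event' carries the trigger payload, e.g. an HTTP request body.
  var name = (event && event.name) || 'world';

  // Do a small, stateless piece of work and hand back the result.
  callback(null, { message: 'Hello, ' + name + '!' });
};

Because each invocation is stateless and short-lived, the provider can run as many copies in parallel as the incoming workload demands, which is exactly the scalability property described above.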
Third-party services

Another useful feature of serverless architecture is that it lets you use third-party services. Your app can call on any third-party service it requires beyond what you already have, which reduces the effort needed to build the app's backend. A third party may also provide a better service than the one you could build yourself, so serverless ultimately benefits from the reach of the third-party ecosystem.

Serverless for app development: the negatives

Now that we know the advantages of a serverless architecture, it's important to note that it also brings some limitations and disadvantages: time restrictions, vendor lock-in, multi-tenancy, and debugging.

Time restrictions

As mentioned before, serverless architecture works on FaaS rules and has a time limit for running a function; this limit is 300 seconds. When the limit is reached, the function is stopped. Therefore, for more complex functions that require more time to execute, the FaaS approach may not be a good choice. The problem can often be worked around by splitting a task into several simpler functions, if the task allows it; otherwise, time restrictions like these can cause great difficulty.

Vendor lock-in

We have discussed that a serverless architecture lets you use third-party services. This can also go the wrong way and cause vendor lock-in. If, for any reason, you decide to shift to a new service provider, in most cases services will be fulfilled in a different way. That means the productivity gains you expected from serverless will be lost, as you will have to adjust and reconfigure the infrastructure to accept the new service.

Multi-tenancy

Multi-tenancy is an increasing problem in serverless architecture. The data of many tenants is kept quite close together, which can create confusion: some data might be exchanged, distributed, or even lost. In turn, this can cause security and reliability issues. A customer could, for example, suddenly produce an extraordinarily high load that affects other customers' applications.

Debugging is not possible

Debugging is very limited with serverless. Because your uploaded code runs in an environment the provider controls, there is no facility for debugging it interactively. If you want to know what a function does, you run it and wait for the result; if it crashes inside the function, there is little you can do about it directly. There is a way to mitigate this, however: extensive logging. With every step being logged, the chances of errors causing debugging headaches decrease.

Conclusion

Serverless architecture certainly seems impressive in spite of its limitations. There is no doubt that the viability and success of an architecture depend on the business requirements and, of course, on the technology used. In the same way, serverless can shine when used in the appropriate case. I hope this post has helped you understand serverless architecture for mobile apps and see both its bright and dark sides.

Author Bio

Mehul Rajput is CEO and co-founder of Mindinventory, which specializes in Android and iOS app development and provides web and mobile app solutions for businesses from startup to enterprise level.
He is an avid blogger and writes on mobile technologies, mobile apps, app marketing, app development, startups, and business.


The pets and cattle analogy demonstrates how serverless fits into the software infrastructure landscape

Russ McKendrick
20 Feb 2018
8 min read
When you say serverless to someone, the first conclusion they jump to is that you are running your code without any servers. This can be quite a valid conclusion if you are using a public cloud service like AWS, but when it comes to running in your own environment, you can't avoid having to run on a server of some sort. This blog post is an extract from Kubernetes for Serverless Applications by Russ McKendrick.

Before we discuss what we mean by serverless and Functions as a Service, we should discuss how we got here. As people who work with me will no doubt tell you, I like to use the pets versus cattle analogy a lot, as it is quite an easy way to explain the differences between modern cloud infrastructures and a more traditional approach.

The pets, cattle, chickens, insects, and snowflakes analogy

I first came across the pets versus cattle analogy back in 2012, in a slide deck published by Randy Bias. The slide deck was used during a talk Randy Bias gave at the cloudscaling conference on architectures for open and scalable clouds. Towards the end of the talk, he introduced the concept of pets versus cattle, which Randy attributes to Bill Baker, who at the time was an engineer at Microsoft. The slide deck primarily talks about scaling out and not up; let's go into this in a little more detail and discuss some of the additions that have been made since the presentation was first given five years ago.

Pets: the bare metal servers and virtual machines

Pets are typically what we, as system administrators, spend our time looking after. They are traditional bare metal servers or virtual machines:

- We name each server as we would a pet. For example, app-server01.domain.com and database-server01.domain.com.
- When our pets are ill, we take them to the vet. This is much like a system administrator rebooting a server, checking logs, and replacing the faulty components of a server to ensure that it is running healthily.
- We pay close attention to our pets for years, much like a server. We monitor for issues, patch them, back them up, and ensure they are fully documented.

There is nothing much wrong with running pets. However, you will find that the majority of your time is spent caring for them; this may be alright if you have a few dozen servers, but it does start to become unmanageable if you have a few hundred.

Cattle: the sort of instances you run on public clouds

Cattle are more representative of the instance types you should be running in public clouds such as Amazon Web Services (AWS) or Microsoft Azure, where you have auto scaling enabled:

- You have so many cattle in your herd that you don't name them; instead they are given numbers and tagged so you can track them. In your instance cluster, you can also have too many instances to name, so, like cattle, you give them numbers and tag them. For example, an instance could be called ip123067099123.domain.com and tagged as app-server.
- When a member of your herd gets sick, you shoot it, and if your herd requires it, you replace it. In much the same way, if an instance in your cluster starts to have issues, it is automatically terminated and replaced with a replica.
- You do not expect the cattle in your herd to live as long as a pet typically would; likewise, you do not expect your instances to have an uptime measured in years.
- Your herd lives in a field and you watch it from afar, much like you don't monitor individual instances within your cluster; instead, you monitor the overall health of your cluster.
If your cluster requires additional resources, you launch more instances; when you no longer require a resource, the instances are automatically terminated, returning you to your desired state.

Chickens: an analogy for containers

In 2015, Bernard Golden added to the pets versus cattle analogy by introducing chickens to the mix in a blog post titled Cloud Computing: Pets, Cattle and Chickens? Bernard suggested that chickens were a good term for describing containers alongside pets and cattle:

- Chickens are more efficient than cattle; you can fit a lot more of them into the same space your herd would use. In the same way, you can fit a lot more containers into your cluster, as you can launch multiple containers per instance.
- Each chicken requires fewer resources than a member of your herd when it comes to feeding. Likewise, containers are less resource-intensive than instances: they take seconds to launch and can be configured to consume less CPU and RAM.
- Chickens have a much lower life expectancy than members of your herd. While cluster instances can have an uptime of a few hours to a few days, it is more than possible that a container will have a lifespan of minutes.

Insects: an analogy for serverless

Keeping in line with the animal theme, Eric Johnson wrote a blog post for Rackspace which introduced insects. This term was introduced to describe serverless and Functions as a Service. Insects have a much lower life expectancy than chickens; in fact, some insects only have a lifespan of a few hours. This fits in with serverless and Functions as a Service, which have a lifespan of seconds.

Snowflakes

Around the time Randy Bias gave his talk mentioning pets versus cattle, Martin Fowler wrote a blog post titled SnowflakeServer. The post described every system administrator's worst nightmare:

- Every snowflake is unique and impossible to reproduce, just like that one server in the office that was built and not documented by that one guy who left several years ago.
- Snowflakes are delicate. Again, just like that one server: you dread having to log in to it to diagnose a problem, and you would never dream of rebooting it, as it may never come back up.

Bringing the pets, cattle, chickens, insects, and snowflakes analogy together...

When I explain the analogy to people, I usually sum up by saying something like this: organizations who have pets are slowly moving their infrastructure to be more like cattle. Those who are already running their infrastructure as cattle are moving towards chickens to get the most out of their resources. Those running chickens are going to be looking at how much work is involved in moving their applications to run as insects by completely decoupling them into individually executable components.

But the most important takeaway is this: no one wants to, or should be, running snowflakes.

Serverless and insects

As already mentioned, using the word serverless gives the impression that servers will not be needed. Serverless is a term used to describe an execution model. When executing this model you, as the end user, do not need to worry about which server your code is executed on, as all of the decisions on placement, server management, and capacity are abstracted away from you. It does not mean that you literally do not need any servers.
There are some public cloud offerings which abstract so much of the management of servers away from the end user that it is possible to write an application that does not rely on any user-deployed services; the cloud provider manages the compute resources needed to execute your code. Typically these services, which we will look at in the next section, are billed in per-second increments for the resources used to execute your code.

So how does that explanation fit in with the insect analogy? Let's say I have a website that allows users to upload photos. As soon as the photos are uploaded, they are cropped to create several different sizes, which will be used as thumbnails and mobile-optimized versions on the site.

In the pets and cattle world, this would be handled by a server which is powered on 24/7, waiting for users to upload images. This server probably isn't performing just this one function; however, there is a risk that if several users all decide to upload a dozen photos each, the server where this function is being executed will run into load issues.

We could take the chickens approach, with several containers running across several hosts to distribute the load. However, these containers would more than likely be running 24/7 as well, watching for uploads to process. This approach could allow us to horizontally scale the number of containers out to deal with an influx of requests.

Using the insects approach, we would not have any services running at all. Instead, the function would be triggered by the upload process. Once triggered, the function will run, save the processed images, and then terminate. As the developer, you should not have to care how the service was called or where the service was executed, so long as you have your processed images at the end of it.
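To ground the insects approach in code, here is a minimal sketch of such a function. This is a hypothetical example: the handler signature is modeled loosely on AWS Lambda's Node.js convention, and the event fields are illustrative stand-ins rather than a real API.

// Hypothetical sketch: a function that exists only for one piece of work.
// It is triggered per upload, runs, saves its output, and terminates.
exports.handler = function (event, context, callback) {
  // The trigger payload tells us which photo arrived (shape is illustrative).
  var photo = event.uploadedPhoto;

  // Produce the variants the site needs; a stand-in for real image resizing.
  var variants = ['thumbnail', 'mobile'].map(function (size) {
    return { size: size, source: photo };
  });

  // Hand the results back and terminate; no process is left running 24/7.
  callback(null, { processed: variants.length });
};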


Virtual machines vs Containers

Amit Kothari
17 Oct 2017
5 min read
Virtual machines and containers are pretty similar, but they do possess some important differences, and those differences will dictate which one you decide to use. So, when you ask a question like 'virtual machines vs containers', there isn't necessarily going to be an outright winner, but there might be a winner for you in a given scenario. Let's take a look at what a virtual machine is, exactly, what a container is, and how they compare, as well as the key differences between the two.

What is a virtual machine?

Virtual machines are a product of hardware virtualization. They sit on top of physical machines with the hypervisor, or virtual machine manager, in between, acting as a layer of abstraction between the virtual machine and the underlying hardware. A virtualized physical machine can host multiple virtual machines, enabling better hardware utilization. Since the hypervisor abstracts the physical machine's hardware, virtual machines can use a different operating system from the host machine's. The host operating system and each virtual machine operating system run their own kernel, and all communication between the virtual machines and the host machine occurs through the hypervisor, resulting in a high level of isolation. This means that if one virtual machine crashes, it does not affect the other virtual machines running on the same physical machine. However, the hypervisor's abstraction layer also affects performance. This problem can be addressed by using a different virtualization technique.

What is a container?

Containers use lightweight operating-system-level virtualization. Like virtual machines, multiple containers can run on the same host machine. However, containers do not have their own kernel; they share the host machine's kernel, making them much smaller in size than virtual machines. They use process-level isolation, allowing processes inside a container to be isolated from other containers.

The difference between virtual machines and containers

In his post Containers are not VMs, Mike Coleman uses the analogy of houses and apartment buildings to compare virtual machines and containers. Self-contained houses have their own infrastructure, while apartments are built around shared infrastructure. Similarly, virtual machines have their own operating system, with kernel, binaries, libraries, and so on, while containers share the host operating system's kernel with other containers. As a result, containers are much smaller, allowing a physical machine to host more containers than virtual machines.

Since containers use lightweight operating-system-level virtualization instead of a hypervisor, they are less resource-intensive than virtual machines and offer better performance. Compared to virtual machines, containers are faster, quicker to provision, and easier to scale. Since spinning up a new container is quick and easy, when a patch or an update is required it is easier to start a new container and stop the old one than to update a running container. This allows us to build immutable infrastructure, which is reliable, portable, and easy to scale. All of this makes containers a preferred choice for application deployment, especially for teams using microservices or a similar architecture, where an application is composed of multiple small services instead of a monolith. In a microservice architecture, an application is built as a suite of independent, self-contained services.
This allows teams to work independently of each other and deliver features more quickly. However, decomposing an application into multiple parts adds operational complexity and overhead; containers solve this problem. Containers can serve as a building block in the microservice world, where each service can be packaged and deployed as a container. A container has everything required to run a service: the service code, its dependencies, configuration files, libraries, and so on. Packaging a service and all its dependencies as a container makes it easy to distribute and deploy the service, and since the container includes everything required to run it, the service can be deployed reliably in different environments. A service packaged as a container will run the same way locally on a developer's machine, in a test environment, and in production.

However, there are things to consider when using containers. Containers share the kernel and other components of the host operating system. This makes them less isolated than virtual machines, and thus less secure. Since each virtual machine has its own kernel, we can run virtual machines with different operating systems on the same physical machine. However, since containers share the host operating system's kernel, only guest operating systems that can work with the host kernel can be used in a container.

Virtual machines vs containers - in conclusion...

Compared to virtual machines, containers are lightweight, performant, and easy to provision. While containers seem to be the obvious choice to build and deploy applications, virtual machines have their own advantages. Compared to physical machines, virtual machines have better tooling and are easier to automate. Virtual machines and containers can also co-exist: organizations with existing infrastructure built around virtual machines can get the benefits of containers by deploying them on virtual machines.
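To make "everything required to run a service" concrete, here is a minimal sketch of how a small Node.js service might be packaged as a container image. It is a hypothetical example: the base image tag, file names, and port are assumptions for illustration, not something from the original article.

# Hypothetical sketch: packaging a small Node.js service as a container image.
# The base image provides the runtime; the container shares the host kernel.
FROM node:8-alpine

WORKDIR /app

# Declare the service's dependencies and bake them into the image.
COPY package.json .
RUN npm install

# The service code itself travels with the image.
COPY server.js .

# The port the service listens on, and the single process the container runs.
EXPOSE 3000
CMD ["node", "server.js"]

Building this file once produces an image that carries code, dependencies, and configuration together, which is exactly why the same container runs identically on a laptop, in test, and in production.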


How to move from server to serverless in 10 steps

Erik Kappelman
27 Sep 2017
7 min read
If serverless computing sounds a little contrived to you, you're right, it is. Serverless computing isn't really serverless, well, not yet anyway. It would be more accurate to call it serverless development. If you are a backend boffin, or you spend most of your time writing Dockerfiles, you are probably not going to be super into serverless computing. This is because serverless computing allows applications to consist of chunks of code that do things in response to stimulus. What makes this different from other development is that the chunks of code don't need to be woven into a traditional frontend-backend setup. Instead, serverless computing allows code to execute without the need for complicated backend configurations. Additionally, the services that provide serverless computing can easily scale an application as necessary, based on the activity the application is receiving.

How AWS Lambda supports serverless computing

We will discuss Amazon Web Services (AWS) Lambda, Amazon's serverless computing offering. We are going to go over one of Amazon's use cases to better understand the value of serverless computing, and how someone can get started.

1. Have an application, build an application, or have an idea for an application. This could also be step zero, but you can't really have a serverless application without an application. We are going to be looking at a simple abstraction of an app, but if you want to put this into practice, you'll need a project.
2. Create an AWS account, if you don't already have one, and set up the AWS Command Line Interface on your machine. Quick note: I am on OSX and I had a lot of trouble getting the AWS Command Line Interface installed and working. AWS recommends using pip to install, but the bash command never seemed to end up in the right place. Instead I used Homebrew and then it worked fine.
3. Navigate to S3 on AWS and create two buckets for testing purposes. One is going to be used for uploading, and the other is going to receive the uploaded pictures once they have been transformed. The bucket that receives the transformed pictures should have a name of the form "other bucket's name" + "resized". The code we are using requires this format in order to work; if you really don't like that, you can modify the code to use a different format.
4. Navigate to the AWS Lambda Management Console, choose the Create Function option, choose Author from scratch, and click the empty box next to the Lambda symbol in order to create a trigger. Choose S3. Now specify the bucket that the pictures are going to be initially uploaded into. Then, under the event type, choose Object Created (All). Leave the trigger disabled and press the Next button.
5. Give your function a name. For now, we are done with the console.
6. On your local machine, set up a workspace by creating a root directory for the project with a node_modules folder. Then install the async and gm libraries.
7. Create a JavaScript file named index.js and copy and paste the code from the end of this post into the file. It needs to be named index.js for this example to work; there are settings that determine the function entry point, which can be changed to look for a different filename. The code we are using comes from an example on AWS located here. I recommend you check out their documentation.
8. If we look at the code we are pasting into our editor, we can learn a few things about using Lambda. We can see that the aws-sdk is in use, and that we use that dependency to create an S3 object. We get the information about the source bucket from the event object that is passed into the main function; this is why we named our buckets the way we did. We get our uploaded picture using the getObject method of our S3 object, since the S3 file information we want is in the event object. The code grabs that file, puts it into a buffer, uses the gm library to resize the image, and then uses the same S3 object, specifying the destination bucket this time, to upload the file.
9. Now we are ready to ZIP up the root folder and deploy this function to the new Lambda instance we have created. Quick note: while using OSX I had to zip my JS file and node_modules folder directly into a ZIP archive instead of recursively zipping the root folder; for some reason the upload doesn't work unless the zipping is done this way. We are going to upload using the Lambda Management Console (if you're fancy, you can use the AWS Command Line Interface). So, get to the management console, choose Upload a .ZIP File, click the upload button, specify your ZIP file, and then press the Save button.
10. Now we will test our work. Click the Actions dropdown and choose the Configure test event option. Choose the S3 PUT test event and specify the bucket that images will be uploaded to. This creates a test that simulates an upload, and if everything goes according to plan, your function should pass. Profit!

I hope this introduction to AWS Lambda serves as a primer on serverless development in general. The goal here is to get you started. Serverless computing has some real promise. As a primarily front-end developer, I revel in the idea of serverless anything. I find that the absolute worst part of any development project is the backend. That being said, I don't think that sysadmins will be lining up for unemployment checks tomorrow. Once serverless computing catches on, and maybe grows and matures a little bit, we're going to have a real juggernaut on our hands.

The code below is used in this example and comes from AWS:

// dependencies
var async = require('async');
var AWS = require('aws-sdk');
var gm = require('gm').subClass({ imageMagick: true }); // Enable ImageMagick integration.
var util = require('util');

// constants
var MAX_WIDTH = 100;
var MAX_HEIGHT = 100;

// get reference to S3 client
var s3 = new AWS.S3();

exports.handler = function(event, context, callback) {
    // Read options from the event.
    console.log("Reading options from event:\n", util.inspect(event, { depth: 5 }));
    var srcBucket = event.Records[0].s3.bucket.name;
    // Object key may have spaces or unicode non-ASCII characters.
    var srcKey = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, " "));
    var dstBucket = srcBucket + "resized";
    var dstKey = "resized-" + srcKey;

    // Sanity check: validate that source and destination are different buckets.
    if (srcBucket == dstBucket) {
        callback("Source and destination buckets are the same.");
        return;
    }

    // Infer the image type.
    var typeMatch = srcKey.match(/\.([^.]*)$/);
    if (!typeMatch) {
        callback("Could not determine the image type.");
        return;
    }
    var imageType = typeMatch[1];
    if (imageType != "jpg" && imageType != "png") {
        callback(`Unsupported image type: ${imageType}`);
        return;
    }

    // Download the image from S3, transform, and upload to a different S3 bucket.
    async.waterfall([
        function download(next) {
            // Download the image from S3 into a buffer.
            s3.getObject({
                Bucket: srcBucket,
                Key: srcKey
            }, next);
        },
        function transform(response, next) {
            gm(response.Body).size(function(err, size) {
                // Infer the scaling factor to avoid stretching the image unnaturally.
                var scalingFactor = Math.min(
                    MAX_WIDTH / size.width,
                    MAX_HEIGHT / size.height
                );
                var width = scalingFactor * size.width;
                var height = scalingFactor * size.height;

                // Transform the image buffer in memory.
                this.resize(width, height)
                    .toBuffer(imageType, function(err, buffer) {
                        if (err) {
                            next(err);
                        } else {
                            next(null, response.ContentType, buffer);
                        }
                    });
            });
        },
        function upload(contentType, data, next) {
            // Stream the transformed image to a different S3 bucket.
            s3.putObject({
                Bucket: dstBucket,
                Key: dstKey,
                Body: data,
                ContentType: contentType
            }, next);
        }
    ], function (err) {
        if (err) {
            console.error(
                'Unable to resize ' + srcBucket + '/' + srcKey +
                ' and upload to ' + dstBucket + '/' + dstKey +
                ' due to an error: ' + err
            );
        } else {
            console.log(
                'Successfully resized ' + srcBucket + '/' + srcKey +
                ' and uploaded to ' + dstBucket + '/' + dstKey
            );
        }
        callback(null, "message");
    });
};

Erik Kappelman wears many hats including blogger, developer, data consultant, economist, and transportation planner. He lives in Helena, Montana and works for the Department of Transportation as a transportation demand modeler.
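For reference, the S3 PUT test event that the console generates in step 10 has roughly the following shape, trimmed here to just the fields the handler above actually reads; the bucket and object names are placeholders:

{
  "Records": [
    {
      "s3": {
        "bucket": { "name": "my-upload-bucket" },
        "object": { "key": "photo.jpg" }
      }
    }
  ]
}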


What is serverless architecture and why should I be interested?

Ben Neil
01 Jun 2017
6 min read
I've heard the term "serverless" architecture for over a year, and it took a while before I even started seriously looking into the technology. I was of the belief that serverless was going to be another PaaS solution, similar to Cloud Foundry with even fewer levers to pull. However, as I started playing with a few different use cases, I quickly discovered that a serverless approach to a problem could be fast and focused, and could unearth some interesting use cases. So, without any further ado, I want to break down some of the techniques that make architecture "serverless" and provide you with suggestions along the way.

My four tenets of the serverless approach are as follows:

1. Figure out where in your code base this seems a good fit.
2. Find a service that runs FaaS.
3. Look at dollars, not cents.
4. Profit.

The first step, as with any new solution, is to determine where in your code base a scalable solution would make sense. By all means, when it comes to serverless, you shouldn't get too exuberant and refactor your project in its entirety. You want to find a bottleneck that could use the high-scalability options that serverless vendors can grant you. This can be math functions, image manipulation, log analysis, a specific map-reduce, or anything else you find that needs some intensive compute but doesn't require a lot of stateful data. A really great litmus test is to use the performance tooling available for your language: you are looking not for a bottleneck tied to a critical section like database access, but for a spike that keeps occurring in a piece of code that works, but hasn't been fully optimized yet.

Assuming you have found that piece of code (modifying it to follow a request/response pattern) and you want to expose it in a highly scalable way, you can move on to applying that code to your FaaS solution. Integrating that solution should be relatively painless, but it's worth taking a look at some of the current contenders in the FaaS ecosystem, which leads into the second point, "finding a FaaS." This is now easier with vendors such as Hook.io, AWS Lambda, Google Cloud Functions, Microsoft Azure, Hyper.sh Func, and others. Note, one of the bonuses of all the vendors I have included is that you only pay for the compute time of your function, meaning that the cost of running your code scales directly with the requests coming in. Think of it like using Jitsu (http://www.thomas.gazagnaire.org/pub/MLSGSSMCSLCL15.pdf): you can spin up the server, apply the function, get a result, and rinse/repeat, all without having to worry about the underlying infrastructure.

Now, if you are new to FaaS, I would strongly recommend taking a look at Hook.io, because you can get a free developer account and start coding to get an idea of how this pattern works with this technology. Then, after you become more familiar, you can move on to AWS Lambda or Google Cloud Functions for all the different possibilities those larger vendors can provide.
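To make the first two tenets concrete, here is a hypothetical sketch of a bottleneck extracted into a stateless request/response function. The handler signature and field names are assumptions, loosely modeled on the Node.js conventions FaaS vendors use; adapt it to whichever vendor you pick:

// Hypothetical sketch: a CPU-heavy routine pulled out of the main code base
// and exposed as a stateless request/response function on a FaaS platform.
exports.handler = function (event, context, callback) {
  // The request carries just the inputs the hot code path needs.
  var numbers = (event && event.numbers) || [];

  // The intensive, stateless computation; a stand-in for your real bottleneck
  // (math functions, image manipulation, log analysis, map-reduce, and so on).
  var sumOfSquares = numbers.reduce(function (acc, n) {
    return acc + n * n;
  }, 0);

  // Respond and terminate; you pay only for the compute time used.
  callback(null, { result: sumOfSquares });
};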
Another interesting trend that has become popular alongside modern serverless infrastructure is to "use vendors where applicable," which can be restated as focusing only on the services you want to be responsible for. This means taking the least interesting parts of the application and offloading them to third parties. As a developer, you maximize your time by paying just for the services you use rather than hosting large VMs yourself and spending the time required to maintain them effectively. For example, it's relatively painless to spin up an instance on AWS or Rackspace and install a MySQL server on a virtual machine, but over time the personal investment needed to back up, monitor, and continually maintain that instance may be too much of a drain on your time, or take too much attention away from day-to-day responsibilities.

You might say, well, isn't that what Docker is for? But what people often discover with virtualization is that it has its own problem set, which may not be what you are looking for. Given the MySQL example, you can easily bring up a Docker instance, but what about keeping stateful volumes? Which drivers are you going to use for persistent storage? Questions start piling up about short-term gains versus long-term issues, and that's when you can use a service like AWS RDS to get a database up and running for the long term. Set the backup schedule as you like, and you'll never have to worry about any of that maintenance (well, apart from some push-button upgrades once in a blue moon).

As stated earlier, how does a serverless approach differ from having a bunch of containers with these technologies spun up through Docker Compose and hooked up to event-based frameworks similar to the Serverless Framework (https://github.com/serverless/serverless)? Well, you might have something there, and I would encourage anyone reading this article to take a glance. But to keep it brief: depending on your definition of serverless, those investments in time might not be what you're looking for.

Given the flexibility and rising popularity of micro/nanoservices, alongside all the vendors that are at your disposal to take some of the burden off development, serverless architecture has become really interesting. So, take advantage of this massive vendor ecosystem and its FaaS solutions, and focus on developing. Because when all is said and done, services, software, and applications are made of functions that are fun to write, and handing off the thankless task of upgrading a MySQL database might just stop your hair from going white prematurely.

About the author

Ben Neil is a polyglot engineer who has had the privilege of filling a lot of critical roles, whether dealing with front/backend application development, system administration, integrating DevOps methodology, or writing. He has spent 10+ years creating solutions, services, and full lifecycle automation for medium to large companies. He is currently focused on Scala, container, and unikernel technology, following a bleeding-edge open source community that brings benefits to companies striving to be at the forefront of technology. He can usually be found either playing Dwarf Fortress or contributing on GitHub.