Understanding the new system architecture
As you will notice, the Exchange 2013 architecture has changed quite drastically compared to earlier versions. Before diving into the specifics, let's take a quick look at where the architecture came from.
The old architecture
In Exchange 2007 and 2010 the system architecture was based on five server roles:
Mailbox server
Client Access Server
Hub Transport server
Edge Transport
Unified Messaging
One of the main reasons Microsoft introduced different server roles at the time was hardware considerations. The hardware available back then wasn't always able to handle the load Exchange generated. By splitting the Exchange workloads across several roles, it became easier to separate those workloads across different servers. This allowed for better usage of system resources and provided a workaround for the hardware limitations.
As time passed, the requirement to split roles across multiple servers became less relevant, as modern hardware is now powerful enough to deal with Exchange's requirements. This is also why, as of Exchange 2010 SP1, Microsoft started recommending multi-role server deployments again. These multi-role servers are Exchange servers on which the three default roles are installed together: Mailbox server, Client Access Server, and Hub Transport. This recommendation carries forward to Exchange 2013, but as you will find out, it's not entirely the same.
The new architecture
Instead of five server roles, only two remain: the Client Access Server role and the Mailbox Server role. Each of these server roles has inherited some or all of the features present in the roles from before.
The Client Access Server role is still the main entry point for client connections into your Exchange organization. However, rather than being an endpoint that is also responsible for rendering user data, as in Exchange 2010, it has evolved into a sort of reverse proxy. In fact, that describes exactly what it does.
The Client Access Server accepts and authenticates new connections and forwards requests on behalf of the user to the appropriate Mailbox server, which then fetches and returns the data. In the case of Outlook Web App, it's now the Mailbox server that is responsible for rendering the data. Additionally, the Client Access Server has gained some new functionality as well. It now hosts the new Front End Transport service components, which act as a proxy for incoming SMTP traffic. Just as with other client connections, the Front End Transport service proxies incoming SMTP connections to an underlying Mailbox server.
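The proxy pattern described above can be sketched as follows. This is a simplified, illustrative model only, not any actual Exchange API: the directory, function names, and server names are all hypothetical stand-ins for the real authentication and proxying components.

```python
from dataclasses import dataclass


@dataclass
class Request:
    user: str
    payload: str


# Hypothetical directory mapping each user to the Mailbox server
# hosting the active copy of their mailbox.
MAILBOX_DIRECTORY = {"alice": "MBX01", "bob": "MBX02"}


def authenticate(request: Request) -> bool:
    # Stand-in for real authentication (Kerberos, NTLM, forms-based, ...).
    return request.user in MAILBOX_DIRECTORY


def forward_to_mailbox(server: str, request: Request) -> str:
    # Stand-in for the proxied call; the Mailbox server does the data
    # access and rendering, and the response is relayed back unchanged.
    return f"{server} rendered response for {request.user}"


def client_access_proxy(request: Request) -> str:
    """Accept, authenticate, then forward to the right Mailbox server."""
    if not authenticate(request):
        return "401 Unauthorized"
    server = MAILBOX_DIRECTORY[request.user]
    return forward_to_mailbox(server, request)
```

The key point the sketch captures is that the Client Access layer holds no user data and does no rendering; it only decides where to send the request.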
The Exchange 2013 Mailbox server has had a significant overhaul as well. In fact, the Mailbox server is what used to be an Exchange 2010 multi-role server combined with the Unified Messaging server role. It has inherited many components of each of these roles, with the exception of some components that moved into the new Client Access Server role.
The following diagram depicts the new architecture from a high-level perspective:
At the time of writing, there's no information available as to whether Microsoft will create an Exchange 2013 Edge Transport server role. For now, you'll have to use the Exchange 2010 Edge Transport which is fully supported.
For more information on each of the server roles, have a look at Chapter 3, Configuring the Client Access Server Role and Chapter 4, Configuring and Managing the Mailbox Server Role.
Database Availability Groups
Introduced in Exchange Server 2010, the concept of the Database Availability Group (DAG) hasn't really changed. The idea is still to create database copies on other servers, which are then continuously updated through replication. The improvements in Exchange 2013 come mainly from what has changed under the hood. Given that Exchange 2013 can be installed on top of Windows Server 2012, the DAG can take advantage of new clustering features such as dynamic quorum, which ultimately leads to a higher availability rate.
Dynamic quorum allows the quorum settings of your cluster to be adjusted dynamically as members of the cluster fail. By doing so, a cluster can remain active until only a single node is left standing, whereas previously the entire cluster would go down as soon as quorum was lost.
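The difference between static and dynamic quorum can be illustrated with a toy model. This is a deliberate simplification, not the actual Windows Server 2012 clustering algorithm: each "wave" is a set of nodes failing at once, and between waves the dynamic variant recalculates the vote total from the survivors.

```python
def cluster_survives(total_nodes: int, failure_waves: list, dynamic: bool) -> bool:
    """Return True if the cluster keeps quorum through every failure wave.

    Toy model: a wave of failures is survived only if the remaining
    nodes still hold a majority of the *current* vote total.
    """
    votes = total_nodes  # vote total used for the majority check
    alive = total_nodes  # nodes still running
    for wave in failure_waves:
        alive -= wave
        if alive <= votes // 2:  # majority of the current vote total lost
            return False
        if dynamic:
            # Dynamic quorum: recalculate votes from the survivors and
            # keep the total odd to avoid even-split ties.
            votes = alive if alive % 2 == 1 else alive - 1
    return True
```

With a static five-node cluster, the third sequential failure loses quorum and takes the whole cluster down; with dynamic quorum, the shrinking vote total lets the cluster run down to a single node. Note that the model also shows the limits of the feature: losing a majority of nodes in a single wave still brings the cluster down, since the votes are only adjusted between failures.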
There's more…
To multi-role or not to multi-role? A seemingly philosophical question, but the answer is less exciting than you might expect. The fact is that multi-role deployments are still the general recommendation, just as they have been from Exchange 2010 SP1 onwards. In my opinion, you should only move away from deploying multi-role Exchange servers if there's a good technical or business reason to do so.
Typically, in large environments you might see a series of dedicated Client Access Servers, as splitting them off from the Mailbox server role pays off in terms of the number of servers required.