Architectural considerations for high traffic web portals

 

 

Almost everyone today is aware of internet-based companies such as Yahoo!, MSN and Google. These sites concurrently handle several million visitors from across the globe every hour. Have you ever wondered what goes into designing such high-volume websites? This blog discusses the factors that need to be kept in mind while designing such portals.

There are several aspects to architecting high-traffic web portals, which are expected to serve high concurrency with high availability and without degrading performance. Apart from the architecture, other SDLC phases such as design, development and deployment also need special consideration. Since architecting the system is the very first step towards building the portal, this post highlights some of the important architectural considerations.

To withstand heavy traffic, the system should primarily be scalable, be highly available and be able to intelligently delegate/distribute the traffic to improve overall performance. Each of these aspects is discussed in turn below.

Scalability

Scalability is about concurrency and expandability. In the current context, it relates mostly to the servers that serve the application: the higher the capacity of the servers, the more traffic they can serve. There are two types of scaling, each with its own pros and cons, and it is a judgement call which one (or even both, in combination) to go for, depending on the expected traffic.

Vertical Scaling vs. Horizontal Scaling

Vertical Scaling: Also known as scaling up. This means adding more hardware resources, in terms of number of processors, memory etc., to the existing server to cope with increasing traffic. The ease of implementation of this method comes with some disadvantages, such as:

  • Continuously upgrading the server is expensive.
  • There is always a limit to how far a given server can be upgraded.
  • If the server crashes, the application is not available.

Horizontal Scaling: Also known as scaling out. In this approach, instead of adding hardware resources to the existing server, extra server machines (possibly of comparatively lower capacity) are added to the pool. All the servers serve the same application. This is a cheaper approach since individual servers need not have very high-end configurations. Additionally, even if one server crashes, the others in the cluster still continue to serve the application. The main drawback is that it requires more administrative effort in terms of configuring and monitoring the cluster.

High Availability

 

Backup Server

In this configuration, two servers are deployed for the same application. The primary server serves the application and the second server acts as a backup for the primary. If the primary server goes down for some reason, the backup server takes over the user requests. Two configurations are possible:

Active-Standby: The standby server remains passive while the primary server is active. If the primary server goes down, the current user sessions are lost when the backup server takes over.

Active-Active: In this case, both servers are active and session state is replicated between them, so the user sessions are maintained and continue to be served by the surviving server when the primary goes down.

Clustering

For very high traffic, the clustered approach (horizontal scaling), which ensures high availability of the application, is effective. In a clustered environment, the user gets a seamless experience: the environment can be configured to maintain users' web sessions even if one of the servers goes down. Most of today's application servers provide clustering as an inbuilt feature. With a proper load balancing mechanism in place, one can even have servers of different capacities in the same cluster.

 


Performance

Performance refers to how efficiently a site responds to browser requests against predefined benchmarks. Application performance can be designed, tuned and measured. It is affected by many factors, including application design and construction, database connectivity, network capacity and bandwidth, back-office services (such as mail, proxy and security services), and hardware server resources. Within the scope of this post, below are some of the performance considerations to keep in mind while architecting the system.

Load Balancers

In a clustered environment, it is possible that not all the servers have the same capacity in terms of CPU, RAM, etc. Software load balancers are available that can enforce a policy while distributing the load across the servers. The simplest policy is of the “round robin” type, where requests are passed sequentially to all servers, thus utilising cycles on each server in the cluster. Some of these tools also allow configuring rules for individual servers on the basis of their CPU capacity, RAM or current load. For example, low-capacity servers would serve comparatively fewer requests so that the performance benchmarks are maintained.
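To make this concrete, below is a minimal sketch of a weighted round-robin policy in Python. The server names and weights are invented for illustration; an actual deployment would rely on a dedicated load balancing product rather than hand-rolled code.

    import itertools

    # Hypothetical cluster: each server gets a weight proportional to its capacity.
    # A weight-3 server receives three requests for every one sent to a weight-1
    # server, so lower-capacity machines serve fewer requests.
    SERVERS = {
        "app-server-1": 3,  # high-capacity machine
        "app-server-2": 2,
        "app-server-3": 1,  # low-capacity machine
    }

    def weighted_round_robin(servers):
        """Yield server names in a repeating, weight-proportional order."""
        expanded = [name for name, weight in servers.items() for _ in range(weight)]
        return itertools.cycle(expanded)

    if __name__ == "__main__":
        next_server = weighted_round_robin(SERVERS)
        for request_id in range(6):
            print("request", request_id, "->", next(next_server))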

Delegating the Traffic

While loading any web page, a browser sends several HTTP requests to the server to download associated content such as images, CSS and JavaScript files, video files etc., which are required to render the page. It is possible to distribute these implicit requests across different servers, leaving the main application server free to serve the dynamic content of the page. Several techniques can be adopted to achieve this, as discussed next.

  • Proxy Web Server

This is a commonly used technique where a web server acts as a proxy in front of the application server. All the static content (such as images, CSS, JavaScript and video files) used by the site is deployed on and served by the web server. Only the relevant requests are forwarded to the application server, which reduces the direct load on it. These web servers can themselves form a cluster in front of the application server cluster.
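As a rough illustration of the idea, the following Python sketch serves files from a local "static" directory itself and forwards every other request to a hypothetical application server on localhost:8080. In practice this role is played by a full-fledged web server product; the directory, host and ports here are assumptions.

    import http.client
    import http.server
    from pathlib import Path

    STATIC_DIR = Path("static")        # images, CSS, JavaScript, video files
    APP_SERVER = ("localhost", 8080)   # hypothetical application server

    class ProxyHandler(http.server.BaseHTTPRequestHandler):
        def do_GET(self):
            local = STATIC_DIR / self.path.lstrip("/")
            if local.is_file():
                # Static content is served directly by this front-end web
                # server, so the application server never sees these requests.
                self.send_response(200)
                self.end_headers()
                self.wfile.write(local.read_bytes())
            else:
                # Only the dynamic requests are forwarded to the application server.
                conn = http.client.HTTPConnection(*APP_SERVER)
                conn.request("GET", self.path)
                upstream = conn.getresponse()
                self.send_response(upstream.status)
                self.end_headers()
                self.wfile.write(upstream.read())
                conn.close()

    if __name__ == "__main__":
        http.server.HTTPServer(("", 8000), ProxyHandler).serve_forever()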

  • Use of CDN

A content delivery network or content distribution network (CDN) is a system of computers placed at various geographical locations so as to maximise bandwidth for clients accessing the data from anywhere in the network. All the servers in the network deploy and serve the same content. A client accesses the copy of the data nearest to it, as opposed to all clients accessing the same central server, thereby avoiding a bottleneck near that server. These systems implement routing algorithms such that the nearest server serves the request for the fastest delivery.

Various vendors in the market provide this service with high quality, low cost and low network load. Because the content is replicated across many locations, a CDN can remain available even during large power, network or hardware outages.
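As a highly simplified illustration of the "nearest server" idea, the Python sketch below picks the edge server with the lowest measured round-trip time. Real CDNs usually achieve this through DNS resolution and anycast routing rather than client-side probing, and the host names here are invented.

    import socket
    import time

    # Hypothetical edge servers of a CDN, one per region.
    EDGE_SERVERS = ["eu.cdn.example.com", "us.cdn.example.com", "asia.cdn.example.com"]

    def round_trip_time(host, port=80, timeout=2.0):
        """Measure how long opening a TCP connection to the host takes, in seconds."""
        start = time.monotonic()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return time.monotonic() - start
        except OSError:
            return float("inf")  # unreachable edge servers are never chosen

    def nearest_edge(servers):
        """Return the edge server that responds fastest, i.e. the 'nearest' one."""
        return min(servers, key=round_trip_time)

    if __name__ == "__main__":
        print("fetch content from:", nearest_edge(EDGE_SERVERS))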

[Figure: Content delivery network]

  • Third Party Storage Services

This approach refers to using third-party services to store data on their servers. Such services help reduce the initial investment in infrastructure, since storage space can be bought on demand. Generally, these services are used to store content uploaded by users.

There are services such as Amazon S3 which provide online storage through a simple web services interface at a very nominal cost to store and retrieve the contents.

Although this approach is generally useful in reducing hardware cost, it can also help improve performance in this context. Since the content is stored on third-party servers and is also available via a URI, the overall load on the main servers is reduced to some extent.
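For instance, with Amazon S3 the store-and-serve flow can look roughly like the sketch below, written against the boto3 library. The bucket name, file name and key are placeholders, and AWS credentials are assumed to be configured in the environment.

    import boto3

    # Hypothetical bucket for user-uploaded content.
    BUCKET = "my-portal-user-uploads"

    s3 = boto3.client("s3")

    # Store a user-uploaded file on S3 instead of on the portal's own servers.
    s3.upload_file("photo.jpg", BUCKET, "uploads/photo.jpg")

    # Hand out a time-limited URL so that browsers fetch the file directly from
    # S3, keeping that traffic off the main application servers.
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": BUCKET, "Key": "uploads/photo.jpg"},
        ExpiresIn=3600,  # valid for one hour
    )
    print(url)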

Conclusion

In summary, this blog has discussed some of the important architectural considerations for designing high-traffic web sites and portals. There are several other factors at different phases of design and development that also need to be considered to achieve good concurrency and performance.
