Architecting or restructuring an application, and finding the right architecture for the entire solution, usually requires sufficient proof-of-concept work and testing. However, due to shortage of time and budget, and dependencies on other services, solution architects often prepare the easiest workable design and improve it gradually, only to face many bottlenecks and deadlocks later. The main goal of this article is to compare and present some foundational, common architectures based on four main application categories: resilient, high-performance, secure, and cost-effective.
The target of this architecture is to achieve an effective design using fundamental services. It is important to know the best practices when building such architectures; therefore, focusing on the following items is required:
Placing all resources on the same machine creates availability and security risks. If that server goes down, the application goes down with it and can no longer reach the database. If the server is attacked and you have no replica, you are at much greater risk of data loss. Multi-tier architecture solves these problems by splitting data access across more than one server. Distributing resources across different servers also boosts deployment performance. Moreover, having separate layers for different resources adds an extra layer of security by separating data from code. With replication, the database can be copied across more than one server, which prevents data loss in case of a cluster failure.
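As a rough illustration, the sketch below (in Python, with placeholder hostnames and a hypothetical orders table) shows an application tier that keeps no data locally: writes go to a primary database server and reads can be served by a replica, so losing a single machine does not take everything down.

```python
# Hypothetical multi-tier sketch: the application tier talks to a separate
# database tier instead of storing data on the web server itself.
# Hostnames, credentials and the "orders" table are placeholders.
import psycopg2

PRIMARY_DSN = "host=db-primary.internal dbname=app user=app password=secret"
REPLICA_DSN = "host=db-replica.internal dbname=app user=app password=secret"

def save_order(order_id: int, total: float) -> None:
    # Writes always go to the primary instance in the data tier.
    with psycopg2.connect(PRIMARY_DSN) as conn, conn.cursor() as cur:
        cur.execute("INSERT INTO orders (id, total) VALUES (%s, %s)", (order_id, total))

def load_order(order_id: int):
    # Reads can be served by a replica, so a single failed server
    # does not make the data unreachable.
    with psycopg2.connect(REPLICA_DSN) as conn, conn.cursor() as cur:
        cur.execute("SELECT id, total FROM orders WHERE id = %s", (order_id,))
        return cur.fetchone()
```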
High availability means that a system will almost always maintain uptime, though sometimes in a degraded state. "Five nines" refers to 99.999% uptime, which allows the system to be down for only about five minutes and fifteen seconds per year. A system reaches high availability when it is fault tolerant: fault tolerance, high availability's close companion, means the system keeps running and users notice nothing different during a primary-system outage.
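The arithmetic behind these figures is easy to check; the short snippet below simply converts a few availability levels into allowed downtime per year.

```python
# Convert availability percentages into allowed yearly downtime.
MINUTES_PER_YEAR = 365 * 24 * 60

for availability in (0.99, 0.999, 0.9999, 0.99999):
    downtime_min = MINUTES_PER_YEAR * (1 - availability)
    print(f"{availability:.5%} uptime -> about {downtime_min:.2f} minutes of downtime per year")
# 99.999% works out to roughly 5.26 minutes, i.e. about five minutes and fifteen seconds.
```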
Loose coupling isolates the layers and components of your application so that each component interacts with the others independently. This is necessary if you want to enable scalability and keep your system stateless. By using a load balancer or a queuing system, you can distribute the components of your application that perform different tasks.
Resilient storage provides a shared or clustered file system in which every server in the group can access the same storage device over the network, drawing from a common pool of data.
Nowadays every cloud provider offers specific services that make resilient architectures easy to build, but the same is achievable in an on-premises environment with the right tools and services. One of the easiest approaches, for example, is to use queuing and auto-scaling services.
Whether you run in the cloud or on premises, the main target of a resilient solution is an architecture that is as decoupled as possible. If you are already integrated with the cloud and want to roll such a solution out on AWS, for example, the following services are highly recommended (a minimal queue-based sketch follows the list):
EC2 Storage types, Elastic File System (EFS), Amazon Simple Storage Service (S3), Simple Queue Service (SQS), Elastic Load Balancer (ELB), Auto Scaling, etc.
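As a minimal, hedged sketch of such decoupling with SQS through boto3, the snippet below assumes a queue named "orders-queue" already exists; the producer and the auto-scaled worker never talk to each other directly, so either side can scale or fail independently.

```python
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.get_queue_url(QueueName="orders-queue")["QueueUrl"]

# Producer side: the web tier just drops a message on the queue and returns.
sqs.send_message(QueueUrl=queue_url, MessageBody='{"order_id": 42}')

# Consumer side: an auto-scaled worker polls and processes at its own pace.
response = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=10)
for message in response.get("Messages", []):
    print("processing:", message["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
```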
When your application reaches enterprise scale, a high-performance architecture becomes essential. Therefore, certain metrics need to be considered:
Elasticity means matching resource allocation to the actual amount of resources needed at any given point in time, while scalability handles the changing needs of an application within the confines of the infrastructure by statically adding or removing resources to meet demand when needed.
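As one possible illustration of elasticity on AWS, the sketch below attaches a target-tracking scaling policy to a hypothetical Auto Scaling group ("web-asg"), so capacity follows average CPU load rather than being sized statically in advance.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Keep average CPU near 50%; instances are added or removed automatically.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",          # placeholder group name
    PolicyName="keep-cpu-near-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 50.0,
    },
)
```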
Scalable storage enables the architecture to increase data capacity using a single repository rather than multiple standalone servers. True scale-out storage is not only about a system's capacity to store more data; it should also provide the ability to organize and find that data. Ultimately, a scalable storage solution should ensure flexibility, security, functionality, and reliability. As an example, you can use a POSIX (Portable Operating System Interface) compliant, object-based file system to manage such workloads. Note that scaling data-intensive workloads on-premises typically involves purchasing more hardware.
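One hedged sketch of the scale-out idea, using object storage rather than a POSIX file system: objects are written under a predictable key prefix so the growing data set remains easy to find. The bucket name and key layout are purely illustrative.

```python
import boto3

s3 = boto3.client("s3")

# Store an object under a predictable, hierarchical key.
s3.put_object(Bucket="example-data-lake",
              Key="sensors/2024/07/device-17.json",
              Body=b'{"temp": 21.4}')

# Listing by prefix narrows the search to one source and one month.
listing = s3.list_objects_v2(Bucket="example-data-lake", Prefix="sensors/2024/07/")
for obj in listing.get("Contents", []):
    print(obj["Key"], obj["Size"])
```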
High-performance networks (HPNs) play a key role in meeting real-time data-processing requirements. For example, activities such as datacenter replication, datacenter disaster recovery, and high-performance distributed computing require high-volume data transfer and low network latency. HPNs with dynamic connection capabilities make high-performance network resources more accessible and manageable.
It has always been a challenge for organizations with high-volume, complex data-management needs to find the database that fits best. As enterprises increasingly mix and match on-premises and multi-cloud environments on stateless cloud-native platforms, the performance bar required of databases becomes that much higher. In addition to improving data access performance, a good design achieves other benefits, such as maintaining data consistency, accuracy, and reliability, and reducing storage space by eliminating redundancies. Another benefit of good design is that the database becomes easier to use and maintain.
Many tools and services are available to help you build the architecture you want, so adding API gateways, load balancers, and a variety of web-service tools is straightforward. Moreover, in a cloud environment such as AWS, the following services are highly recommended: RDS, DynamoDB, ElastiCache, CloudFront, API Gateway, etc.
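For instance, a common way to use ElastiCache for performance is the cache-aside pattern sketched below; the Redis endpoint, the five-minute TTL, and the fetch_product_from_db helper are assumptions for illustration, not a prescribed setup.

```python
import json
import redis

cache = redis.Redis(host="my-cache.example.internal", port=6379)

def get_product(product_id: str) -> dict:
    cached = cache.get(f"product:{product_id}")
    if cached is not None:
        return json.loads(cached)                    # cache hit: no database round trip
    product = fetch_product_from_db(product_id)      # hypothetical slow path to the database
    cache.setex(f"product:{product_id}", 300, json.dumps(product))  # keep it for 5 minutes
    return product
```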
Security must be built into the architecture itself, which means a secure architecture and design must be considered from the very beginning of your planning.
To secure resource access we need to consider many angles, but the focus should mainly be on basic parameters such as the following:
This ensures that all business needs and performance goals of the application are met without any security incident once it is deployed to the production environment. In practice, by considering the following items you need to make sure you are using a multi-tier (commonly three-tier) application architecture to handle the security metrics:
Data security is the process of protecting corporate data and preventing data loss through unauthorized access. This includes protecting your data from attacks that can encrypt or destroy data, such as ransomware, as well as attacks that can modify or corrupt your data. Data security also ensures data is available to anyone in the organization who has access to it. It includes data encryption, hashing, tokenization, and key management practices that protect data across all applications and platforms. In addition, it’s a concept that encompasses every aspect of information security from the physical security of hardware and storage devices to administrative and access controls, as well as the logical security of software applications.
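As a small, self-contained example of one of the practices mentioned above, the sketch below hashes a secret with a per-record salt before it is stored; the algorithm and iteration count are illustrative choices, not a mandated configuration.

```python
import hashlib
import hmac
import os

def hash_secret(secret: str) -> tuple[bytes, bytes]:
    # A fresh random salt per record prevents reuse of precomputed hashes.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", secret.encode(), salt, 600_000)
    return salt, digest

def verify_secret(secret: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", secret.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)   # constant-time comparison
```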
Cloud environments provide many tools to help you cover all of these security metrics in one place. In AWS, for example, you can use Identity and Access Management (IAM), Key Management Service (KMS), customer managed keys (CMKs), CloudHSM, etc.
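For key management specifically, a minimal sketch with KMS might look like the following; the key alias is an assumption, and for larger payloads you would normally encrypt a generated data key rather than the data itself.

```python
import boto3

kms = boto3.client("kms")

# Encrypt a small secret under a customer managed key (placeholder alias).
ciphertext = kms.encrypt(KeyId="alias/app-data-key",
                         Plaintext=b"customer card token")["CiphertextBlob"]

# Decryption succeeds only for principals whose IAM policy grants kms:Decrypt.
plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]
```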
Of course you may have wider options in the cloud, but from a business point of view it is very important to have an architecture that is as cost-effective as possible for your application and its usage. In other words, cost is one of the primary and most critical factors, and it can stop your architecture from ever being developed.
With data volumes growing exponentially, many companies are looking to save money on their storage costs. You should know how to take cost into consideration when building your architectures.
You might store your data in a relational database to ease development and management. When you launch the application, the database is manageable at first, but it then grows by hundreds of gigabytes per week. Data storage and retrieval alone consume 20 percent of the IOPS and CPU of your relational database instance. In addition, applications store XML, JSON, and binary documents in database tables alongside transactional data, and historical data keeps growing every month. Traditional on-premises database licensing and infrastructure costs rise, and scaling the database becomes a major challenge. The data store your application should use depends on its access pattern, on the scale at which you anticipate the data to grow, and on how readily you need access to it. Choosing a proper database is therefore very important, not only to save storage but also to reduce cost.
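One hedged example of such cost-aware storage: moving documents and historical records out of the relational database into S3 and letting a lifecycle rule transition older objects to a colder storage class. The bucket name, prefix, and 90-day threshold below are assumptions.

```python
import boto3

s3 = boto3.client("s3")

# Objects under documents/ older than 90 days are moved to Glacier,
# keeping the relational database small and the storage bill lower.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-archive",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-old-documents",
            "Status": "Enabled",
            "Filter": {"Prefix": "documents/"},
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
        }]
    },
)
```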
Most of the unplanned and hidden costs of an architecture, especially in a cloud environment, come from network loads; therefore, a proper network architecture closes a big gap in overall infrastructure costs.
A cost optimization assessment provides a snapshot of the existing network infrastructure, its strengths, and areas for improvement, which helps identify planning opportunities and typically uncovers immediate cost savings of 5% to 30%. On average, applying best practices can generate additional productivity and efficiency savings of 5% to 20% across the entire network infrastructure environment.
In conclusion, selecting and then designing a proper architecture has a direct impact on cost, performance, security, and many other metrics. Picking a solid foundation and building on top of it helps keep your platform consistent and highly available.