Lesson 2 – Servers, Storage and Backups

In computer architecture, the server is an essential component of the client-server model: a server provides a service to a client. Typically, one computer, called the server, is reserved entirely for the execution of a given task. Additional computers, the clients, connect to this server, which awaits their requests for service. Much like a waiter is tasked with taking your order to the kitchen and returning with food, the server is tasked with providing a service to a client. This was highlighted previously when web servers and database servers were discussed.
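To make the request-response pattern concrete, here is a minimal sketch of a server and client in Python. The address, port, and message are illustrative choices for this example, not part of the lesson:

import socket

HOST, PORT = "127.0.0.1", 9090  # illustrative address and port

def run_server():
    """The 'waiter': waits for a request and returns a result."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen()                  # reserve this machine to await requests
        conn, _addr = srv.accept()    # a client has connected
        with conn:
            request = conn.recv(1024)              # take the "order"
            conn.sendall(b"served: " + request)    # return the "dish"

def run_client():
    """A client: connects to the server and requests the service."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(b"one request, please")
        print(cli.recv(1024).decode())  # prints: served: one request, please

Run run_server() in one process and run_client() in another; a real server would loop forever so it can serve many clients, but the sketch handles a single request to keep the pattern visible.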

Generally, the service a server provides determines its name: a mail server routes email, a print server awaits requests to produce printouts, and a file server provides a common location to store files and folders.

A diagram depicting communication between a print server, file server, and a client.

Recall from the lesson on computer architecture that a computer's 'brain' is its CPU. A computing server is a computer connected to a network that performs CPU-intensive tasks on request and returns the results. A benefit of a computing server is that your own machine will not become slow or unresponsive while the computations run, because the work is offloaded to the server.

An application server hosts applications that clients, such as web browsers, access over the network. For example, these apps can run a piece of JavaScript code or perform some other valuable function for that network.

Modern businesses often keep their records in a digital format. Much like the application server, a database server provides the service of returning stored information. Businesses should therefore consider a dedicated database solution when there is a large amount of data to keep; if the storage requirements are small, a Microsoft Excel sheet or dedicated accounting software suffices.
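As a small illustration of the service a database server provides, the sketch below uses Python's built-in sqlite3 module; the table and records are invented for the example:

import sqlite3

# An in-memory database standing in for a dedicated database server.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (item TEXT, amount REAL)")
con.executemany("INSERT INTO sales VALUES (?, ?)",
                [("vanilla", 3.50), ("chocolate", 4.00)])
con.commit()

# A "client" sends a query; the database returns the stored information.
for row in con.execute("SELECT item, amount FROM sales WHERE amount > 3.75"):
    print(row)  # ('chocolate', 4.0)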

RAM, as previously mentioned, is volatile, so if the power is cut, the information it holds is lost. It is therefore crucial to save your information from main memory to secondary storage on a regular basis, and it is worth keeping a backup of that information. There are four considerations when making a backup.

Let’s start with durability.

Any storage decision should meet the durability requirement. Data is durable when the information is saved from RAM into a secondary storage location, where it survives a loss of power.
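As a sketch of what durability means in code, the snippet below moves a record from volatile memory onto disk and forces the operating system to flush it; the file name and record are illustrative:

import os

record = "order #42: two scoops of vanilla"  # data currently held in volatile RAM

with open("orders.log", "a") as f:
    f.write(record + "\n")
    f.flush()              # push Python's buffer to the operating system
    os.fsync(f.fileno())   # ask the OS to commit it to the physical disk

# Once fsync returns, the record is durable: a power cut no longer loses it.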

The next consideration of a backup is scalability.

Purchasing the hardware to act as a database server means you must have money available to acquire the machines. This requires careful planning because you need to know how much storage is needed. Buying too much is wasteful; if it is not utilized, the money is better invested elsewhere. Alternatively, not buying enough hard drives might require acquiring and configuring more later, which potentially presents the additional expense of hiring a professional to perform the task.

Using the cloud to store the information is preferable in this situation because you only pay for the storage you use. If a business wants to expand its cloud storage capacity, it simply pays for more space; equally, if there is an excess of space, it can let go of the extra space at no additional cost. Shedding capacity in this way is rarely possible once the hardware has been purchased.

Another key consideration when planning a backup is availability.

Cloud storage is designed to be highly available. Large online storage companies keep duplicates of your data in different geographical locations, so if an earthquake impacts the storage centers in one region, such as America, your data is backed up and available in another region, such as Europe. This, however, depends on your premises having an internet connection.

If a business decides to keep a backup using cloud storage, the internet connection must be reliable enough to access it. This is not a concern for direct-attached storage (DAS) because the database server does not depend on being online; however, it does require being physically connected to the server to access the information.

The last consideration is security.

Data stored in the cloud is encrypted, and by using one of the big storage companies, you benefit from the protections they guarantee for your data. Note, however, that there are several ways this security can be overcome. Keeping information online does not guarantee safety: it can be lost or corrupted through viruses, lost passwords, or an issue with your online storage provider. Alternatively, DAS can be used to retain the data on-premises, but if the premises are physically compromised, the data is vulnerable.
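One common safeguard is to encrypt data before it ever leaves the premises, so the provider only stores ciphertext. A minimal sketch using the third-party cryptography package (an assumption for this example; install it with pip install cryptography):

from cryptography.fernet import Fernet

# Encrypt a backup locally before uploading, so the cloud provider
# never sees the plaintext.
key = Fernet.generate_key()        # keep this key safe and OFF the cloud
cipher = Fernet(key)

backup = b"customer records for Sam's Scoops"   # invented example data
ciphertext = cipher.encrypt(backup)             # this is what gets uploaded

# Recovery: only someone holding the key can read the backup.
assert cipher.decrypt(ciphertext) == backup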

Recent computing developments have led to a gradual shift from traditional on-premises computing to cloud computing. This is largely because cloud computing offers greater flexibility and cost savings over on-premises models. However, this does not automatically make it the best solution for everyone, as the traditional on-premises approach retains advantages, such as more control over one's data. In this reading, these two approaches are compared and contrasted to help you better understand the optimal business uses for each one.

On-premises computing refers to the traditional approach of hosting everything on-site: the hardware, operating systems, applications, all associated infrastructure, and the data required for the business need. Typically, this requires an in-house IT department to oversee the network configuration and troubleshoot any hardware or software issues.

Cloud computing refers to the newer practice of hosting the data, software platforms, applications, operating systems, and all associated infrastructure online. An enterprise would pay for what they require in the same way a contractor would pay for the services of each tradesman they employed in a house build. 

Both of these approaches aim to reduce cost while providing the highest level of service in executing the business need. Cloud computing provides effectively unlimited storage, on-demand services, regular upgrades, and dedicated companies providing various services. The on-premises approach has some limitations regarding storage and the applications it can host. Still, with sufficient planning and implementation, a well-run traditional business can provide as much storage as is required and every application necessary for executing the business needs. Having a traditional model does not mean that cloud services cannot be temporarily utilized in place of acquiring additional hardware, should the need arise.

Both business models work under the same principles. An on-premises set-up will have firewalls, authentication, and all security necessary to protect the client's information, with different machines fulfilling the roles of the client-server model: a file server, a database server and, if required, an application server. An online model provides the same services, except that the machines providing the storage, computation, and services are dispersed worldwide and organized on a much larger scale.

A notable difference between both approaches lies in the upfront costs. Purchasing hardware that is sufficiently powerful to cater to all business needs can be a steep initial cost. There are additional licensing, maintenance, and power needs to run the service. In contrast, with cloud services, one pays by the units of use. 

You can think of it like paying for electricity: rather than paying a large initial sum for a predetermined amount (which you may not even use in full!), you only pay for what you actually use. In addition, some cloud plans include the licensing required to access certain software. As with paying for electricity, you depend on others to maintain and host your information, and a significant drawback is that you lose access to it if the provider has technical or internet connection issues.
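A small worked comparison makes the cost model concrete. All of the figures below are invented purely to illustrate the "pay for what you use" point; real prices vary by provider and hardware:

# On-premises: a large initial sum plus running costs, sized for peak demand.
months = 36
upfront_hardware = 12_000          # servers, drives, installation (invented)
on_prem_running = 150              # power, licensing, maintenance per month
on_prem_total = upfront_hardware + on_prem_running * months

# Cloud: no upfront cost; pay monthly for the storage actually used.
storage_gb = 2_000
rate_per_gb = 0.03                 # per GB per month (invented rate)
cloud_total = storage_gb * rate_per_gb * months

print(f"on-premises over {months} months: {on_prem_total}")  # 17400
print(f"cloud over {months} months: {cloud_total}")          # 2160.0

The numbers are arbitrary, but the shape of the comparison is the point: the on-premises total is dominated by the upfront purchase, while the cloud total tracks actual usage.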

Another difference between the two models is the distance traveled by the data. On-premises deployment means data can be housed, processed, and deployed without going off-site. The nature of cloud computing is that data and applications are housed at different locations, which requires sending information over the network. This reduces the safety of the data, as it opens more possible points of attack for a potential hacker. In addition, some geographic locations have different laws regarding data handling and might require additional processing steps.

Scaling is a major issue with the traditional on-premises model. Scaling relates to growing and shrinking to accommodate demand, which becomes tricky when the business model requires purchasing all the hardware used: responding to greater or lesser needs is slow. Conversely, cloud computing is highly scalable. The pay-as-you-go model means you can grow the processing power required to conduct business during peak times and relinquish it during quieter hours. Thus, the cloud computing approach offers greater flexibility and scalability.

A table listing the pros and cons of on-premises and cloud computing.

In this reading, you’ve compared on-premises and cloud computing and become familiar with their similarities and differences. You’ve learned that a cloud approach outsources storage and processing tasks to internet-based companies, while traditional approaches rely more on in-house solutions.

This means there are differences in cost, flexibility, control over resources, and vulnerability to cyber threats. However, ultimately one cannot say that one approach is better than the other, as there are aspects of both approaches that are best suited to different business needs. 

Previously, you found out about the fundamental security points in a computing environment that need to be considered when developing a security strategy. Specifically, a good policy should be able to stop unauthorized access, limit mobility within a system, and minimize any damage resulting from a breach. In this reading, you will learn more about the measures taken to enforce these points, with a focus on the following:

  1. Preventing an attacker from gaining entry to a system. 
  2. Segregating a system so that the damage to a system once accessed is limited.
  3. Best practices for storing copies of your system.

The policies and procedures you implement are your strategies for offsetting malicious manipulation by a would-be hacker. Not having a plan is akin to planning to fail. Every organization needs to know how to prevent malicious tampering, how to react at every stage of a breach and, once a breach has occurred, how to implement recovery as quickly as possible.

Currently, Sam only uses personal devices to access emails related to Sam’s Scoops. She has taken some basic security measures and knows about best practices for staying safe. However, Sam is not sure if this is enough for a more complex business setup. Let’s find out what Sam should know in order to develop an effective approach for protecting customer data.

Gateway security is the top recommendation for cybersecurity. The reason is simple: if you can prevent any external unauthorized entity from accessing your system, you can ensure that your assets are protected. As with personal computers, the device for accomplishing this is known as a firewall. A firewall sits between a trusted network and an untrusted one, filtering the traffic that passes between them.

With on-premises access, implementing a firewall is relatively straightforward: all traffic coming from the internet is outside traffic and is treated with severe restrictions, while all inside traffic is authorized and subject to more relaxed constraints. This is somewhat more complicated for cloud-based businesses, as the devices accessing the cloud resources may be dispersed, making it more difficult to draw a clear distinction between the inside and the outside of a network.

A diagram depicting on-premises and cloud firewalls.
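A toy sketch of the filtering idea, using Python's standard ipaddress module; the trusted subnet and addresses are illustrative:

from ipaddress import ip_address, ip_network

# A toy packet filter: traffic from the trusted (inside) network is allowed,
# everything else is treated as untrusted and blocked by default.
TRUSTED = ip_network("192.168.1.0/24")  # illustrative inside subnet

def allow(source_ip: str) -> bool:
    return ip_address(source_ip) in TRUSTED

print(allow("192.168.1.42"))   # True  - inside traffic, relaxed constraints
print(allow("203.0.113.7"))    # False - outside traffic, severe restrictions

A real firewall inspects far more than the source address (ports, protocols, connection state), but the inside/outside distinction it enforces is the same.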

If a threat is able to breach a gateway, there is another safeguard that can be implemented:

Access segregation is an effective security measure that can be applied to both traditional and cloud-based businesses. There are several ways this can be achieved, and later you'll learn about concepts such as identity management and role-based access. For now, you only need to know the shared idea: access to one area only gives you a pass to some parts of a business, not all of it. These cloud-based solutions are implemented to deal with the vulnerabilities created by people accessing company resources using different types of devices in different and changing locations.

Zero Standing Access is the overarching concept that access to the production environment must be kept to a minimum and must not persist over time. This means you must validate that you are authorized whenever you wish to access production-related areas, and even then, your access will only allow you to make changes within the area you have been authorized to access. Two critical policies that have grown from this are:

1. Just-In-Time (JIT)

2. Just-Enough-Access (JEA)

JIT means that, having accessed a given area, you will only retain your access for a limited period of time before you are automatically ejected or asked to re-enter an authorization code. JEA relates to the limitations on the changes you can make while there. A security specialist must configure very carefully which areas should be accessed by which individuals, as well as which privileges are required when a specialized task needs to be performed. Giving the wrong individual inappropriate access can be the difference between a minor and a severe security breach.

Consider this situation: Sam hires someone to clean the windows on the shop front. Rather than giving this person the keys to the safe, Sam makes the windows, and only the access required to reach them, available to the cleaner for the duration of the cleaning. If this includes keys to the shop, those keys are returned at the end of the job, and they will not include a key that could open a safe or cashbox.
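A toy model of both policies, using Sam's window cleaner as the example; the users, areas, actions, and durations are all invented for illustration:

import time

GRANTS = {}  # user -> (area, allowed_actions, expiry_time)

def grant_access(user, area, actions, minutes):
    """Just-In-Time: access is issued on request and expires automatically."""
    GRANTS[user] = (area, set(actions), time.time() + minutes * 60)

def may(user, area, action):
    """Just-Enough-Access: only the named actions in the named area pass."""
    if user not in GRANTS:
        return False
    granted_area, actions, expiry = GRANTS[user]
    if time.time() > expiry:          # JIT: the grant has lapsed
        del GRANTS[user]
        return False
    return area == granted_area and action in actions

grant_access("cleaner", "shop_front", {"clean_windows"}, minutes=60)
print(may("cleaner", "shop_front", "clean_windows"))  # True, for one hour
print(may("cleaner", "safe", "open"))                 # False - never granted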

While the first two parts of cybersecurity concern keeping intruders out and minimizing damage, the third relates to a policy for undoing any harm that might have been caused. In this regard, cloud-based businesses have an advantage over traditional ones, because creating backups and spinning up new environments is a built-in part of cloud computing.

In a traditional approach, applications run on hardware, and it is advised to save and back up information regularly. Recall that this can be done by following the 3-2-1 recovery plan: keep three copies of everything, in two different formats, with one copy off-site.
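A minimal sketch of the 3-2-1 idea in Python; the file names and directories are illustrative stand-ins for a second drive and a remote location:

import shutil
import pathlib

working_copy = pathlib.Path("orders.db")    # copy 1: the live data
second_medium = pathlib.Path("usb_drive")   # stand-in for an external drive
offsite = pathlib.Path("offsite_sync")      # stand-in for a remote/cloud target

second_medium.mkdir(exist_ok=True)
offsite.mkdir(exist_ok=True)

working_copy.write_text("order #42: two scoops of vanilla")  # invented data
shutil.copy2(working_copy, second_medium / "orders.db")      # copy 2: second medium
shutil.copy2(working_copy, offsite / "orders.db")            # copy 3: kept off-site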

Cloud-based businesses, operating on the cloud, already have their workflows virtualized. In addition, the underlying architecture ensures that backups are created of everything that goes online, so if there is an issue with accessing a business's resources, the hosting company can provide an alternative. Finally, cloud providers follow the good practice of storing information in different geographical locations (see the reading Azure Storage redundancy for more information).

In this reading, you learned about some of the specific methods and policies that both traditional and cloud-based businesses can implement to create a security strategy that will prevent access, limit exposure, and mitigate fallout.