
SAN vs. NAS: Comparing two approaches to data storage

Learn the difference between storage area networks and network-attached storage, along with when to use each.


For a new sysadmin, storage can be one of the more confusing aspects of infrastructure. That confusion often comes from a lack of exposure to new or different technologies, frequently because storage is managed by another team. Without a specific interest in storage, an admin might find themselves with a number of misconceptions, questions, or concerns about how or why to implement different solutions.

When discussing enterprise storage, two concepts are at the core of most conversations: storage area networks (SAN) and network-attached storage (NAS). Both options provide storage to clients across a network, which offers the huge benefit of removing individual servers as single points of failure. Using one of these options also reduces the cost of individual clients, as there is no longer a need to have large amounts of local storage.

Storing all of this important data in a specially designed system provides a single, centralized place to manage and back up your data, build access controls, and apply security contexts, rather than having to build all of these processes into a fleet of machines. Scaling storage up to meet future needs is also much easier when storage is centrally located. You no longer need to spend as much energy tracking individual server disk usage. Instead, you manage the larger central pool and increase capacity by adding disks or shelves of disks as needed. These expansions can even be tiered, using drives with different performance capabilities to offer a more tailored experience to the different clients using this storage.

Regardless of the performance required, both a SAN and a NAS use the same basic building blocks for their underlying storage: drives. These drives can be anything from inexpensive consumer-grade 3.5-inch platter drives to 10K RPM SAS, and all the way up through solid-state and NVM Express (NVMe) devices. Speed, scale, and budget requirements determine the right design, but this is all commonly available hardware and nothing too exotic is required.

From afar, these two big-picture ideas can seem interchangeable, but they have many differences worth considering. A NAS, from an architectural standpoint, is usually a single server. It can be built as a virtual machine on a hypervisor, but is more often a physical machine itself, for scaling and performance reasons.

The NAS machine runs one or more file-sharing services exposed to an internal network. Those shares are presented over protocols like NFS or SMB (CIFS), which allow clients to attach to the NAS and read and write files as if each client had a large local filesystem. A network-available filesystem like this is a pretty common need in a business environment, so a NAS is an easy entry point into the world of shared storage.
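To make this concrete, attaching a Linux client to an NFS share on a NAS is usually a single mount command. In this sketch, the server name, export path, and mount point are hypothetical placeholders; adjust them for your environment:

# Mount an NFS export from the NAS (hypothetical server and path)
sudo mkdir -p /mnt/shared
sudo mount -t nfs nas01.example.com:/exports/shared /mnt/shared

# Make the mount persistent across reboots
echo 'nas01.example.com:/exports/shared  /mnt/shared  nfs  defaults,_netdev  0 0' | sudo tee -a /etc/fstab

From the client's perspective, /mnt/shared now behaves like any other local directory, even though the data lives on the NAS.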

A SAN, on the other hand, is rarely a single machine. The SAN philosophy is to build a storage system from a handful of independent parts. Even with the most inexpensive options, you usually have a single physical chassis containing a pair of controllers that can fail over to one another for maintenance (upgrades, etc.) or in case of failure.

SAN storage is based on the idea of providing block-level access for hosts that need control over their own storage details (filesystems, etc.), rather than a simple file share like NFS provides. A machine would normally use an internal disk as a block device and create filesystems on top of it. The SAN abstracts the physical disk away and provides that block device across a network. This access is almost always provided with either iSCSI or Fibre Channel (including Fibre Channel over Ethernet, or FCoE) as the communication protocol between the clients and the SAN. The client consuming that block device can then partition it and create filesystems on it as necessary, without having to worry about another team managing those details.
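As a rough illustration, attaching a Linux host to an iSCSI LUN and putting a filesystem on it might look like the following. The portal address and target IQN are hypothetical, and the commands assume the open-iscsi/iscsi-initiator-utils tools are installed:

# Discover the targets offered by the SAN's iSCSI portal (hypothetical address)
sudo iscsiadm -m discovery -t sendtargets -p 192.0.2.10

# Log in to the discovered target (hypothetical IQN)
sudo iscsiadm -m node -T iqn.2019-01.com.example:storage.lun1 -p 192.0.2.10 --login

# The LUN now appears as a local block device (for example, /dev/sdb);
# create a filesystem on it just as you would on an internal disk
lsblk
sudo mkfs.xfs /dev/sdb
sudo mkdir -p /mnt/block
sudo mount /dev/sdb /mnt/block

The key point is that the SAN hands the host a raw device; everything above the block layer (partitioning, filesystem choice, mount options) remains the host's responsibility.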

A good SAN use case is a VMware hypervisor using SAN storage to hold virtual machine data rather than keeping it on its own local drives. VMware's native filesystem (VMFS) requires block-level access to its storage, which means it cannot sit on top of a file share like NFS; if block storage is not available, an NFS datastore can be used instead of VMFS.
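For the NFS alternative mentioned above, mounting an export as a datastore on an ESXi host is a single esxcli call; the host, export path, and datastore name here are hypothetical:

# Mount an NFS export as an ESXi datastore (hypothetical names)
esxcli storage nfs add --host=nas01.example.com --share=/exports/vmstore --volume-name=nfs-datastore01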

While a NAS is most often a single machine, the components of a SAN can include dedicated switches (or VLANs on a shared network), controller nodes, disk shelves, tape backup units, or gateway devices. The added complexity provides better scalability, redundancy, and tiering for individual services running on the SAN. Because of this cluster-like approach to the hardware, it is usually easier to add resources to a SAN than to a NAS, in the form of new switches or disk shelves.

Adding resources to a NAS requires free space, connectivity, and power in a single machine, since the NAS model usually depends on a single chassis (or a single virtual machine). There are certainly ways to scale a NAS out to large sizes, but the SAN model is much better suited for growth and scale. It can even make sense for an administrator to build out a large SAN for many different groups to use, including backing storage for a virtual NAS if there is limited space to deploy a large physical one. This approach is possible because a SAN and a NAS effectively live at different layers of abstraction: the SAN provides the block storage that a NAS inherently needs, and the NAS manages the filesystem and network shares on top of that block storage.
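To make that layering concrete, a minimal sketch of a Linux "virtual NAS" backed by SAN storage is just a filesystem on the SAN-provided block device, exported over NFS. The device name, export path, and client subnet below are hypothetical:

# Create a filesystem on the SAN-provided block device and mount it (hypothetical device)
sudo mkfs.xfs /dev/mapper/san-lun1
sudo mkdir -p /exports/shared
sudo mount /dev/mapper/san-lun1 /exports/shared

# Export the mounted filesystem over NFS so clients see ordinary NAS storage
echo '/exports/shared  192.0.2.0/24(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports
sudo systemctl enable --now nfs-server
sudo exportfs -ra

The clients mounting this share neither know nor care that the underlying blocks live on a SAN.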

These technologies are not mutually exclusive, nor is one inherently better than the other. They both provide valuable storage capabilities for different needs. Many organizations end up running one or more of each type for different workloads, levels of redundancy, and availability. If you need a place for a group of users to store and share their files, a NAS is probably the right answer. Trying to use a NAS to provide shared storage for workloads that require block-level access, or that use a less common filesystem (like VMFS), introduces a lot more complexity; for those workloads, a SAN is a much better fit.



Steve Newsted

Lifelong nerd, storage enthusiast, automation fiend, infrastructure lover, security paranoid, open source advocate, telecom twerp, coach, teacher, and goofball.
