JFS (file system)

The Journaled File System (JFS) was published by IBM in 1990 for its own operating system AIX. The background was an extensive hardware-virtualization layer in the then newly introduced version 3 of AIX: a likewise new Logical Volume Manager (LVM) replaced the rigid access schemes for storage media, a new storage manager brought the virtualization of memory, i.e. the paging of main memory out to a (virtual) hard disk, and the PowerPC CPU family, which today is at the heart of, among other things, the pSeries, was introduced. JFS for AIX should not be confused with the Veritas File System, which on HP-UX is also called JFS.

The primary design goal of JFS was constant consistency of the file system: modifications to the file system are written transactionally and recorded in a journal. After a crash, a consistent state of the file system can therefore be restored very efficiently from the journal, starting from a consistency point of the transactions. Full access to the file system is thus regained very quickly. The focus here is on the availability of the file system as a resource, not on performance or the integrity of file contents: journaling covers only changes to the file-system structure, for example file entries in directories, not the actual file contents.
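The principle described above, writing each metadata change to the journal before applying it, so that committed transactions can be replayed after a crash, can be illustrated with a minimal sketch. This is not JFS code; the class and field names are invented for illustration:

```python
class MetadataJournal:
    """Minimal sketch of metadata journaling: every change is first
    recorded in an append-only journal, then applied to the metadata.
    After a crash, the metadata can be rebuilt by replaying the journal."""

    def __init__(self):
        self.log = []        # stands in for the on-disk journal
        self.directory = {}  # stands in for on-disk directory metadata

    def apply(self, op, name, target=None):
        # 1. record the intended change in the journal first
        self.log.append({"op": op, "name": name, "target": target})
        # 2. only then modify the metadata itself
        if op == "create":
            self.directory[name] = target
        elif op == "delete":
            self.directory.pop(name, None)

    def replay(self):
        # crash recovery: reconstruct the metadata from the journal alone
        recovered = {}
        for entry in self.log:
            if entry["op"] == "create":
                recovered[entry["name"]] = entry["target"]
            elif entry["op"] == "delete":
                recovered.pop(entry["name"], None)
        return recovered
```

Note that only directory entries pass through the journal here, mirroring the point above: file contents are not journaled, so recovery guarantees a consistent structure, not up-to-date data.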

The LVM contributes to the scalability of the file system: during normal operation, even under load, disks can simply be added to the configuration and to the volume group in order to grow the file system.
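On a current AIX system with JFS2, this online growth can be sketched with the standard LVM commands; the disk, volume-group, and mount-point names below (hdisk4, datavg, /data) are placeholders:

```shell
# Add a newly attached disk to the existing volume group (online)
extendvg datavg hdisk4

# Grow the mounted file system by 1 GB while it remains in use
chfs -a size=+1G /data
```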

For OS/2, also published by IBM, a new generation of JFS was developed and introduced in 2000. This JFS is a new implementation, since the "historical" JFS code is heavily tied to the pSeries architecture (OS/2 runs on x86 machines). The new JFS code was brought into AIX 5.1 as JFS2 and was released by IBM in 2002 under the GNU General Public License.

The main differences lie in the maximum supported file and file-system sizes.

In addition, optimizations for current server hardware were made; as a result, JFS2 performs slightly better than JFS.

JFS2 is supported on Linux, but the defragmentation tool has not yet been ported. Creating and deleting many small files can therefore fragment the file system (into pieces of a few kilobytes), which in particular slows write accesses down somewhat and causes a higher CPU load. Because of the extent-based allocation of file blocks (an extent, from the English word for expansion, is an address-length pair) and an intelligent allocation strategy, in which adjacent extents of the same file are merged when files are modified (the more fragmented the file system, the more likely this is), the degree of fragmentation nevertheless stays below a certain ratio. Many other file systems and database systems use a similar extent-based allocation of file blocks.
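The merging of adjacent extents of a file can be shown with a small sketch. This is purely illustrative and assumes extents are simple (address, length) pairs of blocks; JFS2's real allocator is far more involved:

```python
def merge_extents(extents):
    """Coalesce adjacent (address, length) extents of one file.

    Two extents are adjacent when the first ends exactly where the
    second begins; merging them halves the bookkeeping for that span.
    """
    merged = []
    for addr, length in sorted(extents):
        if merged and merged[-1][0] + merged[-1][1] == addr:
            prev_addr, prev_len = merged[-1]
            merged[-1] = (prev_addr, prev_len + length)  # coalesce
        else:
            merged.append((addr, length))
    return merged
```

For example, a file fragmented into extents at blocks 0, 4, and 16 keeps the non-adjacent piece separate: `merge_extents([(0, 4), (4, 4), (16, 8)])` yields `[(0, 8), (16, 8)]`.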