No Data Corruption & Data Integrity
Find out what data corruption is and how data integrity safeguards can protect the files in your website hosting account.
Data corruption is the process by which files become damaged as a result of a hardware or software failure, and it is one of the main problems that hosting companies face: the larger a hard disk is and the more information stored on it, the more likely it is for data to become corrupted. Various fail-safes exist, yet information often becomes corrupted silently, so neither the file system nor the administrators notice anything. The damaged file is then treated as a good one, and if the hard drive is part of a RAID, the file is copied to all the other drives. In principle this provides redundancy, but in practice it makes the damage worse. A corrupted file is partly or entirely unreadable: a text file can no longer be read, an image file may display a random jumble of colors if it opens at all, and an archive becomes impossible to unpack, so you risk losing your content. Although the most widely used server file systems include various integrity checks, they often fail to detect a problem early enough, or they need a long time to check all the files, during which the web hosting server will not be operational.
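The core of the problem described above is that a file can change on disk without the file system noticing. A minimal Python sketch illustrates the idea: if a checksum is recorded when a file is written, a single silently flipped byte becomes immediately detectable on the next read. (This is an illustrative demonstration, not the mechanism of any particular file system.)

```python
import hashlib
import os
import tempfile

def sha256_of(path):
    """Compute a SHA-256 digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Write a file and record its checksum, as a file system could at write time.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"important website content")
    path = f.name

original = sha256_of(path)

# Simulate silent corruption: flip one byte "on disk".
with open(path, "r+b") as f:
    f.seek(3)
    f.write(b"\x00")

# Without a stored checksum the file still opens normally and looks valid;
# with one, the mismatch exposes the damage immediately.
corrupted = sha256_of(path)
print(original != corrupted)  # True: the corruption is detectable
os.remove(path)
```

Note that without the stored checksum, nothing in this example would signal a problem: the file opens and reads normally, which is exactly why corruption spreads silently through a RAID.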
No Data Corruption & Data Integrity in Cloud Web Hosting
The integrity of the data that you upload to your new cloud web hosting account will be ensured by the ZFS file system that we use on our cloud platform. Like most hosting providers, we store content on multiple hard disk drives, and because the drives work in a RAID, the same information is synchronized between them at all times. When a file on one drive becomes corrupted for whatever reason, however, most file systems will copy the damaged version to the other drives, since they include no special checks against this. ZFS, in contrast, records a digital fingerprint, or checksum, for every file. If a file becomes corrupted, its checksum will no longer match the one ZFS has on record, so the bad copy is replaced with a good one from another drive. Since this happens in real time, there is no risk of any of your files ever becoming corrupted.
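The verify-and-repair behavior described above can be sketched in a few lines of Python. The class below is a toy model of a two-way mirror: each read is verified against a checksum recorded at write time, and any copy that fails verification is healed from a good one. The class name and structure are illustrative assumptions; real ZFS uses per-block checksums stored in parent metadata, not this simplified layout.

```python
import hashlib

def checksum(data: bytes) -> str:
    """Digital fingerprint of a block of data."""
    return hashlib.sha256(data).hexdigest()

class MirroredBlock:
    """Toy model of checksum-verified mirroring (not real ZFS internals)."""

    def __init__(self, data: bytes):
        self.copies = [data, data]            # two mirrored "drives"
        self.stored_checksum = checksum(data)  # recorded at write time

    def corrupt(self, drive: int):
        """Simulate silent bit rot on one drive."""
        self.copies[drive] = b"\x00" + self.copies[drive][1:]

    def read(self) -> bytes:
        # Find a copy whose checksum matches the stored record...
        for copy in self.copies:
            if checksum(copy) == self.stored_checksum:
                # ...and heal any copies that fail verification.
                for j in range(len(self.copies)):
                    if checksum(self.copies[j]) != self.stored_checksum:
                        self.copies[j] = copy
                return copy
        raise IOError("all copies corrupted")

block = MirroredBlock(b"site data")
block.corrupt(0)                        # drive 0 is silently damaged
data = block.read()                     # read verifies and returns good data
print(data == b"site data")             # True
print(block.copies[0] == b"site data")  # True: the bad copy was repaired
```

The key design point mirrored here is that the checksum is stored separately from the data it describes, so a damaged copy cannot vouch for itself; a checksumless file system has no way to tell which mirror copy is the good one.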