It may be worth reading, as it should give you a good idea of how the product works and is designed. If you want, Alex talks about how the "chunking" system works, especially in regards to larger chunks, here:

To be blunt, consumer providers are going to be more prone to this corruption than the likes of Amazon Web Services, Microsoft Azure, and other enterprise solutions.

Upload verification checks each chunk at the time it's uploaded, to make sure it uploaded correctly and matches the checksum at that point in time. This is more bandwidth intensive, but it means the data is always checked as it's put into the cloud. However, things can still happen and corrupt the data afterwards; the data verification/checksumming is meant to identify that, so that you don't get silently corrupted data. It is possible that we could add ECC (e.g., parity), but that would be more resource intensive and would mean larger chunks in general.

The checksum algorithm finds the most recent previous recovery point for which the EDB checksum check was successful. If there is none, or if there is a base image newer than the last successful recovery point, the checksum check is a full check, in which every single EDB page is read and verified.
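As a rough illustration of that recovery-point selection, here is a minimal Python sketch. The RecoveryPoint record and its field names are hypothetical, invented for illustration, not the product's actual data model:

```python
from dataclasses import dataclass


@dataclass
class RecoveryPoint:
    """Hypothetical recovery-point record; fields are illustrative only."""
    timestamp: float       # when this recovery point was taken
    is_base_image: bool    # is this a full/base image rather than an increment?
    edb_checksum_ok: bool  # did the EDB checksum check succeed for this point?


def needs_full_check(points: list[RecoveryPoint]) -> bool:
    """Return True when every single EDB page must be read and verified.

    Mirrors the logic described above: find the most recent recovery point
    whose EDB checksum check was successful; if there is none, or a base
    image is newer than it, the next checksum check is a full check.
    """
    last_ok = max(
        (p for p in points if p.edb_checksum_ok),
        key=lambda p: p.timestamp,
        default=None,
    )
    if last_ok is None:
        return True  # no previously verified point to build on
    # A base image newer than the last verified point also forces a full check.
    return any(p.is_base_image and p.timestamp > last_ok.timestamp for p in points)
```

When this returns False, presumably only the pages touched since that last verified recovery point need re-reading, which is what keeps the incremental checks cheap.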
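The upload verification described a couple of paragraphs up could look something like the following sketch. The storage client, its put/get methods, and the choice of SHA-256 are all assumptions for illustration, not the product's actual implementation:

```python
import hashlib


def upload_chunk_verified(storage, key: str, chunk: bytes) -> str:
    """Upload a chunk and confirm what landed in storage matches its checksum.

    `storage` is a hypothetical client exposing put(key, data) and get(key);
    real backends differ. The chunk is hashed before upload, then read back
    and re-hashed, so corruption in transit is caught immediately, at the
    cost of the extra bandwidth mentioned above.
    """
    expected = hashlib.sha256(chunk).hexdigest()
    storage.put(key, chunk)
    actual = hashlib.sha256(storage.get(key)).hexdigest()
    if actual != expected:
        raise IOError(
            f"chunk {key}: checksum mismatch after upload ({actual} != {expected})"
        )
    return expected  # kept alongside the chunk for later data verification
```

Keeping the returned checksum is what makes the later, ongoing data verification possible: the stored chunk can be re-hashed at any time and compared against it to catch silent corruption at rest.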