mapBlockIndex

This map contains all known blocks (where "block" means "block index"). Since a block index is created and stored in LevelDB as soon as a header is received, it's possible to have block indexes in the block map without having received the full block yet, let alone having stored it to disk. Just think of it as your blocks/ LevelDB in memory, with the key being the block hash. It is technically of type BlockMap, a typedef introduced for readability.

mapBlockIndex is initialized from the database in LoadBlockIndexGuts, which is run at Step 7 of startup. Thereafter, it's updated whenever new blocks are received over the network. mapBlockIndex only grows; it never shrinks. (Try searching main.cpp for mapBlockIndex.erase.) Observe also that the block index's LevelDB wrapper contains no functionality for erasing blocks from the database - its writing function (WriteBatchSync) only writes. By comparison, the chainstate wrapper's writing function (BatchWrite) both writes and erases.

mapBlocksUnlinked

Multimap containing "all pairs A->B, where A (or one of its ancestors) misses transactions, but B has transactions" (comment at main.cpp:125). The purpose of mapBlocksUnlinked is to quickly attach blocks we've already received to the blockchain when we receive a missing, intermediate block. The alternative would be to search the entire mapBlockIndex; however, it is more efficient to keep track of unlinked blocks in a separate data structure. Suppose, for example, that we have already received block C but are still missing its parent B; mapBlocksUnlinked records that C is waiting on B. Upon receiving block B, we can connect B as our tip, then connect C, and delete B's entries in mapBlocksUnlinked.

setBlockIndexCandidates

Set of block indexes that have more total work than our current tip. We call them "candidates" because we verify a block's proof-of-work when we receive its header, but before we receive the block itself: the header is a candidate for extending our chain, but we can't say for sure until we receive and verify the full block (and, if the candidate is more than one block away from our current tip, any intermediate blocks as well). Thus, they are "candidates" for extending our current blockchain (or for re-organizing from our current chain to the chain the candidate is on). (In the normal case, where the block extends our current tip, it is easy enough to see that it has more total work than our tip.)

Example 1: Let A be our tip; we then receive, in order, headers for B, C, and D, each with more total work than A. We verify the headers for B, C, and D, and they all look good. At this point setBlockIndexCandidates contains B, C, and D. Assume B has more work than C but less work than D. Now we receive the full block for B and it checks out. At this point, we extend chainActive with B as the new tip and remove B from setBlockIndexCandidates. We also remove C, because it has less work than B. Later we receive the full blocks for C and D: C is valid, but D has a bad transaction (double-spend, invalid signature, etc.). Store C to disk and keep it in mapBlockIndex - it's a known block. Discard D (do not store it to disk) and delete it from mapBlockIndex - it's a bad block.