Improving Tezos Storage: Gitlab branch for testers

This article is the third in our series of posts on improving Tezos storage. In our previous post, we announced the availability of a Docker image for beta testers wanting to test our storage and garbage collector. Today, we are glad to announce that we have rebased our code on the latest version of mainnet-staging and pushed a mainnet-staging-irontez branch to our public Gitlab repository:

https://gitlab.com/tzscan/tezos/commits/mainnet-staging-irontez
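
To try it out, you can fetch and build the branch. The following is a sketch, assuming the standard Tezos build process applies to this branch:

git clone https://gitlab.com/tzscan/tezos.git
cd tezos
git checkout mainnet-staging-irontez
make build-deps
eval $(opam env)
make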

The only difference from the previous post is a change in the names of the RPCs: /storage/context/gc will trigger a garbage collection (and terminate the node afterwards), and /storage/context/revert will migrate the database back to Irmin (and terminate the node afterwards).
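
For instance, with a running node, both RPCs can be triggered through tezos-client:

~/tezos/tezos-client rpc get /storage/context/gc
~/tezos/tezos-client rpc get /storage/context/revert

In both cases, remember that the node terminates once the call completes, so it has to be restarted afterwards.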

Enjoy, and send us feedback!

13 thoughts on “Improving Tezos Storage: Gitlab branch for testers”

  • I must be missing something. I compiled it and issued the required RPC trigger, /storage/context/gc, with the command

    ~/tezos/tezos-client rpc get /storage/context/gc

    But I just got an empty JSON response of {} and the size of the .tezos-node folder is unchanged. Any advice is much appreciated.

    Thank you!

    • By default, garbage collection will keep 9 cycles of blocks (~36,000 blocks). If you have fewer blocks than that, or if you are running Irontez on a former Tezos database and fewer than 9 cycles have been stored in Irontez yet, nothing will happen. If you want to force a garbage collection, you should tell Irontez to keep fewer blocks (but more than 100, the minimum that we enforce):

      ~/tezos/tezos-client rpc get '/storage/context/gc?keep=120'

      should trigger a GC if the node has been running on Irontez for at least 2 hours.
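
      The RPC can also be called directly over HTTP. A minimal sketch, assuming the node was started with its RPC server listening on 127.0.0.1:8732 (e.g. via --rpc-addr; the RPC server is disabled when no address is given):

      curl 'http://127.0.0.1:8732/storage/context/gc?keep=120'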

      • I think it did work. I was confused because the total disk space for the .tezos-node folder remained unchanged. Upon closer inspection, I see these contents and sizes:

        These are the contents of .tezos-node:

        4.0K config.json
        269M context
        75G context.backup
        4.0K identity.json
        4.0K lock
        1.4M peers.json
        5.4G store
        4.0K version.json

        Is it safe to delete context.backup if I do not plan to revert? (/storage/context/revert)
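
        For reference, a listing like the one above can be obtained with the standard du utility, assuming the default ~/.tezos-node data directory:

        du -sh ~/.tezos-node/*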

  • Have there been any issues reported with missed endorsements or missed bakings with this patch? We have been using this GC version (https://gitlab.com/tezos/tezos/merge_requests/720) for the past month, and ever since we switched we have been missing endorsements and bakings. The disk space savings are amazing, but if we keep missing endorsements/bakings, it’s going to hurt our reputation as a baking service.

      • I was using the 720MR and experiencing issues with baking/endorsing. I understand that 720MR and IronTez are different. I was simply asking if your version has had any reports of baking/endorsing troubles.

  • Is there no way to convert a “standard node” to IronTez? I was running the official tezos-node, and my datadir is around 90G. I compiled IronTez and started it up on that same dir, then ran `rpc get /storage/context/gc`, and nothing happened. I thought this was supposed to convert my datadir to IronTez? If not, what is the RPC to do this? Or must I start from scratch to be 100% IronTez?

    • There are two ways to get a full Irontez DB:
      * Start a node from scratch and wait for one or two days…
      * Use an existing node, run Irontez on it for 2 hours, and then call `rpc get '/storage/context/gc?keep=100'`, where 100 is the number of blocks to keep. After 2 hours, the last ~120 blocks (at roughly one block per minute) should already be stored in the Irontez DB, so the old DB will no longer be used. Note that Irontez will not delete the old DB, only rename it; you should remove the renamed file yourself to recover the disk space. A sketch of the whole sequence follows this list.
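
      A minimal walk-through of the second option, assuming the default ~/.tezos-node data directory and the context.backup name (mentioned in an earlier comment) for the renamed old DB:

      # start the Irontez node with its RPC server enabled
      ~/tezos/tezos-node run --rpc-addr 127.0.0.1:8732 &
      # wait about 2 hours, i.e. ~120 blocks at one block per minute
      ~/tezos/tezos-client rpc get '/storage/context/gc?keep=100'
      # the GC terminates the node; the old DB is renamed, not deleted,
      # so remove it to reclaim disk space (only if you will not revert)
      rm -rf ~/.tezos-node/context.backup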

  • There is a major problem for bakers who want to use the irontez branch. After garbage collection, the baker application will not start, because the baker makes an RPC call requesting the genesis block information, and that information is gone after the garbage collection. Please address this issue soon. Thank you!
