If you’re reading this, you’re probably interested in Monero, cryptocurrency pools, or the universal-pool implementation I developed over the course of several weeks to improve the performance and design of cryptocurrency pools, specifically for the coin Monero (XMR), so I may use these terms interchangeably going forward.
Cryptocurrency mining payouts tend to take a few basic forms: PPLNS (Pay Per Last N Shares), PPS (Pay Per Share), Prop (Proportional), and Solo (individual mining). There are a few more exotic ones out there, but these form the basis of mining within most cryptocurrencies. XMR’s main pool implementations (Clintar/Zone117x) are designed to be simple pool software and don’t include the nicer features that have drawn miners toward the larger pools.
Due to this, there has been a heavy centralization of hashing power as of Jan/Feb 2017. The main issue we’re solving, though, is the abundance of pool hoppers: people who rent or own large sums of hashing power and “bounce” between pools rapidly to better ensure their profitability, aiming to get onto blocks early and pop one quickly rather than staying for longer-term mining. This only affects the proportional payout system, as shares there never expire and are valid for the entire duration of the block.
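To make the hopper problem concrete, here’s a minimal sketch of a proportional payout in JS. The miner names and reward figures are hypothetical: every share in the round earns an equal slice of the block reward no matter when it was submitted, so a hopper who piles in early and leaves is paid as if it had stayed the whole round.

```javascript
// Proportional payout sketch: each miner's cut is (their shares / total shares)
// in the round, regardless of *when* the shares were submitted.
// Miner names and the reward value are hypothetical.
function proportionalPayout(shares, blockReward) {
  const total = Object.values(shares).reduce((a, b) => a + b, 0);
  const payout = {};
  for (const [miner, count] of Object.entries(shares)) {
    payout[miner] = (count / total) * blockReward;
  }
  return payout;
}

// A hopper that mined only the first minutes of the round is paid
// exactly as if it had stayed until the block was found.
const roundShares = { steadyMiner: 600, hopper: 400 };
console.log(proportionalPayout(roundShares, 10)); // { steadyMiner: 6, hopper: 4 }
```

PPLNS breaks this incentive by paying only over the last N shares before a block, which is why shares have to outlive a single round, as described below.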
There are any number of ways to solve this: blocking large chunks of hashes (NiceHash is the most common source), or implementing alternate payout systems such as PPLNS/PPS. As the current pool software wasn’t really designed for these sorts of changes, and other changes are coming down the line for pools, in particular a new stratum implementation for XMR, I opted to rebuild the pool source from scratch, using Zone117x’s stratum and CN implementations, to support modern mining techniques from the ground up.
The biggest change and requirement is that shares must now be tracked not just for the duration of the current block, but extending well past it into previous blocks, to ensure there’s enough data depth to perform block payouts without truncating early. There are other benefits to storing this data, such as being able to analyze mining performance over time.
The first attempt was to shunt everything into a TSDB (Time Series Database), letting it take over the painful parts: averaging, summing, and the other calculations that TSDB systems are usually very good at. The first implementation of the new pool structure was based around InfluxDB, a TSDB built for handling series of data dropped into it. This should have been a fairly sane, simple system. Unfortunately, a large number of issues cropped up where large table scans over indexed string fields crashed InfluxDB due to excessive memory usage by the daemon. It appears these searches are unoptimized, and the Go GC runs too slowly to properly keep up.
After some chatting with hyc, AKA Howard Chu of LMDB development, it was suggested that LMDB would be a workable solution. LMDB is the current data store for the Monero daemon, and while I’d never worked with it before, I’m familiar with the usage of KV stores, and once it was explained to me that there was a way to store multiple values on a single key, storing the data in a sane manner became very workable.
MySQL was chosen as the backend datastore for configuration data that needs to be used by multiple applications, as well as storage for a master block log used to determine payout depth for PPS. Along with this, it was chosen as a stable, consistent store for payment information. This may be moved to LMDB in the long term, but it was chosen during the initial InfluxDB implementation, and the test/production pools were already utilizing it well. During the initial InfluxDB implementation, MySQL also stored block data; this has since been moved to LMDB.
This was a fairly “simple” conversion at its core: the pool system was updated to store shares directly into InfluxDB by way of its HTTP API. This is the suggested way to use it, and it was proxied behind HTTPS via Caddy without any issues. When a block was found, the datastore was queried to provide, block by block, all shares in the block, ensuring the proper sum of payments would be made for the block.
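For illustration, writes over the HTTP API use InfluxDB’s line protocol, so each share reduces to a single text line before being POSTed. The measurement, tag, and field names below are hypothetical, not the pool’s actual schema:

```javascript
// Build one InfluxDB line-protocol entry for a share.
// Format: measurement,tagKey=tagVal fieldKey=fieldVal timestampInNs
// All names here are illustrative, not the real schema.
function shareToLine(share) {
  return (
    `shares,address=${share.address},payout=${share.payout} ` +
    `diff=${share.diff}i,count=1i ` +
    share.timestampNs
  );
}

const line = shareToLine({
  address: 'minerAddr1', // stand-in for a full XMR wallet address
  payout: 'pplns',
  diff: 120000,
  timestampNs: '1485907200000000000',
});
// line === 'shares,address=minerAddr1,payout=pplns diff=120000i,count=1i 1485907200000000000'
```

The address tag in particular is what the frontend later filters on, which is exactly the index-scan pattern that caused trouble below.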
The overall data flow to and from the database server was extremely heavy. There is a limited amount of compression that can be done before the data is sent, so data moving to and from the InfluxDB server would need to be compressed in transit, which, it appears, most clients do not properly support.
The major issue came from the frontend API performing large-scale queries for statistical information. This should not be an issue, but in order to properly account for some of the datasets, searches were run filtered by the miner’s address, which caused large index scans. Normally that shouldn’t be a problem either, but it appears this caused InfluxDB to consume vast amounts of memory. Apparently this is due to InfluxDB’s sharding design, and it’s a known issue that may eventually be resolved.
Overall, I’m not displeased with InfluxDB; this was simply the wrong use case for it. The more I’ve researched InfluxDB, the more I’ve realized it’s better suited to recording metrics that are not heavily accessed, or that can be accessed slowly through its internal tools. Because there is no streaming binary protocol, you must be ready to receive massive amounts of data over HTTP, even when pulling multiple millions of records, which is unsustainable in my opinion. If you can do wide-scale data reduction through the built-in systems, it’s much more practical.
Converting to LMDB was not extremely painful, but there were a number of pitfalls to consider, which I’ll touch on shortly. There were two major expansions to the project during this time, both semi-driven by the conversion to LMDB. The first was that the pool was retargeted from a pure XMR pool to one that accepts plugins for various cryptocurrencies and their hashing methods. The second was that the database code was centralized so it could be used more readily by various portions of the codebase. To ease the change between databases, the centralization started with the core database code for shares and blocks, the two highest pain points for a pool, as both are readily accessed and need to be stored for some time for verification and validation.
LMDB is, at its core, a key-value store. If you’re not familiar with these, it’s similar to a filing cabinet with file folders: each folder is a key, and the values are the documents stored within it. LMDB is a little unusual in that you can have multiple values stored against a single key, a fact that is extremely heavily abused to store share data. However, unlike the networked KV stores you may be familiar with, such as Memcached, Redis, and to a lesser extent Couchbase, LMDB is local to the system on which the DB is located. This leads to some interesting issues, as the pool was designed to allow for the usage of “leaf” or pool-only servers.
With the change to LMDB, in order to maintain network availability between the leaf nodes and the master node, a system was needed so the leaf nodes could send data back to the master LMDB server, which maintains all of the other daemons. This was fairly easily solved thanks to the next portion: I opted for websockets, as they’re a long-lived, trustable system as long as your proxy (CaddyServer) doesn’t time them out, which is a configuration issue rather than an LMDB one!
The suggested manner of storing strings in LMDB with NodeJS is to JSON.stringify everything and use string storage. However, as LMDB supports native binary data storage, we can use protobufs to both lock down the structure of the data and ensure it’s valid; at the same time, we save a bit of storage space, as protobufs are naturally denser than JSON. Therefore, blocks and shares are now converted to protobufs and stored in LMDB. Using LMDB’s transactions, we can update these in a sane manner without too much of an issue. There are some slight slowdowns encoding/decoding these formats, but with sufficient CPU, there’s been no issue.
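As a sketch of what one of these messages might look like (the field names here are my guesses, not the project’s actual schema), a protobuf definition pins down field types and numbering in a way a JSON.stringify’d string never can:

```proto
syntax = "proto3";

// Hypothetical share message; the real project's fields may differ.
message Share {
  string address   = 1; // miner wallet address
  uint64 shares    = 2; // share difficulty credited
  uint32 poolType  = 3; // PPLNS / PPS / prop / solo
  uint64 timestamp = 4; // unix time the share was accepted
}
```

The encoded bytes go straight into LMDB as binary values, and the fixed field numbering is what makes the stored data self-validating on the way back out.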
During the initial deployment of the pool, transactions were taking 10-15 seconds to commit. Due to LMDB’s synchronous nature and single-writer design, this caused a huge number of transactions to grind to an utter halt and block the system. Upon further review, the system was locking while committing to disk, and as the database lived on a rather slow spinning disk rather than an SSD, this had the expected result! The system was moved to more asynchronous calls, as is the nature of JS: enabling mapAsync, noSync, useWriteMap, and noMetaSync, then adding a per-thread call to env.sync to flush to disk, which could happen asynchronously at that point. This was timed at 60 seconds, as we have ~60 processes accessing this database in the production deployment.
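Concretely, that ends up looking like an environment opened with those flags plus a timed flush. This is a hedged sketch of the shape only (option names as I understand node-lmdb’s documentation; the path, map size, and interval are invented), not the pool’s actual code:

```javascript
const lmdb = require('node-lmdb');

const env = new lmdb.Env();
env.open({
  path: './pool-db',       // hypothetical path
  mapSize: 16 * 1024 ** 3, // address-space reservation, not disk usage
  maxDbs: 10,
  // Trade durability for speed: commits land in the OS page cache,
  // not on disk, so the single writer is no longer disk-bound.
  noSync: true,
  noMetaSync: true,
  mapAsync: true,
  useWritemap: true,
});

// With syncs disabled, flush on our own schedule instead:
// one env.sync call per process, roughly every 60 seconds.
setInterval(() => {
  env.sync((err) => {
    if (err) console.error('LMDB sync failed:', err);
  });
}, 60 * 1000);
```

The trade-off is explicit: a crash can lose anything written since the last sync, which is acceptable here because shares are continuously resubmitted and blocks are verifiable against the chain.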
Data layout was touched on slightly above with networking/storage and the usage of protobufs, but as this is a KV store, some thought had to be given to the layout in order to hit our main requirements:
- Shares must be indexed per-block
- Blocks must be indexed by their height ID
Earlier, it was mentioned that LMDB allows a single key to store multiple values. This is extremely heavily abused within the system: all shares belonging to a particular block height are stored under that height’s key, allowing the pool to perform a descending walk across the share “database”, reading every share and assigning payouts as appropriate.
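The walk itself is easy to sketch in plain JS. Here a Map keyed by height, with an array of shares per key, stands in for the LMDB table (mirroring the multiple-values-per-key layout); the pool’s real code reads via LMDB cursors, and the share shape and window size N below are hypothetical:

```javascript
// sharesByHeight: Map<height, Array<{address, count}>>, standing in for the
// LMDB database where each height key holds many share values.
// Walk heights descending from the found block, accumulating shares until
// we have N of them: the PPLNS window.
function collectPplnsShares(sharesByHeight, foundHeight, n) {
  const window = [];
  let collected = 0;
  const heights = [...sharesByHeight.keys()]
    .filter((h) => h <= foundHeight)
    .sort((a, b) => b - a); // descending walk
  for (const h of heights) {
    for (const share of sharesByHeight.get(h)) {
      window.push(share);
      collected += share.count;
      if (collected >= n) return window;
    }
  }
  return window; // fewer than N shares exist; pay out what we have
}

const db = new Map([
  [100, [{ address: 'A', count: 30 }]],
  [101, [{ address: 'B', count: 50 }, { address: 'A', count: 40 }]],
  [102, [{ address: 'C', count: 20 }]],
]);
// Block found at height 102, window of N = 100 shares:
const win = collectPplnsShares(db, 102, 100);
```

Because the window reaches back into heights 101 and 100, shares from blocks before the current one still count, which is exactly why share storage had to extend past a single block.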
The single largest issue I ran into was an overall lack of documentation. The module used, node-lmdb, assumes you’re already aware of a lot of LMDB’s internals, and while the examples were useful, some things were simply undocumented. Reading through the lmdb.tech documentation answered a lot of these questions. Using a system built on top of LMDB would be much simpler, as such systems likely provide standardized interfaces, but direct access is, of course, much faster, and when a pool is involved, speed is of the essence.
Now that it’s deployed and running, I’m quite pleased with everything. There are some annoying bugs I only caught in production, because the node-lmdb library will hard-fault in some cases, particularly around recursive functions and the like, without reporting that this is the cause. These have largely been patched out at this time, so they’re not a concern. Overall, I’m very pleased with the changeover and will likely use LMDB again for systems where it makes sense. As an embedded KV store, I like it more than just about anything else, and being a KV store means it’s functionally usable anywhere I would use a local Memcached or Redis instance. Some care has to be given to atomic updates of the data contained within, but this has largely not been an issue thanks to how fast zero-copy string ops have been so far, where we use it for caching.