{"id":36932,"date":"2019-08-16T00:21:08","date_gmt":"2019-08-16T00:21:08","guid":{"rendered":"https:\/\/www2019.dash.org\/?p=36932"},"modified":"2021-09-18T11:43:16","modified_gmt":"2021-09-18T11:43:16","slug":"overview-of-mainnet-stress","status":"publish","type":"post","link":"https:\/\/wp.dash.org\/news\/overview-of-mainnet-stress\/","title":{"rendered":"Overview of Mainnet Stress Test and Dash Core v0.14.0.3 Release"},"content":{"rendered":"
Last week there was an abnormally high load on the Dash network, consisting of around one million 1-input, 1-output transactions with a fee just above 1 duff per byte. After contacting all likely community members who could have executed this and finding that none of them were involved, it appears the artificial load was either a stress test by someone outside of the Dash developer community or an attempted attack.<\/p>\n
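To give a sense of scale, here is a back-of-the-envelope sketch of the fee such a transaction would pay. The ~192-byte size for a 1-input, 1-output transaction and the 1.1 duff/byte rate are assumptions for illustration, not figures from the post:

```python
# Hypothetical sketch: estimate the fee paid by one spam-pattern
# transaction. Sizes and the exact fee rate are assumptions.

DUFFS_PER_DASH = 100_000_000  # 1 DASH = 1e8 duffs (analogous to satoshis)

# A typical 1-input, 1-output P2PKH transaction is roughly 192 bytes.
TYPICAL_SPAM_TX_BYTES = 192

def fee_for_rate(tx_bytes: int, duffs_per_byte: float) -> int:
    """Fee in duffs for a transaction of the given size at the given rate."""
    return int(tx_bytes * duffs_per_byte)

fee = fee_for_rate(TYPICAL_SPAM_TX_BYTES, 1.1)  # just above 1 duff/byte
print(fee)                                # 211 duffs per transaction
print(fee * 1_000_000 / DUFFS_PER_DASH)   # ~2.11 DASH for a million of them
```

Under these assumptions, flooding the network with around a million such transactions would cost only a couple of DASH in fees, which is why the minimal fee rate is a notable part of the pattern.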
Over the course of 2 days, there were two large spikes in transactions:<\/p>\n
At around 3AM on the 7th of August, the Dash mempool spiked to around 17 MB and stayed there for around 4 hours. Later, at around 11:30AM the same day, it spiked to 46 MB before being cleared in about 3 hours. Analysis of the Dash blockchain, masternode (MN) logs, and PoSe scores surfaced several interesting findings.<\/p>\n
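For rough intuition, the mempool sizes can be converted into approximate transaction counts. The ~200-byte average size for a 1-input, 1-output transaction is an assumption, not an official figure:

```python
# Back-of-the-envelope sketch: how many spam-pattern transactions each
# mempool spike could hold, assuming ~200 bytes per transaction.

AVG_TX_BYTES = 200  # assumed average size of a 1-input, 1-output transaction

def approx_tx_count(mempool_mb: float) -> int:
    """Approximate number of transactions in a mempool of the given size."""
    return int(mempool_mb * 1_000_000 / AVG_TX_BYTES)

print(approx_tx_count(46))  # ~230,000 transactions at the 46 MB peak
print(approx_tx_count(17))  # ~85,000 transactions at the 17 MB spike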
Some mining pools were not filling their blocks; this is likely due to custom block-template implementations. Pools will change this naturally once usage approaches 1 MB, in an effort to capture more fees and be more profitable.<\/p>\n
We are in active communication with these pools to determine how they can continue including legitimate transactions while excluding what appears to be spam (1 input, 1 output, very low fee).<\/p>\n
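The kind of filtering described above can be sketched as a block-template selection policy that deprioritizes the spam pattern. This is a minimal illustration under assumed names and a hypothetical 2 duff/byte threshold, not Dash Core or any pool's actual code:

```python
# Minimal sketch of a pool-side block-template filter: prefer normal
# transactions, and admit 1-input/1-output, near-minimum-fee transactions
# only if space remains. All names and thresholds are hypothetical.

from dataclasses import dataclass

@dataclass
class Tx:
    n_inputs: int
    n_outputs: int
    fee_duffs: int
    size_bytes: int

    @property
    def fee_rate(self) -> float:
        return self.fee_duffs / self.size_bytes

SPAM_RATE_CEILING = 2.0  # duffs/byte; assumed cutoff for "very low fee"

def looks_like_spam(tx: Tx) -> bool:
    """Match the spam pattern: 1 input, 1 output, barely above minimum fee."""
    return tx.n_inputs == 1 and tx.n_outputs == 1 and tx.fee_rate < SPAM_RATE_CEILING

def select_for_block(mempool: list[Tx], max_bytes: int) -> list[Tx]:
    """Fill the block with non-spam first (highest fee rate first),
    then spam-pattern transactions only if room remains."""
    ordered = sorted(mempool, key=lambda t: (looks_like_spam(t), -t.fee_rate))
    chosen, used = [], 0
    for tx in ordered:
        if used + tx.size_bytes <= max_bytes:
            chosen.append(tx)
            used += tx.size_bytes
    return chosen
```

For example, with a 374-byte 2-input/2-output transaction paying 5000 duffs and a 192-byte spam-pattern transaction paying 211 duffs, a 400-byte budget selects only the former; the design keeps spam relayable (it still pays the minimum fee) while preventing it from crowding out real usage.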
The growth in disk usage was a result of having to store an InstantSend (IS) Lock for every transaction, which left nodes running on minimal storage (around 15\u201320 GB) without enough space to hold all of the IS Locks. Previously, locks were removed after around 7 days. As of 0.14.0.3 (PR#3048), IS Locks are removed as soon as they are confirmed via a ChainLock, which should reduce the size of the `llmq` directory significantly over time.<\/p>\n<h2>4. Some Masternodes banned other Masternodes under high load when on the brink of a new quorum becoming active<\/h2>\n