Bitcoin-SV: are terabyte blocks feasible?

Block propagation time and block processing time (to prepare & validate) are crucial factors. Every node (miner) has an economic incentive to propagate its block as quickly as possible, so that other nodes are more likely to build on its fork. At the same time, packing a very large number of transactions into a block increases its propagation time, so a node has to balance the number of transactions it includes (the block size) against the transaction fees plus block reward to get the best outcome.
But BSV's scaling approach expects logical blocks at gigabyte/terabyte sizes in the future, and the problem outlined above can be a huge obstacle to getting there. It will only be exacerbated as block sizes grow, to the point where rational, economically motivated nodes begin to ration the number of transactions in a block.
I believe block propagation time currently scales at roughly O(n), where n is the number of transactions, as there is no Graphene-style block compression yet. Block processing time is also roughly O(n), as most of the processing is serial.
Compact Blocks (BIP 152), as currently implemented in Bitcoin SV, already provides a basic level of block compression by announcing short transaction IDs instead of retransmitting full transactions that peers already hold in their mempools.
Typically a compact block is about 10-15% of the full uncompressed legacy block, which reduces the effective propagation time. While this is probably good enough for Bitcoin Core, since they are not seeking to increase the block size, it's certainly not enough for Bitcoin SV.
Graphene, which uses Bloom filters and Invertible Bloom Lookup Tables (IBLTs), seems to provide an efficient solution to the transaction set reconciliation problem, and it offers further compression beyond Compact Blocks: a Graphene block is ~10% of the size of a typical compact block (per the authors' empirical tests).
With the above information and certain assumptions we can quickly calculate the demands of a terabyte node and its feasibility with current hardware & bandwidth limitations.
Assumptions:
1 TB block ==> 100-150 GB Compact block ==> 10 - 15 GB Graphene block
Let's conservatively go with the low end, a 10 GB Graphene-compressed block, over a 10 Gb/s link: 10 GB = 80 Gb, and 80 Gb / 10 Gb/s = 8 seconds.
So we still need 8 full seconds to propagate this block one hop to the next immediate peer. Also note that we have conveniently ignored the massive parallelization that would be needed for transaction and block processing, which would likely involve techniques like mempool and UTXO set sharding in the node architecture.
But the point to take home is that 8 seconds is exorbitant: under the outlined assumptions we need a better, workable compression algorithm irrespective of any other architectural improvements.
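To make the arithmetic easy to poke at, here is the same back-of-the-envelope calculation as a tiny Python snippet; the compression ratios, block size, and 10 Gb/s link speed are the assumptions stated above, not measured values:

```python
# One-hop propagation time for a compressed block, under the stated assumptions.
def propagation_seconds(block_bytes: float, compression_ratio: float, link_gbps: float) -> float:
    """Time to push block_bytes * compression_ratio over a link of link_gbps."""
    compressed_bits = block_bytes * compression_ratio * 8
    return compressed_bits / (link_gbps * 1e9)

TB = 1e12  # 1 terabyte block
# ~10% for Compact Blocks, then ~10% of that for Graphene (low end of the ranges above)
print(propagation_seconds(TB, 0.10 * 0.10, 10))   # -> 8.0 seconds per hop
# high end: 15% Compact-Block ratio
print(propagation_seconds(TB, 0.15 * 0.10, 10))   # -> 12.0 seconds per hop
```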
The above led me to begin work on an "ultra compression" algorithm: a stateful, highly parallelizable protocol (it places high memory & CPU demands) that fits the goal of a horizontally scalable architecture built on affordable consumer-grade h/w. The outline of the algorithm looks promising and seems to compress the block by a factor of thousands if not more, especially for the block publisher; and although the block size grows as we head farther from the publishing node, it's still reasonable IMO.
Now, before I go further down this rabbit hole, I wanted you guys to poke holes in my assumptions, requirements & calculation outline. Subsequently I will publish a (semi-formal) paper detailing the ultra compression algorithm and how it fits with the overall node architecture per the ideas expressed above.
I would appreciate it if someone could point me to (or educate me on) alternative practical solutions that have already been vetted and are in the dev pipeline.
Note:
submitted by stoichammer to bitcoincashSV [link] [comments]

The Origins of the Blocksize Debate

On May 4, 2015, Gavin Andresen wrote on his blog:
I was planning to submit a pull request to the 0.11 release of Bitcoin Core that will allow miners to create blocks bigger than one megabyte, starting a little less than a year from now. But this process of peer review turned up a technical issue that needs to get addressed, and I don’t think it can be fixed in time for the first 0.11 release.
I will be writing a series of blog posts, each addressing one argument against raising the maximum block size, or against scheduling a raise right now... please send me an email ([email protected]) if I am missing any arguments
In other words, Gavin proposed a hard fork via a series of blog posts, bypassing all developer communication channels altogether and asking for personal, private emails from anyone interested in discussing the proposal further.
On May 5 (1 day after Gavin submitted his first blog post), Mike Hearn published The capacity cliff on his Medium page. 2 days later, he posted Crash landing. In these posts, he argued:
A common argument for letting Bitcoin blocks fill up is that the outcome won’t be so bad: just a market for fees... this is wrong. I don’t believe fees will become high and stable if Bitcoin runs out of capacity. Instead, I believe Bitcoin will crash.
...a permanent backlog would start to build up... as the backlog grows, nodes will start running out of memory and dying... as Core will accept any transaction that’s valid without any limit a node crash is eventually inevitable.
He also, in the latter article, explained that he disagreed with Satoshi's vision for how Bitcoin would mature[1][2]:
Neither me nor Gavin believe a fee market will work as a substitute for the inflation subsidy.
Gavin continued to publish the series of blog posts he had announced while Hearn made these predictions. [1][2][3][4][5][6][7]
Matt Corallo brought Gavin's proposal up on the bitcoin-dev mailing list after a few days. He wrote:
Recently there has been a flurry of posts by Gavin at http://gavinandresen.svbtle.com/ which advocate strongly for increasing the maximum block size. However, there hasnt been any discussion on this mailing list in several years as far as I can tell...
So, at the risk of starting a flamewar, I'll provide a little bait to get some responses and hope the discussion opens up into an honest comparison of the tradeoffs here. Certainly a consensus in this kind of technical community should be a basic requirement for any serious commitment to blocksize increase.
Personally, I'm rather strongly against any commitment to a block size increase in the near future. Long-term incentive compatibility requires that there be some fee pressure, and that blocks be relatively consistently full or very nearly full. What we see today are transactions enjoying next-block confirmations with nearly zero pressure to include any fee at all (though many do because it makes wallet code simpler).
This allows the well-funded Bitcoin ecosystem to continue building systems which rely on transactions moving quickly into blocks while pretending these systems scale. Thus, instead of working on technologies which bring Bitcoin's trustlessness to systems which scale beyond a blockchain's necessarily slow and (compared to updating numbers in a database) expensive settlement, the ecosystem as a whole continues to focus on building centralized platforms and advocate for changes to Bitcoin which allow them to maintain the status quo
Shortly thereafter, Corallo explained further:
The point of the hard block size limit is exactly because giving miners free rule to do anything they like with their blocks would allow them to do any number of crazy attacks. The incentives for miners to pick block sizes are no where near compatible with what allows the network to continue to run in a decentralized manner.
Tier Nolan considered possible extensions and modifications that might improve Gavin's proposal and argued that soft caps could be used to mitigate the dangers of a blocksize increase. Tom Harding voiced support for Gavin's proposal.
Peter Todd mentioned that a limited blocksize provides the benefit of protecting against the "perverse incentives" behind potential block withholding attacks.
Slush didn't have a strong opinion one way or the other, and neither did Eric Lombrozo, though Eric was interested in developing hard-fork best practices and wanted to:
explore all the complexities involved with deployment of hard forks. Let’s not just do a one-off ad-hoc thing.
Matt Whitlock voiced his opinion:
I'm not so much opposed to a block size increase as I am opposed to a hard fork... I strongly fear that the hard fork itself will become an excuse to change other aspects of the system in ways that will have unintended and possibly disastrous consequences.
Bryan Bishop strongly opposed Gavin's proposal, and offered a philosophical perspective on the matter:
there has been significant public discussion... about why increasing the max block size is kicking the can down the road while possibly compromising blockchain security. There were many excellent objections that were raised that, sadly, I see are not referenced at all in the recent media blitz. Frankly I can't help but feel that if contributions, like those from #bitcoin-wizards, have been ignored in lieu of technical analysis, and the absence of discussion on this mailing list, that I feel perhaps there are other subtle and extremely important technical details that are completely absent from this--and other-- proposals.
Secured decentralization is the most important and most interesting property of bitcoin. Everything else is rather trivial and could be achieved millions of times more efficiently with conventional technology. Our technical work should be informed by the technical nature of the system we have constructed.
There's no doubt in my mind that bitcoin will always see the most extreme campaigns and the most extreme misunderstandings... for development purposes we must hold ourselves to extremely high standards before proposing changes, especially to the public, that have the potential to be unsafe and economically unsafe.
There are many potential technical solutions for aggregating millions (trillions?) of transactions into tiny bundles. As a small proof-of-concept, imagine two parties sending transactions back and forth 100 million times. Instead of recording every transaction, you could record the start state and the end state, and end up with two transactions or less. That's a 100 million fold, without modifying max block size and without potentially compromising secured decentralization.
The MIT group should listen up and get to work figuring out how to measure decentralization and its security.. Getting this measurement right would be really beneficial because we would have a more academic and technical understanding to work with.
Gregory Maxwell echoed and extended that perspective:
When Bitcoin is changed fundamentally, via a hard fork, to have different properties, the change can create winners or losers...
There are non-trivial number of people who hold extremes on any of these general belief patterns; Even among the core developers there is not a consensus on Bitcoin's optimal role in society and the commercial marketplace.
there is a at least a two fold concern on this particular ("Long term Mining incentives") front:
One is that the long-held argument is that security of the Bitcoin system in the long term depends on fee income funding autonomous, anonymous, decentralized miners profitably applying enough hash-power to make reorganizations infeasible.
For fees to achieve this purpose, there seemingly must be an effective scarcity of capacity.
The second is that when subsidy has fallen well below fees, the incentive to move the blockchain forward goes away. An optimal rational miner would be best off forking off the current best block in order to capture its fees, rather than moving the blockchain forward...
tools like the Lightning network proposal could well allow us to hit a greater spectrum of demands at once--including secure zero-confirmation (something that larger blocksizes reduce if anything), which is important for many applications. With the right technology I believe we can have our cake and eat it too, but there needs to be a reason to build it; the security and decentralization level of Bitcoin imposes a hard upper limit on anything that can be based on it.
Another key point here is that the small bumps in blocksize which wouldn't clearly knock the system into a largely centralized mode--small constants--are small enough that they don't quantitatively change the operation of the system; they don't open up new applications that aren't possible today
the procedure I'd prefer would be something like this: if there is a standing backlog, we-the-community of users look to indicators to gauge if the network is losing decentralization and then double the hard limit with proper controls to allow smooth adjustment without fees going to zero (see the past proposals for automatic block size controls that let miners increase up to a hard maximum over the median if they mine at quadratically harder difficulty), and we don't increase if it appears it would be at a substantial increase in centralization risk. Hardfork changes should only be made if they're almost completely uncontroversial--where virtually everyone can look at the available data and say "yea, that isn't undermining my property rights or future use of Bitcoin; it's no big deal". Unfortunately, every indicator I can think of except fee totals has been going in the wrong direction almost monotonically along with the blockchain size increase since 2012 when we started hitting full blocks and responded by increasing the default soft target. This is frustrating
many people--myself included--have been working feverishly hard behind the scenes on Bitcoin Core to increase the scalability. This work isn't small-potatoes boring software engineering stuff; I mean even my personal contributions include things like inventing a wholly new generic algebraic optimization applicable to all EC signature schemes that increases performance by 4%, and that is before getting into the R&D stuff that hasn't really borne fruit yet, like fraud proofs. Today Bitcoin Core is easily >100 times faster to synchronize and relay than when I first got involved on the same hardware, but these improvements have been swallowed by the growth. The ironic thing is that our frantic efforts to keep ahead and not lose decentralization have both not been enough (by the best measures, full node usage is the lowest its been since 2011 even though the user base is huge now) and yet also so much that people could seriously talk about increasing the block size to something gigantic like 20MB. This sounds less reasonable when you realize that even at 1MB we'd likely have a smoking hole in the ground if not for existing enormous efforts to make scaling not come at a loss of decentralization.
Peter Todd also summarized some academic findings on the subject:
In short, without either a fixed blocksize or fixed fee per transaction Bitcoin will will not survive as there is no viable way to pay for PoW security. The latter option - fixed fee per transaction - is non-trivial to implement in a way that's actually meaningful - it's easy to give miners "kickbacks" - leaving us with a fixed blocksize.
Even a relatively small increase to 20MB will greatly reduce the number of people who can participate fully in Bitcoin, creating an environment where the next increase requires the consent of an even smaller portion of the Bitcoin ecosystem. Where does that stop? What's the proposed mechanism that'll create an incentive and social consensus to not just 'kick the can down the road'(3) and further centralize but actually scale up Bitcoin the hard way?
Some developers (e.g. Aaron Voisine) voiced support for Gavin's proposal, repeating Mike Hearn's "crash landing" arguments.
Pieter Wuille said:
I am - in general - in favor of increasing the size blocks...
Controversial hard forks. I hope the mailing list here today already proves it is a controversial issue. Independent of personal opinions pro or against, I don't think we can do a hard fork that is controversial in nature. Either the result is effectively a fork, and pre-existing coins can be spent once on both sides (effectively failing Bitcoin's primary purpose), or the result is one side forced to upgrade to something they dislike - effectively giving a power to developers they should never have. Quoting someone: "I did not sign up to be part of a central banker's committee".
The reason for increasing is "need". If "we need more space in blocks" is the reason to do an upgrade, it won't stop after 20 MB. There is nothing fundamental possible with 20 MB blocks that isn't with 1 MB blocks.
Misrepresentation of the trade-offs. You can argue all you want that none of the effects of larger blocks are particularly damaging, so everything is fine. They will damage something (see below for details), and we should analyze these effects, and be honest about them, and present them as a trade-off made we choose to make to scale the system better. If you just ask people if they want more transactions, of course you'll hear yes. If you ask people if they want to pay less taxes, I'm sure the vast majority will agree as well.
Miner centralization. There is currently, as far as I know, no technology that can relay and validate 20 MB blocks across the planet, in a manner fast enough to avoid very significant costs to mining. There is work in progress on this (including Gavin's IBLT-based relay, or Greg's block network coding), but I don't think we should be basing the future of the economics of the system on undemonstrated ideas. Without those (or even with), the result may be that miners self-limit the size of their blocks to propagate faster, but if this happens, larger, better-connected, and more centrally-located groups of miners gain a competitive advantage by being able to produce larger blocks. I would like to point out that there is nothing evil about this - a simple feedback to determine an optimal block size for an individual miner will result in larger blocks for better connected hash power. If we do not want miners to have this ability, "we" (as in: those using full nodes) should demand limitations that prevent it. One such limitation is a block size limit (whatever it is).
Ability to use a full node.
Skewed incentives for improvements... without actual pressure to work on these, I doubt much will change. Increasing the size of blocks now will simply make it cheap enough to continue business as usual for a while - while forcing a massive cost increase (and not just a monetary one) on the entire ecosystem.
Fees and long-term incentives.
I don't think 1 MB is optimal. Block size is a compromise between scalability of transactions and verifiability of the system. A system with 10 transactions per day that is verifiable by a pocket calculator is not useful, as it would only serve a few large bank's settlements. A system which can deal with every coffee bought on the planet, but requires a Google-scale data center to verify is also not useful, as it would be trivially out-competed by a VISA-like design. The usefulness needs in a balance, and there is no optimal choice for everyone. We can choose where that balance lies, but we must accept that this is done as a trade-off, and that that trade-off will have costs such as hardware costs, decreasing anonymity, less independence, smaller target audience for people able to fully validate, ...
Choose wisely.
Mike Hearn responded:
this list is not a good place for making progress or reaching decisions.
if Bitcoin continues on its current growth trends it will run out of capacity, almost certainly by some time next year. What we need to see right now is leadership and a plan, that fits in the available time window.
I no longer believe this community can reach consensus on anything protocol related.
When the money supply eventually dwindles I doubt it will be fee pressure that funds mining
What I don't see from you yet is a specific and credible plan that fits within the next 12 months and which allows Bitcoin to keep growing.
Peter Todd then pointed out that, contrary to Mike's claims, developer consensus had been achieved within Core plenty of times recently. Btc-drak asked Mike to "explain where the 12 months timeframe comes from?"
Jorge Timón wrote an incredibly prescient reply to Mike:
We've successfully reached consensus for several softfork proposals already. I agree with others that hardfork need to be uncontroversial and there should be consensus about them. If you have other ideas for the criteria for hardfork deployment all I'm ears. I just hope that by "What we need to see right now is leadership" you don't mean something like "when Gaving and Mike agree it's enough to deploy a hardfork" when you go from vague to concrete.
Oh, so your answer to "bitcoin will eventually need to live on fees and we would like to know more about how it will look like then" it's "no bitcoin long term it's broken long term but that's far away in the future so let's just worry about the present". I agree that it's hard to predict that future, but having some competition for block space would actually help us get more data on a similar situation to be able to predict that future better. What you want to avoid at all cost (the block size actually being used), I see as the best opportunity we have to look into the future.
this is my plan: we wait 12 months... and start having full blocks and people having to wait 2 blocks for their transactions to be confirmed some times. That would be the beginning of a true "fee market", something that Gavin used to say was his #1 priority not so long ago (which seems contradictory with his current efforts to avoid that from happening). Having a true fee market seems clearly an advantage. What are supposedly disastrous negative parts of this plan that make an alternative plan (ie: increasing the block size) so necessary and obvious. I think the advocates of the size increase are failing to explain the disadvantages of maintaining the current size. It feels like the explanation are missing because it should be somehow obvious how the sky will burn if we don't increase the block size soon. But, well, it is not obvious to me, so please elaborate on why having a fee market (instead of just an price estimator for a market that doesn't even really exist) would be a disaster.
Some suspected Gavin/Mike were trying to rush the hard fork for personal reasons.
Mike Hearn's response was to demand a "leader" who could unilaterally steer the Bitcoin project and make decisions unchecked:
No. What I meant is that someone (theoretically Wladimir) needs to make a clear decision. If that decision is "Bitcoin Core will wait and watch the fireworks when blocks get full", that would be showing leadership
I will write more on the topic of what will happen if we hit the block size limit... I don't believe we will get any useful data out of such an event. I've seen distributed systems run out of capacity before. What will happen instead is technological failure followed by rapid user abandonment...
we need to hear something like that from Wladimir, or whoever has the final say around here.
Jorge Timón responded:
it is true that "universally uncontroversial" (which is what I think the requirement should be for hard forks) is a vague qualifier that's not formally defined anywhere. I guess we should only consider rational arguments. You cannot just nack something without further explanation. If his explanation was "I will change my mind after we increase block size", I guess the community should say "then we will just ignore your nack because it makes no sense". In the same way, when people use fallacies (purposely or not) we must expose that and say "this fallacy doesn't count as an argument". But yeah, it would probably be good to define better what constitutes a "sensible objection" or something. That doesn't seem simple though.
it seems that some people would like to see that happening before the subsidies are low (not necessarily null), while other people are fine waiting for that but don't want to ever be close to the scale limits anytime soon. I would also like to know for how long we need to prioritize short term adoption in this way. As others have said, if the answer is "forever, adoption is always the most important thing" then we will end up with an improved version of Visa. But yeah, this is progress, I'll wait for your more detailed description of the tragedies that will follow hitting the block limits, assuming for now that it will happen in 12 months. My previous answer to the nervous "we will hit the block limits in 12 months if we don't do anything" was "not sure about 12 months, but whatever, great, I'm waiting for that to observe how fees get affected". But it should have been a question "what's wrong with hitting the block limits in 12 months?"
Mike Hearn again asserted the need for a leader:
There must be a single decision maker for any given codebase.
Bryan Bishop attempted to explain why this did not make sense with git architecture.
Finally, Gavin announced his intent to merge the patch into Bitcoin XT to bypass the peer review he had received on the bitcoin-dev mailing list.
submitted by sound8bits to Bitcoin [link] [comments]

Technical discussion of Gavin's O(1) block propagation proposal

I think there isn't wide appreciation of how important Gavin's proposal is for the scalability of Bitcoin. It's the real deal, and will get us out of this sort of beta mode we've been in of a few transactions per second globally. I spent a few hours reviewing the papers referenced at the bottom of his excellent write-up and think I get it now.
If you already get it, then hang around and answer questions from me and others. If you don't get it yet, start by very carefully reading https://gist.github.com/gavinandresen/e20c3b5a1d4b97f79ac2.
The big idea is twofold: fix the miner's incentives to align better with users wanting transactions to clear, and eliminate the sending of redundant data in the newblock message when a block is solved to save bandwidth.
I'll use (arbitrarily) a goal of 1 million tx per block, which works out to roughly 1,700 TPS at one block every 10 minutes. This seems pretty achievable, without a lot of uncertainty. Really! Read on.
Today, a miner really wants to propagate a solved block as soon as possible to not jeopardize their 25 BTC reward. It's not the cpu cost for handling the transactions on the miner's side that's the problem, it's the sending of a larger newblock message around the network that just might cause her block to lose the race against another solution to the block.
So aside from transactions with fees of more than 0.0008 BTC that can make up for this penalty (https://gist.github.com/gavinandresen/5044482), or simply the goodwill of benevolent pools to process transactions, there is today an incentive for miners not to include transactions in a block. The problem is BTC price has grown so high so fast that 0.0008 BTC is about 50 cents, which is high for day-to-day transactions (and very high for third world transactions).
The whole idea centers around an old observation that since the network nodes (including miners) have already received transactions by the normal second-by-second operation of the p2p network, the newblock announcement message shouldn't have to repeat the transaction details. Instead, it can just tell people, hey, I approve these particular transactions called XYZ, and you can check me by taking your copy of those same transactions that you already have and running the hash to check that my header is correctly solved. Proof of work.
A basic way to do this would be to send around a Bloom filter in the newblock message. A receiving node would check all the transactions it has, see which of them are in this solved block, and mark them out of its temporary memory pool. Using a Bloom filter calculator you can see that about 2MB (16 million bits) gives a false-positive rate on the order of 10^-4 for 1 million entries, which is enough to almost always be able to tell if a tx that you know about is in the block or not.
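For reference, here is the standard Bloom filter sizing formula (m = -n·ln(p) / (ln 2)^2 bits, with k = (m/n)·ln 2 hash functions) evaluated for a few candidate false-positive rates; the numbers are my own illustration, not something from Gavin's write-up:

```python
import math

def bloom_bits(n: int, p: float) -> int:
    """Bits needed for an optimally sized Bloom filter with n entries and false-positive rate p."""
    return math.ceil(-n * math.log(p) / (math.log(2) ** 2))

n = 1_000_000  # one million transactions in the solved block
for p in (1e-4, 1e-5, 1e-6):
    m = bloom_bits(n, p)
    k = round(m / n * math.log(2))  # optimal number of hash functions
    print(f"p={p:.0e}: {m / 8 / 1e6:.1f} MB, {k} hash functions")
# p=1e-04: 2.4 MB, 13 hash functions
# p=1e-05: 3.0 MB, 17 hash functions
# p=1e-06: 3.6 MB, 20 hash functions
```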
There are two problems with this: there may be transactions in the solved block that you don't have, for whatever p2p network or policy reason. The BF can't tell you what those are. It can just tell you there were e.g. 1,000,000 tx in this solved block and you were able to find only 999,999 of them. The other glitch is that of those 999,999 it told you were there, a couple could be false positives. I think there are ways you could try to deal with this--send more types of request messages around the network to fill in your holes--but I'll dismiss this and flip back to Gavin's IBLT instead.
The IBLT works super well to mash a huge number of transactions together into one fixed-size (O(1)) data structure, to compare against another set of transactions that is really close, with just a few differences. The "few differences" part compared to the size of the IBLT is critical to this whole thing working. With too many differences, the decode just fails and the receiver wouldn't be able to understand this solved block.
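To make the mechanics concrete, here is a toy IBLT I put together for illustration (not Gavin's code, and it reconciles bare 8-byte tx IDs only, without the per-cell data chunks discussed next): each cell keeps a count, an XOR of keys, and an XOR of key checksums, and two nodes recover the symmetric difference of their tx sets by subtracting their IBLTs and "peeling" pure cells. The cell count and hashing scheme are arbitrary demo choices.

```python
import hashlib

NUM_HASHES = 3  # each key is mapped into one cell of each of 3 subtables

def _checksum(key: bytes) -> int:
    return int.from_bytes(hashlib.sha256(b"chk" + key).digest()[:8], "big")

def _cells(key: bytes, m: int):
    # one cell per subtable, so a key never hits the same cell twice
    sub = m // NUM_HASHES
    return [i * sub + int.from_bytes(hashlib.sha256(bytes([i]) + key).digest()[:8], "big") % sub
            for i in range(NUM_HASHES)]

class IBLT:
    def __init__(self, m: int):
        self.m = m
        self.count = [0] * m
        self.key_sum = [0] * m   # XOR of 8-byte keys (as integers)
        self.chk_sum = [0] * m   # XOR of key checksums

    def insert(self, key: bytes, sign: int = 1):
        k, c = int.from_bytes(key, "big"), _checksum(key)
        for i in _cells(key, self.m):
            self.count[i] += sign
            self.key_sum[i] ^= k
            self.chk_sum[i] ^= c

    def subtract(self, other: "IBLT") -> "IBLT":
        diff = IBLT(self.m)
        for i in range(self.m):
            diff.count[i] = self.count[i] - other.count[i]
            diff.key_sum[i] = self.key_sum[i] ^ other.key_sum[i]
            diff.chk_sum[i] = self.chk_sum[i] ^ other.chk_sum[i]
        return diff

    def decode(self):
        """Peel pure cells; returns (keys only we had, keys only the peer had).
        Returns partial sets if there are too many differences for the number
        of cells -- which is exactly the failure mode discussed above."""
        ours, theirs = set(), set()
        progress = True
        while progress:
            progress = False
            for i in range(self.m):
                if self.count[i] in (1, -1):
                    key = self.key_sum[i].to_bytes(8, "big")
                    if self.chk_sum[i] == _checksum(key):        # cell holds exactly one key
                        (ours if self.count[i] == 1 else theirs).add(key)
                        self.insert(key, sign=-self.count[i])    # remove it from all its cells
                        progress = True
        return ours, theirs

# Demo: the solver's block tx set vs. a receiver's mempool view, differing by 4 txs.
block = {hashlib.sha256(str(i).encode()).digest()[:8] for i in range(1000)}
mempool = (block - set(list(block)[:3])) | {b"extra-tx"}   # 3 missing, 1 extra

a, b = IBLT(60), IBLT(60)       # 60 cells easily covers 4 differences (~1.5d rule)
for t in block:
    a.insert(t)
for t in mempool:
    b.insert(t)
missing_at_receiver, extra_at_receiver = a.subtract(b).decode()
print(len(missing_at_receiver), len(extra_at_receiver))    # -> 3 1
```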
Gavin suggests key size of 8B and data of 8B chunks. I don't understand his data size--there's a big key checksum you need in order to do full add and subtract of IBLTs (let's say 8B, although this might have to be 16B?) that I would rather amortize over more granular data chunks. The average tx is 250B anyway. So I'm going to discuss an 8B key and 64B data chunks. With a count field, this then gives 8 key + 64 data + 16 checksum + 4 count = 92B. Let's round to 100B per IBLT cell.
Let's say we want to fix our newblock message size to around 1MB, in order to not be too alarming for the change to this scheme from our existing 1MB block limit (that miners don't often fill anyway). This means we can have an IBLT with m=10K, or 10,000 cells, which with the 1.5d rule (see the papers) means we can tolerate about 6000 differences in cells, which because we are slicing transactions into multiple cells (4 on average), means we can handle about 1500 differences in transactions at the receiver vs the solver and have faith that we can decode the newblock message fully almost all the time (has to be some way to handle the occasional node that fails this and has to catch up).
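Writing that cell arithmetic out (all the constants are the assumptions above; the results land slightly above the conservative ~6000 / ~1500 figures I'm using in the text):

```python
raw_cell_bytes = 8 + 64 + 16 + 4   # key + 64 B data chunk + checksum + count = 92 B
cell_bytes     = 100               # rounded up, as in the text
msg_budget     = 1_000_000         # keep the newblock message near 1 MB
chunks_per_tx  = 4                 # ~250 B average tx sliced into 64 B chunks

cells      = msg_budget // cell_bytes       # 10,000 IBLT cells
cell_diffs = int(cells / 1.5)               # ~6,666 decodable cell differences (1.5d rule)
tx_diffs   = cell_diffs // chunks_per_tx    # ~1,666 tolerable tx-set differences

print(cells, cell_diffs, tx_diffs)          # 10000 6666 1666
```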
So now the problem becomes, how can we define some conventions so that the different nodes can mostly agree on which of the transactions flying around the network for the past N (~10) minutes should be included in the solved block. If the solver gets it wrong, her block doesn't get accepted by the rest of the network. Strong incentive! If the receiver gets it wrong (although she can try multiple times with different sets), she can't track the rest of the network's progress.
This is the genius part around this proposal. If we define the convention so that the set of transactions to be included in a block is essentially all of them, then the miners are strongly incentivized, not just by tx fees, but by the block reward itself to include all those transactions that happened since the last block. It still allows them to make their own decisions, up to 1500 tx could be added where convention would say not to, or not put in where convention says to. This preserves the notion of tx-approval freedom in the network for miners, and some later miner will probably pick up those straggler tx.
I think it might be important to provide as many guidelines for the solver as possible to describe what is in her block, in specific terms as possible without actually having to give tx ids, so that the receivers in their attempt to decode this block can build up as similar an IBLT on their side using the same rules. Something like the tx fee range, some framing of what tx are in the early part and what tx are near the end (time range I mean). Side note: I guess if you allow a tx fee range in this set of parameters, then the solver could put it real high and send an empty block after all, which works against the incentive I mentioned above, so maybe that particular specification is not beneficial.
From http://www.tik.ee.ethz.ch/file/49318d3f56c1d525aabf7fda78b23fc0/P2P2013_041.pdf for example, the propagation delay is about 30-40 seconds before almost all nodes have received any particular transaction, so it may be useful for the solver to include tx only up to a certain point in time, like 30 seconds ago. Any tx that is younger than this just waits until the next block, so it's not a big penalty. But some policy like this (and some way to communicate it in the absence of centralized time management among the nodes) will be important to keep the number of differences in the two sets small, below 1500 in my example. The receiver of the newblock message would know when trying to decode it, that they should build up an IBLT on their side also with tx only from up to 30 seconds ago.
I don't understand Gavin's requirement for canonical ordering. I see that it doesn't hurt, but I don't see the requirement for it. Can somebody elaborate? It seems that's his way to achieve the same framing that I am talking about in the previous paragraph, to obtain a minimum number of differences in the two sets. There is no need to clip the total number of tx in a block that I see, since you can keep shoving into the IBLT as much as you want, as long as the number of differences is bounded. So I don't see a canonical ordering being required for clipping the tx set. The XOR (or add-subtract) behavior of the IBLT doesn't require any ordering in the sets that I see, it's totally commutative. Maybe it's his way of allowing miners some control over what tx they approve, how many tx into this canonical order they want to get. But that would also allow them to send around solved empty blocks.
What is pretty neat about this from a consumer perspective is the tx fees could be driven real low, like down to the network propagation minimum which I think as of this spring per Mike Hearn is now 0.00001 BTC or 10 "bits" (1000 satoshis), half a US cent. Maybe that's a problem--the miners get the shaft without being able to bid on which transactions they approve. If they try to not approve too many tx their block won't be decoded by the rest of the network like all the non-mining nodes running the bitpay/coinbases of the world.
Edit: 10 bits is 1000 satoshis, not 10k satoshis
submitted by sandball to Bitcoin [link] [comments]

BlockTorrent: The famous algorithm which BitTorrent uses for SHARING BIG FILES. Which you probably thought Bitcoin *also* uses for SHARING NEW BLOCKS (which are also getting kinda BIG). But Bitcoin *doesn't* torrent *new* blocks (while relaying). It only torrents *old* blocks (while sync-ing). Why?

This post is being provided to further disseminate an existing proposal:
This proposal was originally presented by jtoomim back in September of 2015 - on the bitcoin_dev mailing list (full text at the end of this OP), and on reddit:
https://np.reddit.com/btc/comments/3zo72i/fyi_ujtoomim_is_working_on_a_scaling_proposal/cyomgj3
Here's a TL;DR, in his words:
BlockTorrenting
For initial block sync, [Bitcoin] sort of works [like BitTorrent] already.
You download a different block from each peer. That's fine.
However, a mechanism does not currently exist for downloading a portion of each [new] block from a different peer.
That's what I want to add.
~ jtoomim
The more detailed version of this "BlockTorrenting" proposal (as presented by jtoomim on the bitcoin_dev mailing list) is linked and copied / reformatted at the end of this OP.
Meanwhile here are some observations from me as a concerned member of the Bitcoin-using public.
Questions:
Whoa??
WTF???
Bitcoin doesn't do this kind of "blocktorrenting" already??
But.. But... I thought Bitcoin was "p2p" and "based on BitTorrent"...
... because (as we all know) Bitcoin has to download giant files.
Oh...
Bitcoin only "torrents" when sharing one certain kind of really big file: the existing blockchain, when a node is being initialized.
But Bitcoin doesn't "torrent" when sharing another certain kind of moderately big file (a file whose size, by the way, has been notoriously and steadily growing over the years to the point where the system running the legacy "Core"/Blockstream Bitcoin implementation is starting to become dangerously congested - no matter what some delusional clowns among the "Core" devs may say): ie, the world's most wildly popular, industrial-strength "p2p file sharing algorithm" is mysteriously not being used where the Bitcoin network needs it the most in order to get transactions confirmed on-chain: when a newly found block needs to be shared among nodes, ie when a node is relaying new blocks.
https://np.reddit.com/Bitcoin+bitcoinxt+bitcoin_uncensored+btc+bitcoin_classic/search?q=blocktorrent&restrict_sr=on
How many of you (honestly) just simply assumed that this algorithm was already being used in Bitcoin - since we've all been told that "Bitcoin is p2p, like BitTorrent"?
As it turns out - the only part of Bitcoin which has been p2p up until now is the "sync-ing a new full-node" part.
The "running an existing full-node" part of Bitcoin has never been implemented as truly "p2p2" yet!!!1!!!
And this is precisely the part of the system that we've been wasting all of our time (and destroying the community) fighting over for the past few months - because the so-called "experts" from the legacy "Core"/Blockstream Bitcoin implementation ignored this proposal!
Why?
Why have all the so-called "experts" at "Core"/Blockstream ignored this obvious well-known effective & popular & tested & successful algorithm for doing "blocktorrenting" to torrent each new block being relayed?
Why have the "Core"/Blockstream devs failed to p2p-ize the most central, fundamental networking aspect of Bitcoin - the part where blocks get propagated, the part we've been fighting about for the past few years?
This algorithm for "torrenting" a big file in parallel from peers is the very definition of "p2p".
It "surgically" attacks the whole problem of sharing big files in the most elegant and efficient way possible: right at the lowest level of the bottleneck itself, cleverly chunking a file and uploading it in parallel to multiple peers.
Everyone knows torrenting works. Why isn't Bitcoin using it for its new blocks?
As millions of torrenters already know (but evidently all the so-called "experts" at Core/Blockstream seem to have conveniently forgotten), "torrenting" a file (breaking a file into chunks and then offering a different chunk to each peer to "get it out to everyone fast" - before your particular node even has the entire file) is such a well-known / feasible / obvious / accepted / battle-tested / highly efficient algorithm for "parallelizing" (and thereby significantly accelerating) the sharing of big files among peers, that many people simply assumed that Bitcoin had already been doing this kind of "torrenting of new-blocks" for these past 7 years.
But Bitcoin doesn't do this - yet!
None of the Core/Blockstream devs (and the Chinese miners who follow them) have prioritized p2p-izing the most central and most vital and most resource-consuming function of the Bitcoin network - the propagation of new blocks!
Maybe it took someone who's both a miner and a dev to "scratch" this particular "itch": Jonathan Toomim jtoomim.
  • A miner + dev who gets very little attention / respect from the Core/Blockstream devs (and from the Chinese miners who follow them) - perhaps because they feel threatened by a competing implementation?
  • A miner + dev who may have come up with the simplest and safest and most effective algorithmic (ie, software-based, not hardware-consuming) scaling proposal of anyone!
  • A dev who is not paid by Blockstream, and who is therefore free from the secret, undisclosed corporate restraints / confidentiality agreements imposed by the shadowy fiat venture-capitalists and legacy power elite who appear to be attempting to cripple our code and muzzle our devs.
  • A miner who has the dignity not to let himself be forced into signing a loyalty oath to any corporate overlords after being locked in a room until 3 AM.
Precisely because jtoomim is both an independent miner and an independent dev...
  • He knows what needs to be done.
  • He knows how to do it.
  • He is free to go ahead and do it - in a permissionless, decentralized fashion.
Possible bonus: The "blocktorrent" algorithm would help the most in the upload direction - which is precisely where Bitcoin scaling needs the most help!
Consider the "upload" direction for a relatively slow full-node - such as Luke-Jr, who reports that his internet is so slow, he has not been able to run a full-node since mid-2015.
The upload direction is the direction which everyone says has been the biggest problem with Bitcoin - because, in order for a full-node to be "useful" to the network:
  • it has to be able to upload a new block to (at least) 8 peers,
  • which places (at least) 8x more "demand" on the full-node's upload bandwidth.
The brilliant, simple proposed "blocktorrent" algorithm from jtoomim (already proven to work with Bram Cohen's BitTorrent protocol, and also already proven to work for initial sync-ing of Bitcoin full-nodes - but still un-implemented for ongoing relaying among full-nodes) looks like it would provide a significant performance improvement precisely at this tightest "bottleneck" in the system, the crucial central nexus where most of the "traffic" (and the congestion) is happening: the relaying of new blocks from "slower" full-nodes.
The detailed explanation for how this helps "slower" nodes when uploading, is as follows.
Say you are a "slower" node.
You need to send a new block out to (at least) 8 peers - but your "upload" bandwidth is really slow.
If you were to split the file into (at least) 8 "chunks", and then upload a different one of these (at least) 8 "chunks" to each of your (at least) 8 peers - then (if you were using "blocktorrenting") it would only take you 1/8 (or less) of the "normal" time to do this (compared to the naïve legacy "Core" algorithm).
Now the new block which your "slower" node was attempting to upload is already "out there" - in 1/8 (or less) of the "normal" time compared to the naïve legacy "Core" algorithm.[ 1 ]
... [ 1 ] There will of course also be a tiny amount of extra overhead involved due to the "housekeeping" performed by the "blocktorrent" algorithm itself - involving some additional processing and communicating to decompose the block into chunks, to organize the relaying of different chunks to different peers, and then to recompose the chunks into a block again (all of which, depending on the size of the block and the latency of your node's connections to its peers, would in most cases be negligible compared to the much-greater speed-up provided by the "blocktorrent" algorithm itself).
Now that your block is "out there" at those 8 (or more) peer nodes to whom you just blocktorrented it in 1/8 (or less) of the time - it has now been liberated from the "bottleneck" of your "slower" node.
In fact, its further propagation across the net may now be able to leverage much faster upload speeds from some other node(s) which have "blocktorrent"-downloaded it in pieces from you (and other peers) - and which might be faster relaying it along, than your "slower" node.
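A quick back-of-the-envelope sketch in Python (assumed numbers only, not figures from the proposal) of why splitting the upload into chunks helps the slow node so much:

    # Rough comparison (assumed numbers, not measurements): a "slower" node
    # with 1 Mbps upload relaying a 1 MB block to 8 peers.

    BLOCK_BYTES = 1_000_000        # assumed block size
    UPLOAD_BPS = 1_000_000 / 8     # 1 Mbps upload expressed in bytes/sec
    PEERS = 8

    # Legacy relay: the node pushes the full block to every peer itself.
    naive_seconds = BLOCK_BYTES * PEERS / UPLOAD_BPS

    # "Blocktorrent"-style relay: the node uploads one 1/8th chunk to each
    # peer, so it only pushes one block's worth of bytes in total; the peers
    # then swap chunks among themselves using their own upload capacity.
    chunked_seconds = BLOCK_BYTES / UPLOAD_BPS

    print(f"naive:   {naive_seconds:.0f} s to get the block off this node")
    print(f"chunked: {chunked_seconds:.0f} s (plus chunk-swap overhead elsewhere)")

With these assumptions the naive relay ties up the slow node for 64 seconds, the chunked relay for 8 seconds - the rest of the work moves onto better-connected peers.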
For some mysterious reason, the legacy Bitcoin implementation from "Core"/Blockstream has not been doing this kind of "blocktorrenting" for new blocks.
It's only been doing this torrenting for old blocks. The blocks that have already been confirmed.
Which is fine.
But we also obviously need this sort of "torrenting" to be done for each new block as it is being confirmed.
And this is where the entire friggin' "scaling bottleneck" is occurring, which we just wasted the past few years "debating" about.
Just sit down and think about this for a minute.
We've had all these so-called "experts" (Core/Blockstream devs and other small-block proponents) telling us for years that guys like Hearn and Gavin and repos like Classic and XT and BU were "wrong" or at least "unserious" because they "merely" proposed "brute-force" scaling: ie, scaling which would simply place more demands on finite resources (specifically: on the upload bandwidth from full-nodes - who need to relay to at least 8 peer full-nodes in order to be considered "useful" to the network).
These "experts" have been beating us over the head this whole time, telling us that we have to figure out some (really complicated, unproven, inefficient and centralized) clever scaling algorithms to squeeze more efficiency out of existing infrastructure.
And here is the most well-known / feasible / obvious / accepted / battle-tested algorithm for "parallelizing" (and thereby massively accelerating) the sharing of big files among peers - the BitTorrent algorithm itself, the gold standard of p2p relaying par excellence, which has been a major success on the Internet for well over a decade, at one point accounting for nearly 1/3 of all traffic on the Internet itself - and which is also already being used in one part of Bitcoin: during the phase of sync-ing a new node.
And apparently pretty much only jtoomim has been talking about using it for the actual relaying of new blocks - while Core/Blockstream devs have so far basically ignored this simple and safe and efficient proposal.
And then the small-block sycophants (reddit users or wannabe C/C++ programmers who have been beaten into submission by the FUD and "technological pessimism" of the Core/Blockstream devs, and by the censorship on their legacy forum), they all "laugh" at Classic and proclaim "Bitcoin doesn't need another dev team - all the 'experts' are at Core / Blockstream"...
...when in fact it actually looks like jtoomim (an independent miner/dev, free from the propaganda and secret details of the corporate agenda of Core/Blockstream - who works on the Classic Bitcoin implementation) may have proposed the simplest and safest and most effective scaling algorithm in this whole debate.
By the way, his proposal estimates that we could get about an order of magnitude greater throughput, based on typical latency and a blocksize of around 8 MB over bandwidth of around 20 Mbps (which seems like a pretty normal scenario).
So why the fuck isn't this being done yet?
This is such a well-known / feasible / obvious / accepted / battle-tested algorithm for "parallelizing" (and thereby significantly accelerating) the sharing of big files among peers:
  • It's already being used for the (currently) 65 gigabytes of "blocks in the existing blockchain" itself - the phase where a new node has to sync with the blockchain.
  • It's already being used in BitTorrent - although the BitTorrent protocol has been optimized more to maximize throughput, whereas it would probably be a good idea to optimize the BlockTorrent protocol to minimize latency (since avoiding orphans is the big issue here) - which I'm fairly sure should be quite doable.
This algorithm is so trivial / obvious / straightforward / feasible / well-known / proven that I (and probably many others) simply assumed that Bitcoin had been doing this all along!
But it has never been implemented.
There is however finally a post about it today on the score-hidden forum /r/Bitcoin, from eragmus:
[bitcoin-dev] BlockTorrent: Torrent-style new-block propagation on Merkle trees
https://np.reddit.com/Bitcoin/comments/484nbx/bitcoindev_blocktorrent_torrentstyle_newblock/
And, predictably, the top-voted comment there is a comment telling us why it will never work.
And the comment after that comment is from the author of the proposal, jtoomim, explaining why it would work.
Score hidden on all those comments.
Because the immature tyrant theymos still doesn't understand the inherent advantages of people using reddit's upvoting & downvoting tools to hold decentralized, permissionless debates online.
Whatever.
Questions:
(1) Would this "BlockTorrenting" algorithm from jtoomim really work?
(2) If so, why hasn't it been implemented yet?
(3) Specifically: With all the "dev firepower" (and $76 million in venture capital) available at Core/Blockstream, why have they not prioritized implementing this simple and safe and highly effective solution?
(4) Even more specifically: Are there undisclosed strategies / agreements / restraints imposed by Blockstream financial investors on Bitcoin "Core" devs which have been preventing further discussion and eventual implementation of this possible simple & safe & efficient scaling solution?
Here is the more-detailed version of this proposal, presented by Jonathan Toomim jtoomim back in September of 2015 on the bitcoin-dev mailing list (and pretty much ignored for months by almost all the "experts" there):
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-September/011176.html
As I understand it, the current block propagation algorithm is this:
  1. A node mines a block.
  2. It notifies its peers that it has a new block with an inv. Typical nodes have 8 peers.
  3. The peers respond that they have not seen it, and request the block with getdata [hash].
  4. The node sends out the block in parallel to all 8 peers simultaneously. If the node's upstream bandwidth is limiting, then all peers will receive most of the block before any peer receives all of the block. The block is sent out as the small header followed by a list of transactions.
  5. Once a peer completes the download, it verifies the block, then enters step 2.
(If I'm missing anything, please let me know.)
The main problem with this algorithm is that it requires a peer to have the full block before it does any uploading to other peers in the p2p mesh. This slows down block propagation to:
O( p • log_p(n) ) 
where:
  • n is the number of peers in the mesh,
  • p is the number of peers transmitted to simultaneously.
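To make that complexity claim concrete, here is a toy calculation (assumed numbers, not measurements) of naive relay time when every hop must receive the whole block before forwarding it:

    import math

    # Toy reading of the complexity above: propagation takes roughly
    # (hops) x (per-hop transfer time), with hops ~ log_p(n) and each hop
    # pushing the whole block to p peers. All parameters are assumptions.
    n = 6000                      # assumed number of reachable nodes in the mesh
    p = 8                         # peers each node forwards to simultaneously
    block_mb = 8                  # assumed block size, MB
    upload_mbps = 20              # assumed per-node upload bandwidth, Mbps

    per_hop = block_mb * 8 * p / upload_mbps    # seconds to push the block to p peers
    hops = math.log(n, p)                       # depth of the relay "tree"
    print(f"~{hops:.1f} hops x {per_hop:.1f} s/hop = ~{hops * per_hop:.0f} s to cover the mesh")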
It's like the Napster era of file-sharing. We can do much better than this.
Bittorrent can be an example for us.
Bittorrent splits the file to be shared into a bunch of chunks, and hashes each chunk.
Downloaders (leeches) grab the list of hashes, then start requesting their peers for the chunks out-of-order.
As each leech completes a chunk and verifies it against the hash, it begins to share those chunks with other leeches.
Total propagation time for large files can be approximately equal to the transmission time for an FTP upload.
Sometimes it's significantly slower, but often it's actually faster due to less bottlenecking on a single connection and better resistance to packet/connection loss.
(This could be relevant for crossing the Chinese border, since the Great Firewall tends to produce random packet loss, especially on encrypted connections.)
Bitcoin uses a data structure for transactions with hashes built-in. We can use that in lieu of Bittorrent's file chunks.
A Bittorrent-inspired algorithm might be something like this (a toy sketch of the Merkle-row check from steps 8-9 follows the list):
  1. (Optional steps to build a Merkle cache; described later)
  2. A seed node mines a block.
  3. It notifies its peers that it has a new block with an extended version of inv.
  4. The leech peers request the block header.
  5. The seed sends the block header. The leech code path splits into two.
  6. (a) The leeches verify the block header, including the PoW. If the header is valid,
  7. (a) They notify their peers that they have a header for an unverified new block with an extended version of inv, looping back to 2. above. If it is invalid, they abort thread (b).
  8. (b) The leeches request the Nth row (from the root) of the transaction Merkle tree, where N might typically be between 2 and 10. That corresponds to about 1/4th to 1/1024th of the transactions. The leeches also request a bitfield indicating which of the Merkle nodes the seed has leaves for. The seed supplies this (0xFFFF...).
  9. (b) The leeches calculate all parent node hashes in the Merkle tree, and verify that the root hash is as described in the header.
  10. The leeches search their Merkle hash cache to see if they have the leaves (transaction hashes and/or transactions) for that node already.
  11. The leeches send a bitfield request to the node indicating which Merkle nodes they want the leaves for.
  12. The seed responds by sending leaves (either txn hashes or full transactions, depending on benchmark results) to the leeches in whatever order it decides is optimal for the network.
  13. The leeches verify that the leaves hash into the ancestor node hashes that they already have.
  14. The leeches begin sharing leaves with each other.
  15. If the leaves are txn hashes, they check their cache for the actual transactions. If they are missing it, they request the txns with a getdata, or all of the txns they're missing (as a list) with a few batch getdatas.
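Here is the toy sketch of the Merkle-row check referred to above (steps 8(b)-9(b)): a leech fetches one interior row of the tree and verifies that it commits to the root from the header, before it has seen a single transaction. This is illustrative Python with fabricated "txids", not the proposal's wire format.

    import hashlib

    def dsha(b: bytes) -> bytes:
        """Bitcoin-style double SHA-256."""
        return hashlib.sha256(hashlib.sha256(b).digest()).digest()

    def merkle_levels(nodes):
        """Return all levels of the Merkle tree, bottom level first, root last.
        Follows Bitcoin's rule of duplicating the last node on odd-sized levels."""
        levels = [list(nodes)]
        while len(levels[-1]) > 1:
            cur = levels[-1]
            if len(cur) % 2:
                cur = cur + [cur[-1]]
            levels.append([dsha(cur[i] + cur[i + 1]) for i in range(0, len(cur), 2)])
        return levels

    # Toy block: 16 fake "txids" (in reality these are 32-byte tx hashes).
    txids = [dsha(bytes([i])) for i in range(16)]
    levels = merkle_levels(txids)
    root = levels[-1][0]

    # Steps 8(b)/9(b) in miniature: take the row of 4 interior nodes
    # (N = 2 counted from the root) and check that it hashes up to the
    # Merkle root claimed in the header -- no transactions needed yet.
    row = levels[2]
    rebuilt_root = merkle_levels(row)[-1][0]
    assert rebuilt_root == root, "row does not commit to the claimed Merkle root"
    print("interior row verified against the header's Merkle root")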
Features and benefits
The main feature of this algorithm is that a leech will begin to upload chunks of data as soon as it gets them and confirms both PoW and hash/data integrity instead of waiting for a full copy with full verification.
Inefficient cases, and mitigations
This algorithm is more complicated than the existing algorithm, and won't always be better in performance.
Because more round trip messages are required for negotiating the Merkle tree transfers, it will perform worse in situations where the bandwidth to ping latency ratio is high relative to the blocksize.
Specifically, the minimum per-hop latency will likely be higher.
This might be mitigated by reducing the number of round-trip messages needed to set up the BlockTorrent by using larger and more complex inv-like and getdata-like messages that preemptively send some data (e.g. block headers).
This would trade off latency for bandwidth overhead from larger duplicated inv messages.
Depending on implementation quality, the latency for the smallest block size might be the same between algorithms, or it might be 300% higher for the torrent algorithm.
For small blocks (perhaps < 100 kB), the BlockTorrent algorithm will likely be slightly slower.
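A rough way to see why small blocks lose out (all numbers here are assumptions, not benchmarks): the extra negotiation costs a few round trips, which dwarfs the transfer time of a 100 kB block but is noise for an 8 MB one.

    # Assumed numbers only: when is the extra negotiation worth it?
    ping_s = 0.05             # assumed round-trip time to a peer
    extra_round_trips = 4     # assumed extra messages for Merkle rows / bitfields
    upload_mbps = 20

    def transfer_s(block_mb):
        return block_mb * 8 / upload_mbps

    for block_mb in (0.1, 1, 8):
        setup = extra_round_trips * ping_s
        xfer = transfer_s(block_mb)
        print(f"{block_mb:>4} MB block: transfer {xfer:5.2f} s, "
              f"negotiation {setup:.2f} s ({setup / xfer:.0%} overhead)")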
Sidebar from the OP: So maybe this would discourage certain miners (cough Dow cough) from mining blocks that aren't full enough:
Why is [BTCC] limiting their block size to under 750 all of a sudden?
https://np.reddit.com/Bitcoin/comments/486o1u/why_is_bttc_limiting_their_block_size_to_unde

For large blocks (e.g. 8 MB over 20 Mbps), I expect the BlockTorrent algo will likely be around an order of magnitude faster in the worst case (adversarial) scenarios, in which none of the block's transactions are in the caches.

One of the big benefits of the BlockTorrent algorithm is that it provides several obvious and straightforward points for bandwidth saving and optimization by caching transactions and reconstructing the transaction order.

Future work: possible further optimizations
A cooperating miner [could] pre-announce Merkle subtrees with some of the transactions they are planning on including in the final block.
Other miners who see those subtrees [could] compare the transactions in those subtrees to the transaction sets they are mining with, and can rearrange their block prototypes to use the same subtrees as much as possible.
In the case of public pools supporting the getblocktemplate protocol, it might be possible to build Merkle subtree caches without the pool's help by having one or more nodes just scraping their getblocktemplate results.
Even if some transactions are inserted or deleted, it [might] be possible to guess a lot of the tree based on the previous ordering.
Once a block header and the first few rows of the Merkle tree [had] been published, they [would] propagate through the whole network, at which time full nodes might even be able to guess parts of the tree by searching through their txn and Merkle node/subtree caches.
That might be fun to think about, but probably not effective due to O(n²) or worse scaling with transaction count.
Might be able to make it work if the whole network cooperates on it, but there are probably more important things to do.
Leveraging other features from BitTorrent
There are also a few other features of Bittorrent that would be useful here, like:
  • prioritizing uploads to different peers based on their upload capacity,
  • banning peers that submit data that doesn't hash to the right value.
Sidebar from the OP: Hmm...maybe that would be one way to deal with the DDoS-ing we're experiencing right now? I know the DDoSer is using a rotating list of proxies, but still it could be a quick-and-dirty way to mitigate against his attack.
DDoS started again. Have a nice day, guys :)
https://np.reddit.com/Bitcoin_Classic/comments/47zglz/ddos_started_again_have_a_nice_day_guys/d0gj13y
(It might be good if we could get Bram Cohen to help with the implementation.)
Using the existing BitTorrent algorithm as-is - versus tailoring a new algorithm optimized for Bitcoin
Another possible option would be to just treat the block as a file and literally Bittorrent it.
But I think that there should be enough benefits to integrating it with the existing bitcoin p2p connections and also with using bitcoind's transaction caches and Merkle tree caches to make a native implementation worthwhile.
Also, BitTorrent itself was designed to optimize more for bandwidth than for latency, so we will have slightly different goals and tradeoffs during implementation.
Concerns, possible attacks, mitigations, related work
One of the concerns that I initially had about this idea was that it would involve nodes forwarding unverified block data to other nodes.
At first, I thought this might be useful for a rogue miner or node who wanted to quickly waste the whole network's bandwidth.
However, in order to perform this attack, the rogue needs to construct a valid header with a valid PoW, but use a set of transactions that renders the block as a whole invalid in a manner that is difficult to detect without full verification.
However, it will be difficult to design such an attack so that the damage in bandwidth used has a greater value than the 240 exahashes (and 25.1 BTC opportunity cost) associated with creating a valid header.
Related work: IBLT (Invertible Bloom Lookup Tables)
As I understand it, the O(1) IBLT approach requires that blocks follow strict rules (yet to be fully defined) about the transaction ordering.
If these are not followed, then it turns into sending a list of txn hashes, and separately ensuring that all of the txns in the new block are already in the recipient's mempool.
When mempools are very dissimilar, the IBLT approach performance degrades heavily and performance becomes worse than simply sending the raw block.
This could occur if a node just joined the network, during chain reorgs, or due to malicious selfish miners.
Also, if the mempool has a lot more transactions than are included in the block, the false positive rate for detecting whether a transaction already exists in another node's mempool might get high for otherwise reasonable bucket counts/sizes.
With the BlockTorrent approach, the focus is on transmitting the list of hashes in a manner that propagates as quickly as possible while still allowing methods for reducing the total bandwidth needed.
Remark
The BlockTorrent algorithm does not really address how the actual transaction data will be obtained because, once the leech has the list of txn hashes, the standard Bitcoin p2p protocol can supply them in a parallelized and decentralized manner.
Thoughts?
-jtoomim
submitted by ydtm to btc [link] [comments]

Torrent-style new-block propagation on Merkle trees | Jonathan Toomim (Toomim Bros) | Sep 23 2015

Jonathan Toomim (Toomim Bros) on Sep 23 2015:
As I understand it, the current block propagation algorithm is this:
  1. A node mines a block.
  2. It notifies its peers that it has a new block with an inv. Typical nodes have 8 peers.
  3. The peers respond that they have not seen it, and request the block with getdata [hash].
  4. The node sends out the block in parallel to all 8 peers simultaneously. If the node's upstream bandwidth is limiting, then all peers will receive most of the block before any peer receives all of the block. The block is sent out as the small header followed by a list of transactions.
  5. Once a peer completes the download, it verifies the block, then enters step 2.
(If I'm missing anything, please let me know.)
The main problem with this algorithm is that it requires a peer to have the full block before it does any uploading to other peers in the p2p mesh. This slows down block propagation to O( p • log_p(n) ), where n is the number of peers in the mesh, and p is the number of peers transmitted to simultaneously.
It's like the Napster era of file-sharing. We can do much better than this. Bittorrent can be an example for us. Bittorrent splits the file to be shared into a bunch of chunks, and hashes each chunk. Downloaders (leeches) grab the list of hashes, then start requesting their peers for the chunks out-of-order. As each leech completes a chunk and verifies it against the hash, it begins to share those chunks with other leeches. Total propagation time for large files can be approximately equal to the transmission time for an FTP upload. Sometimes it's significantly slower, but often it's actually faster due to less bottlenecking on a single connection and better resistance to packet/connection loss. (This could be relevant for crossing the Chinese border, since the Great Firewall tends to produce random packet loss, especially on encrypted connections.)
Bitcoin uses a data structure for transactions with hashes built-in. We can use that in lieu of Bittorrent's file chunks.
A Bittorrent-inspired algorithm might be something like this:
  1. (Optional steps to build a Merkle cache; described later)
  2. A seed node mines a block.
  3. It notifies its peers that it has a new block with an extended version of inv.
  4. The leech peers request the block header.
  5. The seed sends the block header. The leech code path splits into two.
5(a). The leeches verify the block header, including the PoW. If the header is valid,
6(a). They notify their peers that they have a header for an unverified new block with an extended version of inv, looping back to 2. above. If it is invalid, they abort thread (b).
5(b). The leeches request the Nth row (from the root) of the transaction Merkle tree, where N might typically be between 2 and 10. That corresponds to about 1/4th to 1/1024th of the transactions. The leeches also request a bitfield indicating which of the Merkle nodes the seed has leaves for. The seed supplies this (0xFFFF...).
6(b). The leeches calculate all parent node hashes in the Merkle tree, and verify that the root hash is as described in the header.
  1. The leeches search their Merkle hash cache to see if they have the leaves (transaction hashes and/or transactions) for that node already.
  2. The leeches send a bitfield request to the node indicating which Merkle nodes they want the leaves for.
  3. The seed responds by sending leaves (either txn hashes or full transactions, depending on benchmark results) to the leeches in whatever order it decides is optimal for the network.
  4. The leeches verify that the leaves hash into the ancestor node hashes that they already have.
  5. The leeches begin sharing leaves with each other.
  6. If the leaves are txn hashes, they check their cache for the actual transactions. If they are missing it, they request the txns with a getdata, or all of the txns they're missing (as a list) with a few batch getdatas.
The main feature of this algorithm is that a leech will begin to upload chunks of data as soon as it gets them and confirms both PoW and hash/data integrity instead of waiting for a full copy with full verification.
This algorithm is more complicated than the existing algorithm, and won't always be better in performance. Because more round trip messages are required for negotiating the Merkle tree transfers, it will perform worse in situations where the bandwidth to ping latency ratio is high relative to the blocksize. Specifically, the minimum per-hop latency will likely be higher. This might be mitigated by reducing the number of round-trip messages needed to set up the blocktorrent by using larger and more complex inv-like and getdata-like messages that preemptively send some data (e.g. block headers). This would trade off latency for bandwidth overhead from larger duplicated inv messages. Depending on implementation quality, the latency for the smallest block size might be the same between algorithms, or it might be 300% higher for the torrent algorithm. For small blocks (perhaps < 100 kB), the blocktorrent algorithm will likely be slightly slower. For large blocks (e.g. 8 MB over 20 Mbps), I expect the blocktorrent algo will likely be around an order of magnitude faster in the worst case (adversarial) scenarios, in which none of the block's transactions are in the caches.
One of the big benefits of the blocktorrent algorithm is that it provides several obvious and straightforward points for bandwidth saving and optimization by caching transactions and reconstructing the transaction order. A cooperating miner can pre-announce Merkle subtrees with some of the transactions they are planning on including in the final block. Other miners who see those subtrees can compare the transactions in those subtrees to the transaction sets they are mining with, and can rearrange their block prototypes to use the same subtrees as much as possible. In the case of public pools supporting the getblocktemplate protocol, it might be possible to build Merkle subtree caches without the pool's help by having one or more nodes just scraping their getblocktemplate results. Even if some transactions are inserted or deleted, it may be possible to guess a lot of the tree based on the previous ordering.
Once a block header and the first few rows of the Merkle tree have been published, they will propagate through the whole network, at which time full nodes might even be able to guess parts of the tree by searching through their txn and Merkle node/subtree caches. That might be fun to think about, but probably not effective due to O(n²) or worse scaling with transaction count. Might be able to make it work if the whole network cooperates on it, but there are probably more important things to do.
There are also a few other features of Bittorrent that would be useful here, like prioritizing uploads to different peers based on their upload capacity, and banning peers that submit data that doesn't hash to the right value. (It might be good if we could get Bram Cohen to help with the implementation.)
Another option is just to treat the block as a file and literally Bittorrent it, but I think that there should be enough benefits to integrating it with the existing bitcoin p2p connections and also with using bitcoind's transaction caches and Merkle tree caches to make a native implementation worthwhile. Also, Bittorrent itself was designed to optimize more for bandwidth than for latency, so we will have slightly different goals and tradeoffs during implementation.
One of the concerns that I initially had about this idea was that it would involve nodes forwarding unverified block data to other nodes. At first, I thought this might be useful for a rogue miner or node who wanted to quickly waste the whole network's bandwidth. However, in order to perform this attack, the rogue needs to construct a valid header with a valid PoW, but use a set of transactions that renders the block as a whole invalid in a manner that is difficult to detect without full verification. However, it will be difficult to design such an attack so that the damage in bandwidth used has a greater value than the 240 exahashes (and 25.1 BTC opportunity cost) associated with creating a valid header.
As I understand it, the O(1) IBLT approach requires that blocks follow strict rules (yet to be fully defined) about the transaction ordering. If these are not followed, then it turns into sending a list of txn hashes, and separately ensuring that all of the txns in the new block are already in the recipient's mempool. When mempools are very dissimilar, the IBLT approach performance degrades heavily and performance becomes worse than simply sending the raw block. This could occur if a node just joined the network, during chain reorgs, or due to malicious selfish miners. Also, if the mempool has a lot more transactions than are included in the block, the false positive rate for detecting whether a transaction already exists in another node's mempool might get high for otherwise reasonable bucket counts/sizes.
With the blocktorrent approach, the focus is on transmitting the list of hashes in a manner that propagates as quickly as possible while still allowing methods for reducing the total bandwidth needed. The blocktorrent algorithm does not really address how the actual transaction data will be obtained because, once the leech has the list of txn hashes, the standard Bitcoin p2p protocol can supply them in a parallelized and decentralized manner.
Thoughts?
-jtoomim
original: http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-September/011176.html
submitted by dev_list_bot to bitcoin_devlist [link] [comments]

The Big Blocks Mega Thread

Since this is a pressing and prevalent issue, I thought maybe condensing the essential arguments into one mega thread is better than rehashing everything in new threads all the time. I chose a FAQ format for this so a certain statement can be answered. I don't want to re-post everything here so where appropriate I'm just going to use links.
Disclaimer: This is biased towards big blocks (BIP 101 in particular) but still tries to mention the risks, worries and fears. I think this is fair because all other major bitcoin discussion places severely censor and discourage big block discussion.
 
What is the block size limit?
The block size limit was introduced by Satoshi back in 2010-07-15 as an anti-DoS measure (though this was not stated in the commit message, more info here). Ever since, it has never been touched because historically there was no need and raising the block size limit requires a hard fork. The block size directly limits the number of transactions in a block. Therefore, the capacity of Bitcoin is directly limited by the block size limit.
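As a rough worked example of that last sentence (assuming an average transaction size of about 500 bytes, which varies in practice):

    # Quick arithmetic on why the limit caps throughput (assumed avg tx size).
    BLOCK_LIMIT_BYTES = 1_000_000
    AVG_TX_BYTES = 500           # assumption; real averages vary over time
    BLOCK_INTERVAL_S = 600       # ~10 minutes on average

    tx_per_block = BLOCK_LIMIT_BYTES // AVG_TX_BYTES
    print(f"~{tx_per_block} transactions per block, "
          f"~{tx_per_block / BLOCK_INTERVAL_S:.1f} tx/s network-wide")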
 
Why does a raise require a hard fork?
Because larger blocks are seen as invalid by old nodes, a block size increase would fork these nodes off the network. Therefore it is a hard fork. However, it is possible to downsize the block limit with a soft fork since smaller blocks would still be seen as valid by old nodes. It is considerably easier to roll out a soft fork. Therefore, it makes sense to roll out a more ambitious hard fork limit and downsize as needed with soft forks if problems arise.
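A toy illustration of that asymmetry, from the point of view of a non-upgraded node (the 1 MB / 2 MB / 0.5 MB numbers are just for illustration):

    # Toy illustration of the fork asymmetry described above.
    OLD_LIMIT = 1_000_000

    def old_node_accepts(block_size: int) -> bool:
        return block_size <= OLD_LIMIT

    # Raising the limit to 2 MB: new nodes produce 1.5 MB blocks that old
    # nodes reject -> the chain splits unless everyone upgrades (hard fork).
    print(old_node_accepts(1_500_000))   # False

    # Lowering the limit to 0.5 MB: every block valid under the new rule is
    # still <= 1 MB, so old nodes accept it too (soft fork).
    print(old_node_accepts(400_000))     # True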
 
What is the deal with soft and hard forks anyways?
See this article by Mike Hearn: https://medium.com/@octskyward/on-consensus-and-forks-c6a050c792e7#.74502eypb
 
Why do we need to increase the block size?
The Bitcoin network is reaching its imposed block size limit while the hard- and software would be able to support more transactions. Many believe that in its current phase of growth, artificially limiting the block size is stifling adoption, investment and future growth.
Read this article and all linked articles for further reading: http://gavinandresen.ninja/time-to-roll-out-bigger-blocks
Another article by Mike Hearn: https://medium.com/@octskyward/crash-landing-f5cc19908e32#.uhky4y1ua (this article is a little outdated since both Bitcoin Core and XT now have mempool limits)
 
What is the Fidelity Effect?
It is the Chicken and Egg problem applied to future growth of Bitcoin. If companies do not see how Bitcoin can scale long term, they don't invest which in turn slows down adoption and development.
See here and here.
 
Does an increase in block size limit mean that blocks immediately get larger to the point of the new block size limit?
No, blocks are as large as there is demand for transactions on the network. But one can assume that if the limit is lifted, more users and businesses will want to use the blockchain. This means that blocks will get bigger, but they will not automatically jump to the size of the block size limit. Increased usage of the blockchain also means increased adoption, investment and also price appreciation.
 
Which are the block size increase proposals?
See here.
It should be noted that BIP 101 is the only proposal which has been implemented and is ready to go.
 
What is the long term vision of BIP 101?
BIP 101 tries to be as close to hardware limitations regarding bandwidth as possible so that nodes can continue running at normal home-user grade internet connections to keep the decentralized aspect of Bitcoin alive. It is believed that it is hard to increase the block size limit, so a long term increase is beneficial to planning and investment in the Bitcoin network. Go to this article for further reading and understand what is meant by "designing for success".
BIP 101 vs actual transaction growth visualized: http://imgur.com/QoTEOO2
Note that the actual growth in BIP 101 is piece-wise linear and does not grow in steps as suggested in the picture.
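For readers who want to see the shape of the curve, here is a rough sketch of the BIP 101 schedule as I understand it (8 MB at an activation point in early 2016, doubling every two years with linear interpolation in between, capped after ten doublings); treat the exact constants as assumptions rather than a restatement of the BIP text:

    from datetime import datetime, timezone

    BASE_MB = 8
    EPOCH = datetime(2016, 1, 11, tzinfo=timezone.utc).timestamp()  # assumed activation time
    TWO_YEARS = 2 * 365 * 24 * 3600

    def bip101_limit_mb(ts: float) -> float:
        elapsed = max(0.0, ts - EPOCH)
        doublings = min(elapsed / TWO_YEARS, 10)     # capped after ~20 years
        whole, frac = divmod(doublings, 1)
        # piece-wise linear: interpolate between 2^whole and 2^(whole+1)
        return BASE_MB * (2 ** whole) * (1 + frac)

    for year in (2016, 2018, 2020, 2026, 2036):
        ts = datetime(year, 1, 11, tzinfo=timezone.utc).timestamp()
        print(year, f"{bip101_limit_mb(ts):.1f} MB")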
 
What is up with the moderation and censorship on bitcoin.org, bitcointalk.org and /bitcoin?
Proponents of a more conservative approach believe that a block size increase proposal that does not have "developer/expert consensus" should not be implemented via a majority hard fork. Therefore, discussion about the full node clients which implement BIP 101 is not allowed. Since the same individuals have major influence over all three bitcoin websites (most notably theymos), discussion of Bitcoin XT is censored and/or discouraged on these websites.
 
What is Bitcoin XT anyways?
More info here.
 
What does Bitcoin Core do about the block size? What is the future plan by Bitcoin Core?
Bitcoin Core scaling plan as envisioned by Gregory Maxwell: http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-December/011865.html
 
Who governs or controls Bitcoin Core anyways? Who governs Bitcoin XT? What is Bitcoin governance?
Bitcoin Core is governed by a consensus mechanism. How it actually works is not clear. It seems that any major developer can "veto" a change. However, there is one head maintainer who pushes releases and otherwise organizes the development effort. It should be noted that the majority of the main contributors to Bitcoin Core are Blockstream employees.
BitcoinXT follows a benevolent dictator model (as Bitcoin used to follow when Satoshi and later Gavin Andresen were the lead maintainers).
It is a widespread belief that Bitcoin can be separated into protocol and full node development. This means that there can be multiple implementations of Bitcoin that all follow the same protocol and overall consensus mechanism. More reading here. By having multiple implementations of Bitcoin, single Bitcoin implementations can be run following a benevolent dictator model while protocol development would follow an overall consensus model (which is enforced by Bitcoin's fundamental design through full nodes and miners' hash power). It is still unclear how protocol changes should actually be governed in such a model. Bitcoin governance is a research topic and evolving.
 
What are the arguments against a significant block size increase and against BIP 101 in particular?
The main arguments against a significant increase are related to decentralization and therefore robustness against commercial interests and government regulation and intervention. More here (warning: biased Wiki article).
Another main argument is that Bitcoin needs a fee market established by a low block size limit to support miners long term. There is significant evidence and game theory to doubt this claim, as can be seen here.
Finally, block propagation and verification times increase with an increased block size. This in turn increases the orphan rate of miners which means reduced profit. Some believe that this is a disadvantage to small miners because they are not as well connected to other big miners. Also, there is currently a large miner centralization in China. Since most of these miners are behind the Great Firewall of China, their bandwidth to the rest of the world is limited. There is a fear that larger block propagation times favor Chinese miners as long as they have a mining majority. However, there are solutions in development that can drastically reduce block propagation times so this problem will be less of an issue long term.
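The orphan-risk argument above is usually quantified with a simple Poisson model: if blocks arrive on average every 600 seconds, a block that takes t seconds to reach the rest of the hashpower is orphaned with probability roughly 1 - exp(-t/600). A hedged sketch with assumed numbers:

    import math

    def orphan_prob(propagation_s: float, interval_s: float = 600.0) -> float:
        # Standard back-of-the-envelope model: competing blocks arrive as a
        # Poisson process with a 600 s mean interval.
        return 1.0 - math.exp(-propagation_s / interval_s)

    for t in (2, 10, 30, 120):
        print(f"{t:>4} s propagation -> ~{orphan_prob(t):.1%} orphan risk")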
 
What is up with the fee market and what is the Lightning Network (LN)?
Major Bitcoin Core developers believe that a fee market established by a low block size is needed for future security of the bitcoin network. While many believe fundamentally this is true, there is major dispute if a fee market needs to be forced by a low block size. One of the main LN developers thinks such a fee market through low block size is needed (read here). The Lightning Network is a non-bandwidth scaling solution. It uses payment channels that can be opened and closed using Bitcoin transactions that are settled on the blockchain. By routing transactions through many of these payment channels, in theory it is possible to support a lot more transactions while a user only needs very few payment channels and therefore rarely has to use (settle on) the actual blockchain. More info here.
 
How does LN and other non-bandwidth scaling solutions relate to Bitcoin Core and its long term scaling vision?
Bitcoin Core is headed towards a future where block sizes are kept low so that a fee market is established long term that secures miner incentives. The main scaling solution promoted by Core is LN and other solutions that only sometimes settle transactions on the main Bitcoin blockchain. Essentially, Bitcoin becomes a settlement layer for solutions that are built on top of Bitcoin's core technology. Many believe that long term this might be inevitable. But forcing this off-chain development already today seems counterproductive to Bitcoin's much needed growth and adoption phase before such solutions can thrive. It should also be noted that no major non-bandwidth scaling solution (such as LN) has been tested or even implemented. It is not even clear that such off-chain solutions are the needed long term scaling solutions, as it might be possible to scale Bitcoin itself to handle all needed transaction volumes. Some believe that the focus on a forced fee market by major Bitcoin Core developers represents a conflict of interest, since their employer is interested in pushing off-chain scaling solutions such as LN (more reading here).
 
Are there solutions in development that show the block sizes as proposed via BIP 101 are viable and block propagation times in particular are low enough?
Yes, most notably: Weak Blocks, Thin Blocks and IBLT.
 
What is Segregated Witness (SW) and how does it relate to scaling and block size increases?
See here. SW among other things is a way to increase the block size once without a hard fork (the actual block size is not increased but there is extra information exchanged separately from blocks).
 
Feedback and more of those question/answer type posts (or revised question/answer pairs) appreciated!
 
ToDo and thoughts for expansion:
@Mods: Maybe this could be stickied?
submitted by BIP-101 to btc [link] [comments]

segwit after a 2MB hardfork

Disclaimer: My preferred plan for bitcoin is soft-forking segregated witness in asap, and scheduling a 2MB hardforked blocksize increase sometime mid-2017, and I think doing a 2MB hardfork anytime soon is pretty crazy. Also, I like micropayments, and until I learnt about the lightning network proposal, bitcoin didn't really interest me because a couple of cents in fees is way too expensive, and a few minutes is way too slow. Maybe that's enough to make everything I say uninteresting to you, dear reader, in which case I hope this disclaimer has saved you some time. :)
Anyway there's now a good explanation of what segwit does beyond increasing the blocksize via accounting tricks or however you want to call it: https://bitcoincore.org/en/2016/01/26/segwit-benefits/ [0] I'm hopeful that makes it a bit easier to see why many people are more excited by segwit than a 2MB hardfork. In any event hopefully it's easy to see why it might be a good idea to do segwit asap, even if you do a hardfork to double the blocksize first.
If you were to do a 2MB hardfork first, and then apply segwit on top of that [1], I think there are a number of changes you'd want to consider, rather than just doing a straight merge. Number one is that with the 75% discount for witness data and a 2MB blocksize, you run the risk of worst-case 8MB blocks, which seem to be too large at present [2]. The obvious solution is to change the discount rate, or limit witness data by some other mechanism. The drawback is that this removes some of the benefits of segwit in reducing UTXO growth and in moving to a simpler cost formula. Not hard, but it's a tradeoff, and exactly what to do isn't obvious (to me, anyway).
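For readers unfamiliar with the accounting, here is a sketch of how the 75% discount produces that 4x worst case (the rule is often written as weight = 4*base + witness; dividing through by 4 gives the "discounted cost" view used below, and the 2 MB cap here is the hypothetical hardforked base size being discussed, not current consensus):

    # Sketch of the "75% discount" accounting. Numbers are illustrative
    # assumptions layered on the hypothetical 2MB-base hardfork.
    DISCOUNT = 0.25             # witness bytes count at 25% of their size
    CAP_COST_BYTES = 2_000_000  # cap expressed as discounted "cost" (~2 MB base)

    def cost(base_bytes: int, witness_bytes: int) -> float:
        return base_bytes + witness_bytes * DISCOUNT

    # Typical-ish block: roughly half witness data.
    print(cost(1_200_000, 800_000))     # 1.4 MB of "cost" for 2.0 MB of raw bytes

    # Adversarial block: almost all witness data hits the cap at ~8 MB raw.
    print(CAP_COST_BYTES / DISCOUNT)    # 8_000_000 bytes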
If IBLT or weak blocks or an improved relay network or something similar comes out after deploying segwit, does it then make sense to increase the discount or otherwise raise the limit on witness data, and is it possible to do this without another hardfork and corresponding forced upgrade? For the core roadmap, I think the answer would be "do segwit as a soft-fork now so no one has to upgrade, and after IBLT/etc is ready perhaps do a hard-fork then because it will be safer" so there's only one forced upgrade for users. Is some similar plan possible if there's an "immediate" hard fork to increase the block size, to avoid users getting hit with two hardforks in quick succession?
Number two is how to deal with sighashes -- segwit allows the hash calculation to be changed, so that for 2MB of transaction data (including witness data), you only need to hash up to around 4MB of data when verifying signatures, rather than potentially gigabytes of data. Compare that to Gavin's commits to the 0.11.2 branch in Classic which include a 1.3GB limit on sighash data to make the 2MB blocksize -- which is necessary because the quadratic scaling problem means that the 1.3GB limit can already be hit with 1MB blocks. Do you keep the new limit once you've got 2MB+segwit, or plan to phase it out as more transactions switch to segwit, or something else?
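A toy byte-count showing the quadratic blow-up being described (sizes are rough assumptions, not exact serialization rules):

    # Legacy sighash: each input's signature hashes (roughly) the whole
    # transaction, so hashed bytes grow ~quadratically with input count.
    # Segwit (BIP 143) reuses intermediate hashes, so per-input work stops
    # growing with transaction size. The constants below are assumptions.
    INPUT_BYTES = 150
    OUTPUT_BYTES = 34

    def legacy_sighash_bytes(n_inputs: int, n_outputs: int = 2) -> int:
        tx_size = n_inputs * INPUT_BYTES + n_outputs * OUTPUT_BYTES
        return n_inputs * tx_size

    def segwit_sighash_bytes(n_inputs: int, n_outputs: int = 2) -> int:
        tx_size = n_inputs * INPUT_BYTES + n_outputs * OUTPUT_BYTES
        return tx_size + n_inputs * 200   # ~constant-size per-input preimage

    for n in (100, 1_000, 10_000):
        print(n, legacy_sighash_bytes(n), segwit_sighash_bytes(n))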
Again, I think with the core roadmap the plan here is straightforward -- do segwit now, get as many wallets/transactions switched over to segwit asap (whether due to all the bonus features, or just that they're cheaper in fees), and then revise the sighash limits later as part of soft-forking to increase the blocksize.
Finally, and I'm probably projecting my own ideas here, I think a 2MB hardfork in 2017 would give ample opportunity to simultaneously switch to a "validation cost metric" approach, making fees simpler to calculate and avoiding people being able to make sigop attacks to force near-empty blocks and other such nonsense. I think there's even the possibility of changing the limit so that in future it can be increased by soft-forks [3], instead of needing a hard fork for increases as it does now. ie, I think if we're clever, we can get a gradual increase to 1.8MB-2MB starting in the next few months via segwit with a soft-fork, then have a single hard-fork flag day next year, that allows the blocksize to be managed in a forwards compatible way more or less indefinitely.
Anyhoo, I'd love to see more technical discussion of classic vs core, so in the spirit of "write what you want to read", voila...
[0] I wrote most of the text for that, though the content has had a lot of corrections from people who understand how it works better than I do; see the github pull request if you care --https://github.com/bitcoin-core/website/pull/67
[1] https://www.reddit.com/btc/comments/42mequ/jtoomim_192616_utc_my_plan_for_segwit_was_to_pull/
[2] I've done no research myself; jtoomim's talk at Hong Kong said 2MB/4MB seemed okay but 8MB/9MB was "pushing it" -- http://diyhpl.us/wiki/transcripts/scalingbitcoin/hong-kong/bip101-block-propagation-data-from-testnet/ and his talks with miners indicated that BIP101's 8MB blocks were "Too much too fast" https://docs.google.com/spreadsheets/d/1Cg9Qo9Vl5PdJYD4EiHnIGMV3G48pWmcWI3NFoKKfIzU/edit#gid=0 Tradeblock's stats also seem to suggest 8MB blocks is probably problematic for now: https://tradeblock.com/blog/bitcoin-network-capacity-analysis-part-6-data-propagation
[3] https://botbot.me/freenode/bitcoin-wizards/2015-12-09/?msg=55794797&page=4
submitted by ajtowns to btc [link] [comments]

Ran some simulations, fee market has a slight mining centralization pressure of its own.

Some of you may remember, I developed an equation to model mining revenue, orphan rates, and optimal transaction fee choosing behavior. Published here with reddit discussion on /bitcoin here and /bitcoinxt here. At first I was doing stuff in Excel, which was really cumbersome, so last night I ported it to python (source code) which has enabled me to model different scenarios much quicker. An environment of 3 equal hashpower miners (each 33.3% of network hashrate) with varying bandwidth (high, medium, low) was modeled to estimate each miner's net revenue in a variety of situations. I chose an average transaction size of 500 bytes and a normal distribution of fee rates centered on the indicated average with the indicated standard deviation. Below are some results. Note that block validation time is not taken into account (it should be equal between all three miners anyway), so block propagation time is the driving force for competitive miner revenue. Scenarios for thin blocks, IBLTs, or the relay network could also be modeled but they weren't included here.
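For readers who don't want to open the linked source, here is a minimal stand-in (not the author's script) for the kind of model being described: expected revenue is the block reward plus fees, scaled by the probability the block is not orphaned, where orphan risk grows with how long the block takes to leave the miner. All parameters below are assumptions.

    import math

    SUBSIDY_BTC = 25.0
    INTERVAL_S = 600.0

    def expected_revenue(n_tx, fee_btc, tx_bytes, upload_mbps):
        block_bytes = n_tx * tx_bytes
        propagation_s = block_bytes * 8 / (upload_mbps * 1e6)
        p_orphan = 1 - math.exp(-propagation_s / INTERVAL_S)
        return (SUBSIDY_BTC + n_tx * fee_btc) * (1 - p_orphan)

    for mbps in (72, 24, 8):
        rev = expected_revenue(n_tx=2000, fee_btc=0.0002, tx_bytes=500, upload_mbps=mbps)
        print(f"{mbps:>2} Mbps miner: {rev:.4f} BTC expected per block")

Even this crude version reproduces the qualitative trend in the scenarios below: the lower-bandwidth miner earns slightly less per block than the better-connected ones.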
Excess block space: 1MB max blocks, 1000 transactions 20k sat/kB (10k sat/kB stdev)
72Mbps miner: 945 tx included, 8.3607183558 revenue per block
24Mbps miner: 799 tx included, 8.34913691169 revenue per block (0.9986147788243619 of big miner)
8Mbps miner: 103 tx included, 8.33403980315 revenue per block (0.9968090597584246 of big miner)
Just enough block space: 1MB max blocks, 2000 transactions, 40k sat/kB (20k sat/kB stdev)
72Mbps miner: 1934 tx included, 8.4556784577 revenue per block
24Mbps miner: 1853 tx included, 8.43190872485 revenue per block (0.9971889029403247 of big miner)
8Mbps miner: 1259 tx included, 8.37275409707 revenue per block (0.9901930565306103 of big miner)
Not enough block space, small fee market: 1MB max blocks 2500 transactions, 50k sat/kB (25k sat/kB stdev)
72Mbps miner: 2000 tx included, 8.51527235392 revenue per block
24Mbps miner: 2000 tx included, 8.49104458683 revenue per block (0.997154786590138 of big miner)
8Mbps miner: 1866 tx included, 8.41914315005 revenue per block (0.9887109654424915 of big miner)
Not enough block space, medium fee market: 1MB max blocks 4000 transactions, 60k sat/kB (30k sat/kB stdev)
72Mbps miner: 2000 tx included, 8.88629695674 revenue per block
24Mbps miner: 2000 tx included, 8.85361797721 revenue per block (0.9963225424843344 of big miner)
8Mbps miner: 2000 tx included, 8.77432436011 revenue per block (0.9873994086428687 of big miner)
Not enough block space, large fee market: 1MB max blocks 5000 transactions, 100k sat/kB (50k sat/kB stdev)
72Mbps miner: 2000 tx included, 8.81583923195 revenue per block
24Mbps miner: 2000 tx included, 8.7902398931 revenue per block (0.9970962107887331 of big miner)
8Mbps miner: 2000 tx included, 8.7092241768 revenue per block (0.9879064202119737 of big miner)
Not enough block space, exponential fee market: 1MB max blocks 5000 transactions, 200k sat/kB (100k sat/kB stdev)
72Mbps miner: 2000 tx included, 9.39007600909 revenue per block
24Mbps miner: 2000 tx included, 9.37585138273 revenue per block (0.9967630403971791 of big miner)
8Mbps miner: 2000 tx included, 9.28584067493 revenue per block (0.9876859413849169 of big miner)
Excess block space: 2MB max blocks, 2000 transactions, 20k sat/kB (10k sat/kB stdev)
72Mbps miner: 1889 tx included, 8.38774109874 revenue per block
24Mbps miner: 1649 tx included, 8.36598001758 revenue per block (0.9974056088637179 of big miner)
8Mbps miner: 183 tx included, 8.33485379663 revenue per block (0.9936946906816253 of big miner)
Just enough block space: 2MB max blocks, 4000 transactions, 40k sat/kB (20k sat/kB stdev)
72Mbps miner: 3846 tx included, 8.5752264105 revenue per block
24Mbps miner: 3701 tx included, 8.52725262904 revenue per block (0.9944055376309064 of big miner)
8Mbps miner: 2487 tx included, 8.41129045585 revenue per block (0.9808826091811095 of big miner)
Not enough block space, small fee market: 2MB max blocks, 5000 transactions, 50k sat/kB (25k sat/kB stdev)
72Mbps miner: 4000 tx included, 8.70130849382 revenue per block
24Mbps miner: 4000 tx included, 8.6469427816 revenue per block (0.9937520072689513 of big miner)
8Mbps miner: 3690 tx included, 8.49500931425 revenue per block (0.9762910164929193 of big miner)
Just enough block space: 5MB blocks, 10,000 transactions, 40k sat/kB (20k sat/kB stdev)
72Mbps miner: 9664 tx included, 8.94011214658 revenue per block
24Mbps miner: 9257 tx included, 8.81696423816 revenue per block (0.9862252389678233 of big miner)
8Mbps miner: 5843 tx included, 8.50764915262 revenue per block (0.9516266701279092 of big miner)
The fee market centralizing effect happens because when we have an excess of block space and lower fees, low bandwidth miners are able to smartly choose whether or not to include a transaction based on the fee paid and their calculated orphan risk, thereby optimizing their revenue by taking orphan rate into account (building on Peter R's work showing that a minimum tx fee exists even in the absence of a block size limit). With lots of transactions and high fees, the ability to optimize based on orphan rate is taken away: low bandwidth miners must mine full blocks to maximize their profitability. This increases their orphan rate such that they become even less profitable relative to high bandwidth miners than they were when fees were low and blocks were not full.
Do note though, that this fee market centralizing effect is less than the centralizing effect of moving to higher max block size, but it exists nonetheless.
I commented the source code linked above pretty thoroughly, but do let me know if you have any questions if you are inclined to play around with it.
submitted by peoplma to btc [link] [comments]

Technical question: IBLT and IPC

I've been reading about Gavin's good work on invertible bloom lookup tables (IBLT) and also looking at systems to calculate the risk of accepting 0 confirmation transactions, like instant partial confirmations (IPC).
Am I right in thinking that IBLTs could make IPC more efficient/viable as well?
submitted by ej159 to Bitcoin [link] [comments]
