Bitcoin-Qt (Settings) reports incorrect "Size of database ...

Gridcoin 5.0.0.0-Mandatory "Fern" Release

https://github.com/gridcoin-community/Gridcoin-Research/releases/tag/5.0.0.0
Finally! After over ten months of development and testing, "Fern" has arrived! This is a whopper: 240 pull requests merged. Essentially, the complete rewrite that was started with the scraper (the "neural net" rewrite) in "Denise" has now been completed. Practically the ENTIRE Gridcoin-specific codebase resting on top of the vanilla Bitcoin/Peercoin/Blackcoin PoS code has been rewritten. This removes the team requirement at last (see below), although there are many other important improvements besides that.
Fern was a monumental undertaking. We had to encode all of the old rules active for the v10 block protocol in new code and ensure that the new code was 100% compatible. This had to be done in such a way as to clear out all of the old spaghetti and ring-fence it with tightly controlled class implementations. We then wrote an entirely new, simplified ruleset for research rewards and reengineered contracts (which includes beacon management, polls, and voting) using properly classed code. The fundamentals of Gridcoin with this release are now on a very sound and maintainable footing, and the developers believe the codebase as updated here will serve as the fundamental basis for Gridcoin's future roadmap.
We have been testing this for MONTHS on testnet in various stages. The v10 (legacy) compatibility code has been running on testnet continuously as it was developed to ensure compatibility with existing nodes. During the last few months, we have done two private testnet forks and then the full public testnet testing for v11 code (the new protocol which is what Fern implements). The developers have also been running non-staking "sentinel" nodes on mainnet with this code to verify that the consensus rules are problem-free for the legacy compatibility code on the broader mainnet. We believe this amount of testing is going to result in a smooth rollout.
Given the number of changes in Fern, I am presenting TWO changelogs below. One is high level, which summarizes the most significant changes in the protocol. The second changelog is the detailed one in the usual format, and gives you an inkling of the size of this release.

Highlights

Protocol

Note that the protocol changes will not become active until we cross the hard-fork transition height to v11, which has been set at 2053000. Given current average block spacing, this should happen around October 4, about one month from now.
Note that to get all of the beacons in the network on the new protocol, we are requiring ALL beacons to be validated. A two week (14 day) grace period is provided by the code, starting at the time of the transition height, for people currently holding a beacon to validate the beacon and prevent it from expiring. That means that EVERY CRUNCHER must advertise and validate their beacon AFTER the v11 transition (around Oct 4th) and BEFORE October 18th (or more precisely, 14 days from the actual date of the v11 transition). If you do not advertise and validate your beacon by this time, your beacon will expire and you will stop earning research rewards until you advertise and validate a new beacon. This process has been made much easier by a brand new beacon "wizard" that helps manage beacon advertisements and renewals. Once a beacon has been validated and is a v11 protocol beacon, the normal 180 day expiration rules apply. Note, however, that the 180 day expiration on research rewards has been removed with the Fern update. This means that while your beacon might expire after 180 days, your earned research rewards will be retained and can be claimed by advertising a beacon with the same CPID and going through the validation process again. In other words, you do not lose any earned research rewards if you do not stake a block within 180 days and keep your beacon up-to-date.
The transition height is also when the team requirement will be relaxed for the network.

GUI

Besides the beacon wizard, there are a number of improvements to the GUI, including new UI transaction types (and icons) for staking the superblock, sidestake sends, beacon advertisement, voting, poll creation, and transactions with a message. The main screen has been revamped with a better summary section, and better status icons. Several changes under the hood have improved GUI performance. And finally, the diagnostics have been revamped.

Blockchain

The wallet sync speed has been DRASTICALLY improved. A decent machine with a good network connection should be able to sync the entire mainnet blockchain in less than 4 hours. A fast machine with a really fast network connection and a good SSD can do it in about 2.5 hours. One of our goals was to reduce or eliminate the reliance on snapshots for mainnet, and I think we have accomplished that goal with the new sync speed. We have also streamlined the in-memory structures for the blockchain which shaves some memory use.
There are so many goodies here it is hard to summarize them all.
I would like to thank all of the contributors to this release, but especially @cyrossignol, whose incredible contributions formed the backbone of this release. I would also like to pay special thanks to @barton2526, @caraka, and @Quezacoatl1, who tirelessly helped during the testing and polishing phase on testnet with repeated builds for all architectures.
The developers are proud to present this release to the community and we believe this represents the starting point for a true renaissance for Gridcoin!

Summary Changelog

Accrual

Changed

Most significantly, nodes calculate research rewards directly from the magnitudes in EACH superblock between stakes instead of using a two- or three-point average based on a CPID's current magnitude and the magnitude for the CPID when it last staked. For those long-timers in the community, this has been referred to as "Superblock Windows," and was first done in proof-of-concept form by @denravonska.
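To illustrate the difference, here is a rough Python sketch of the two approaches. It is purely illustrative and not the actual Gridcoin implementation; the payout constant and data shapes are invented for the example.

MAG_UNIT = 0.25  # hypothetical GRC paid per unit of magnitude per day

def accrual_superblock_windows(superblocks, last_stake_time, now):
    """New scheme: integrate the CPID's magnitude over every superblock
    interval between the last stake and now (times in seconds)."""
    total = 0.0
    for start, end, magnitude in superblocks:
        # clip each superblock's validity window to the accrual period
        lo, hi = max(start, last_stake_time), min(end, now)
        if hi > lo:
            total += magnitude * MAG_UNIT * (hi - lo) / 86400.0
    return total

def accrual_two_point(mag_then, mag_now, last_stake_time, now):
    """Old scheme: average of the magnitude at the last stake and the
    current magnitude, applied across the whole period."""
    days = (now - last_stake_time) / 86400.0
    return (mag_then + mag_now) / 2.0 * MAG_UNIT * days

# Example: magnitude dropped to 0 for the middle superblock, which the
# per-superblock calculation reflects but a two-point average misses.
sbs = [(0, 86400, 100.0), (86400, 172800, 0.0), (172800, 259200, 100.0)]
print(accrual_superblock_windows(sbs, 0, 259200))   # 50.0
print(accrual_two_point(100.0, 100.0, 0, 259200))   # 75.0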

Removed

Beacons

Added

Changed

Removed

Unaltered

As a reminder:

Superblocks

Added

Changed

Removed

Voting

Added

Changed

Removed

Detailed Changelog

[5.0.0.0] 2020-09-03, mandatory, "Fern"

Added

Changed

Removed

Fixed

submitted by jamescowens to gridcoin [link] [comments]

Is it normal for a new full node server to repeatedly show "Potential stale tip detected" messages? What does this even mean? I surprisingly can't find anything with a solution on the internet. Please look at my log dump and give me an idea of what to do. I'm trying to run a full node in service to the network.

Here are my log messages:

2019-11-14T03:05:38Z Bitcoin Core version v0.18.1 (release build)
2019-11-14T03:05:38Z Assuming ancestors of block 0000000000000000000f1c54590ee18d15ec70e68c8cd4cfbadb1b4f11697eee have valid signatures.
2019-11-14T03:05:38Z Setting nMinimumChainWork=0000000000000000000000000000000000000000051dc8b82f450202ecb3d471
2019-11-14T03:05:38Z Using the 'sse4(1way),sse41(4way)' SHA256 implementation
2019-11-14T03:05:38Z Default data directory /home/norman/.bitcoin
2019-11-14T03:05:38Z Using data directory /media/norman/Seagate Expansion Drive/.bitcoin/
2019-11-14T03:05:38Z Config file: /media/norman/Seagate Expansion Drive/.bitcoin/bitcoin.conf (not found, skipping)
2019-11-14T03:05:38Z Using at most 125 automatic connections (1024 file descriptors available)
2019-11-14T03:05:38Z Using 16 MiB out of 32/2 requested for signature cache, able to store 524288 elements
2019-11-14T03:05:38Z Using 16 MiB out of 32/2 requested for script execution cache, able to store 524288 elements
2019-11-14T03:05:38Z Using 8 threads for script verification
2019-11-14T03:05:38Z scheduler thread start
2019-11-14T03:05:38Z HTTP: creating work queue of depth 16
2019-11-14T03:05:38Z No rpcpassword set - using random cookie authentication.
2019-11-14T03:05:38Z Generated RPC authentication cookie /media/norman/Seagate Expansion Drive/.bitcoin/.cookie
2019-11-14T03:05:38Z HTTP: starting 4 worker threads
2019-11-14T03:05:38Z Using wallet directory /media/norman/Seagate Expansion Drive/.bitcoin/
2019-11-14T03:05:38Z init message: Verifying wallet(s)...
2019-11-14T03:05:38Z Using BerkeleyDB version Berkeley DB 4.8.30: (April 9, 2010)
2019-11-14T03:05:38Z Using wallet /media/norman/Seagate Expansion Drive/.bitcoin/
2019-11-14T03:05:38Z BerkeleyEnvironment::Open: LogDir=/media/norman/Seagate Expansion Drive/.bitcoin/database ErrorFile=/media/norman/Seagate Expansion Drive/.bitcoin/db.log
2019-11-14T03:05:39Z init message: Loading banlist...
2019-11-14T03:05:39Z Cache configuration:
2019-11-14T03:05:39Z * Using 2.0 MiB for block index database
2019-11-14T03:05:39Z * Using 8.0 MiB for chain state database
2019-11-14T03:05:39Z * Using 440.0 MiB for in-memory UTXO set (plus up to 286.1 MiB of unused mempool space)
2019-11-14T03:05:39Z init message: Loading block index...
2019-11-14T03:05:39Z Opening LevelDB in /media/norman/Seagate Expansion Drive/.bitcoin/blocks/index
2019-11-14T03:05:39Z Opened LevelDB successfully
2019-11-14T03:05:39Z Using obfuscation key for /media/norman/Seagate Expansion Drive/.bitcoin/blocks/index: 0000000000000000
2019-11-14T03:05:39Z LoadBlockIndexDB: last block file = 0
2019-11-14T03:05:39Z LoadBlockIndexDB: last block file info: CBlockFileInfo(blocks=1, size=293, heights=0...0, time=2009-01-03...2009-01-03)
2019-11-14T03:05:39Z Checking all blk files are present...
2019-11-14T03:05:39Z Opening LevelDB in /media/norman/Seagate Expansion Drive/.bitcoin/chainstate
2019-11-14T03:05:40Z Opened LevelDB successfully
2019-11-14T03:05:40Z Using obfuscation key for /media/norman/Seagate Expansion Drive/.bitcoin/chainstate: fb03fb54abfe4745
2019-11-14T03:05:40Z Loaded best chain: hashBestChain=000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f height=0 date=2009-01-03T18:15:05Z progress=0.000000
2019-11-14T03:05:40Z init message: Rewinding blocks...
2019-11-14T03:05:40Z init message: Verifying blocks...
2019-11-14T03:05:40Z block index 1516ms
2019-11-14T03:05:40Z init message: Loading wallet...
2019-11-14T03:05:40Z BerkeleyEnvironment::Open: LogDir=/media/norman/Seagate Expansion Drive/.bitcoin/database ErrorFile=/media/norman/Seagate Expansion Drive/.bitcoin/db.log
2019-11-14T03:05:40Z [default wallet] nFileVersion = 180100
2019-11-14T03:05:40Z [default wallet] Keys: 2001 plaintext, 0 encrypted, 2001 w/ metadata, 2001 total. Unknown wallet records: 0
2019-11-14T03:05:41Z [default wallet] Wallet completed loading in 449ms
2019-11-14T03:05:41Z [default wallet] setKeyPool.size() = 2000
2019-11-14T03:05:41Z [default wallet] mapWallet.size() = 0
2019-11-14T03:05:41Z [default wallet] mapAddressBook.size() = 0
2019-11-14T03:05:41Z mapBlockIndex.size() = 1
2019-11-14T03:05:41Z nBestHeight = 0
2019-11-14T03:05:41Z torcontrol thread start
2019-11-14T03:05:41Z Imported mempool transactions from disk: 0 succeeded, 0 failed, 0 expired, 0 already there
2019-11-14T03:05:41Z Bound to [::]:8333
2019-11-14T03:05:41Z Bound to 0.0.0.0:8333
2019-11-14T03:05:41Z init message: Loading P2P addresses...
2019-11-14T03:05:41Z Loaded 253 addresses from peers.dat 16ms
2019-11-14T03:05:41Z init message: Starting network threads...
2019-11-14T03:05:41Z net thread start
2019-11-14T03:05:41Z dnsseed thread start
2019-11-14T03:05:41Z opencon thread start
2019-11-14T03:05:41Z init message: Done loading
2019-11-14T03:05:41Z addcon thread start
2019-11-14T03:05:41Z msghand thread start
2019-11-14T03:05:52Z Loading addresses from DNS seeds (could take a while)
2019-11-14T03:05:54Z 187 addresses found from DNS seeds
2019-11-14T03:05:54Z dnsseed thread exit
2019-11-14T03:37:54Z Potential stale tip detected, will try using extra outbound peer (last tip update: 1890 seconds ago)
2019-11-14T03:48:24Z Potential stale tip detected, will try using extra outbound peer (last tip update: 2520 seconds ago)
2019-11-14T03:58:54Z Potential stale tip detected, will try using extra outbound peer (last tip update: 3150 seconds ago)
2019-11-14T04:09:24Z Potential stale tip detected, will try using extra outbound peer (last tip update: 3780 seconds ago)
2019-11-14T04:19:54Z Potential stale tip detected, will try using extra outbound peer (last tip update: 4410 seconds ago)
2019-11-14T04:30:24Z Potential stale tip detected, will try using extra outbound peer (last tip update: 5040 seconds ago)
2019-11-14T04:40:54Z Potential stale tip detected, will try using extra outbound peer (last tip update: 5670 seconds ago)
2019-11-14T04:51:24Z Potential stale tip detected, will try using extra outbound peer (last tip update: 6300 seconds ago)
2019-11-14T05:01:54Z Potential stale tip detected, will try using extra outbound peer (last tip update: 6930 seconds ago)
2019-11-14T05:12:24Z Potential stale tip detected, will try using extra outbound peer (last tip update: 7560 seconds ago)
2019-11-14T05:22:54Z Potential stale tip detected, will try using extra outbound peer (last tip update: 8190 seconds ago)
2019-11-14T05:33:24Z Potential stale tip detected, will try using extra outbound peer (last tip update: 8820 seconds ago)
2019-11-14T05:43:54Z Potential stale tip detected, will try using extra outbound peer (last tip update: 9450 seconds ago)
2019-11-14T05:54:24Z Potential stale tip detected, will try using extra outbound peer (last tip update: 10080 seconds ago)
2019-11-14T06:04:54Z Potential stale tip detected, will try using extra outbound peer (last tip update: 10710 seconds ago)
2019-11-14T06:15:24Z Potential stale tip detected, will try using extra outbound peer (last tip update: 11340 seconds ago)
2019-11-14T06:25:54Z Potential stale tip detected, will try using extra outbound peer (last tip update: 11970 seconds ago)
2019-11-14T06:36:24Z Potential stale tip detected, will try using extra outbound peer (last tip update: 12600 seconds ago)
2019-11-14T06:46:54Z Potential stale tip detected, will try using extra outbound peer (last tip update: 13230 seconds ago)
2019-11-14T06:57:24Z Potential stale tip detected, will try using extra outbound peer (last tip update: 13860 seconds ago)
2019-11-14T07:07:54Z Potential stale tip detected, will try using extra outbound peer (last tip update: 14490 seconds ago)
2019-11-14T07:18:24Z Potential stale tip detected, will try using extra outbound peer (last tip update: 15120 seconds ago)
2019-11-14T07:28:54Z Potential stale tip detected, will try using extra outbound peer (last tip update: 15750 seconds ago)
2019-11-14T07:39:24Z Potential stale tip detected, will try using extra outbound peer (last tip update: 16380 seconds ago)
2019-11-14T07:49:54Z Potential stale tip detected, will try using extra outbound peer (last tip update: 17010 seconds ago)
2019-11-14T08:00:24Z Potential stale tip detected, will try using extra outbound peer (last tip update: 17640 seconds ago)
2019-11-14T08:10:54Z Potential stale tip detected, will try using extra outbound peer (last tip update: 18270 seconds ago)
2019-11-14T08:21:24Z Potential stale tip detected, will try using extra outbound peer (last tip update: 18900 seconds ago)
2019-11-14T08:31:54Z Potential stale tip detected, will try using extra outbound peer (last tip update: 19530 seconds ago)
2019-11-14T08:42:24Z Potential stale tip detected, will try using extra outbound peer (last tip update: 20160 seconds ago)
2019-11-14T08:52:54Z Potential stale tip detected, will try using extra outbound peer (last tip update: 20790 seconds ago)
2019-11-14T09:03:24Z Potential stale tip detected, will try using extra outbound peer (last tip update: 21420 seconds ago)
2019-11-14T09:13:54Z Potential stale tip detected, will try using extra outbound peer (last tip update: 22050 seconds ago)
2019-11-14T09:24:24Z Potential stale tip detected, will try using extra outbound peer (last tip update: 22680 seconds ago)
2019-11-14T09:34:54Z Potential stale tip detected, will try using extra outbound peer (last tip update: 23310 seconds ago)
2019-11-14T09:45:24Z Potential stale tip detected, will try using extra outbound peer (last tip update: 23940 seconds ago)
2019-11-14T09:55:54Z Potential stale tip detected, will try using extra outbound peer (last tip update: 24570 seconds ago)
2019-11-14T10:06:24Z Potential stale tip detected, will try using extra outbound peer (last tip update: 25200 seconds ago)
2019-11-14T10:16:54Z Potential stale tip detected, will try using extra outbound peer (last tip update: 25830 seconds ago)
2019-11-14T10:27:24Z Potential stale tip detected, will try using extra outbound peer (last tip update: 26460 seconds ago)
2019-11-14T10:37:54Z Potential stale tip detected, will try using extra outbound peer (last tip update: 27090 seconds ago)
2019-11-14T10:48:24Z Potential stale tip detected, will try using extra outbound peer (last tip update: 27720 seconds ago)
2019-11-14T10:58:54Z Potential stale tip detected, will try using extra outbound peer (last tip update: 28350 seconds ago)
2019-11-14T11:09:24Z Potential stale tip detected, will try using extra outbound peer (last tip update: 28980 seconds ago)
2019-11-14T11:19:54Z Potential stale tip detected, will try using extra outbound peer (last tip update: 29610 seconds ago)
2019-11-14T11:30:24Z Potential stale tip detected, will try using extra outbound peer (last tip update: 30240 seconds ago)
2019-11-14T11:40:54Z Potential stale tip detected, will try using extra outbound peer (last tip update: 30870 seconds ago)
2019-11-14T11:51:24Z Potential stale tip detected, will try using extra outbound peer (last tip update: 31500 seconds ago)
2019-11-14T12:01:54Z Potential stale tip detected, will try using extra outbound peer (last tip update: 32130 seconds ago)
2019-11-14T12:12:24Z Potential stale tip detected, will try using extra outbound peer (last tip update: 32760 seconds ago)
2019-11-14T12:22:54Z Potential stale tip detected, will try using extra outbound peer (last tip update: 33390 seconds ago)
2019-11-14T12:33:24Z Potential stale tip detected, will try using extra outbound peer (last tip update: 34020 seconds ago)
2019-11-14T12:43:54Z Potential stale tip detected, will try using extra outbound peer (last tip update: 34650 seconds ago)
2019-11-14T12:54:24Z Potential stale tip detected, will try using extra outbound peer (last tip update: 35280 seconds ago)
2019-11-14T13:04:54Z Potential stale tip detected, will try using extra outbound peer (last tip update: 35910 seconds ago)
2019-11-14T13:15:24Z Potential stale tip detected, will try using extra outbound peer (last tip update: 36540 seconds ago)
2019-11-14T13:25:54Z Potential stale tip detected, will try using extra outbound peer (last tip update: 37170 seconds ago)
2019-11-14T13:36:24Z Potential stale tip detected, will try using extra outbound peer (last tip update: 37800 seconds ago)
2019-11-14T13:46:54Z Potential stale tip detected, will try using extra outbound peer (last tip update: 38430 seconds ago)
2019-11-14T13:57:24Z Potential stale tip detected, will try using extra outbound peer (last tip update: 39060 seconds ago)
2019-11-14T14:07:54Z Potential stale tip detected, will try using extra outbound peer (last tip update: 39690 seconds ago)
2019-11-14T14:18:24Z Potential stale tip detected, will try using extra outbound peer (last tip update: 40320 seconds ago)
2019-11-14T14:28:54Z Potential stale tip detected, will try using extra outbound peer (last tip update: 40950 seconds ago)
2019-11-14T14:39:24Z Potential stale tip detected, will try using extra outbound peer (last tip update: 41580 seconds ago)
2019-11-14T14:49:54Z Potential stale tip detected, will try using extra outbound peer (last tip update: 42210 seconds ago)
2019-11-14T15:00:24Z Potential stale tip detected, will try using extra outbound peer (last tip update: 42840 seconds ago)
2019-11-14T15:10:54Z Potential stale tip detected, will try using extra outbound peer (last tip update: 43470 seconds ago)
2019-11-14T15:21:24Z Potential stale tip detected, will try using extra outbound peer (last tip update: 44100 seconds ago)
2019-11-14T15:31:54Z Potential stale tip detected, will try using extra outbound peer (last tip update: 44730 seconds ago)
2019-11-14T15:42:24Z Potential stale tip detected, will try using extra outbound peer (last tip update: 45360 seconds ago)
2019-11-14T15:52:54Z Potential stale tip detected, will try using extra outbound peer (last tip update: 45990 seconds ago)
2019-11-14T16:03:24Z Potential stale tip detected, will try using extra outbound peer (last tip update: 46620 seconds ago)
2019-11-14T16:13:54Z Potential stale tip detected, will try using extra outbound peer (last tip update: 47250 seconds ago)
2019-11-14T16:24:24Z Potential stale tip detected, will try using extra outbound peer (last tip update: 47880 seconds ago)
2019-11-14T16:34:54Z Potential stale tip detected, will try using extra outbound peer (last tip update: 48510 seconds ago)
2019-11-14T16:45:24Z Potential stale tip detected, will try using extra outbound peer (last tip update: 49140 seconds ago)
2019-11-14T16:55:54Z Potential stale tip detected, will try using extra outbound peer (last tip update: 49770 seconds ago)
2019-11-14T17:06:24Z Potential stale tip detected, will try using extra outbound peer (last tip update: 50400 seconds ago)
2019-11-14T17:16:54Z Potential stale tip detected, will try using extra outbound peer (last tip update: 51030 seconds ago)
2019-11-14T17:27:24Z Potential stale tip detected, will try using extra outbound peer (last tip update: 51660 seconds ago)
2019-11-14T17:37:54Z Potential stale tip detected, will try using extra outbound peer (last tip update: 52290 seconds ago)
2019-11-14T17:48:24Z Potential stale tip detected, will try using extra outbound peer (last tip update: 52920 seconds ago)
2019-11-14T17:58:54Z Potential stale tip detected, will try using extra outbound peer (last tip update: 53550 seconds ago)
2019-11-14T18:09:24Z Potential stale tip detected, will try using extra outbound peer (last tip update: 54180 seconds ago)
2019-11-14T18:19:54Z Potential stale tip detected, will try using extra outbound peer (last tip update: 54810 seconds ago)
2019-11-14T18:30:24Z Potential stale tip detected, will try using extra outbound peer (last tip update: 55440 seconds ago)
2019-11-14T18:40:54Z Potential stale tip detected, will try using extra outbound peer (last tip update: 56070 seconds ago)
2019-11-14T18:51:24Z Potential stale tip detected, will try using extra outbound peer (last tip update: 56700 seconds ago)
2019-11-14T19:01:54Z Potential stale tip detected, will try using extra outbound peer (last tip update: 57330 seconds ago)


This went on until I decided to shut the server down after 24 hours; the blockchain .dat file was stuck at 16 MB in size. It seemed to be going nowhere.
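The "Potential stale tip detected" warning usually just means the node's best block has not advanced for a long time, so it keeps trying extra outbound peers; the underlying cause is typically that it cannot reach usable peers or fetch blocks at all. A quick way to check whether the node has any connections and is actually advancing is to poll its RPC interface; a minimal sketch, assuming the cookie authentication and data directory shown in the log above:

# Minimal progress check against a local bitcoind using cookie auth.
# The data directory path is taken from the log above and may differ.
import requests

COOKIE_PATH = "/media/norman/Seagate Expansion Drive/.bitcoin/.cookie"
user, password = open(COOKIE_PATH).read().strip().split(":", 1)

def rpc(method, params=None):
    payload = {"jsonrpc": "1.0", "id": "check", "method": method, "params": params or []}
    r = requests.post("http://127.0.0.1:8332/", json=payload, auth=(user, password))
    r.raise_for_status()
    return r.json()["result"]

print("connections:", rpc("getconnectioncount"))
info = rpc("getblockchaininfo")
print("blocks:", info["blocks"], "headers:", info["headers"],
      "progress:", info["verificationprogress"])

If the connection count stays at zero or the header count never advances, the node simply cannot reach the network (firewall, DNS, or routing), which is exactly the situation the repeated stale-tip warnings describe.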
submitted by Alchemy333 to Bitcoin [link] [comments]

[DEVELOPMENT] Bitcoind IPV4 testnet port (18332) is failing to bind

[SOLVED] Thanks to everyone who helped!


Hello everyone, this is a development problem that I'm currently having. Since the BTC Development sub is kind of inactive and I couldn't find any rule against posting about BTC development, I'll try my luck in here, as I'm getting desperate. I've posted on BTC Stack Exchange but got no answers there either. Please don't get me wrong: I've been trying to solve this problem for many days now and have looked everywhere for an answer.
I'm new to Bitcoin development and I'm currently having difficulties trying to make RPC calls from a Docker container to a Bitcoin Core daemon running on an SSH server. I suppose that the problem may be with the firewall or closed ports, but I also do not know much about network settings.
I'm using nbobtc/bitcoind-php package to make the RPC calls with HTTP requests, and it is running in a Docker container. I'm sure the container is functional and is not the problem.
So here's what's happening: when I run bitcoind as the root user (a normal user also won't work) on my SSH server, the IPv4 testnet RPC port does not seem to be opened. This message shows up when I run bitcoind:
Binding RPC on address 0.0.0.0 port 18332 failed.
Here's what my bitcoin.conf looks like (I want to use testnet in here). I'm using Bitcoin-Core "subversion": "Satoshi:0.17.1".
server=1
debug=net
txindex=1
testnet=1
rpcuser=userb
rpcpassword=test
test.rpcport=18332
# I've already tried allowing the IP these 3 ways:
# rpcallowip=192.168.xx.xx    # My machine's IP
# rpcallowip=172.19.x.x/xx    # Docker's NBOBTC container IP
# rpcallowip=0.0.0.0/0        # Allowing all IPs
datadir=/home/bitcoin-dev/.bitcoin
debuglogfile=/home/bitcoin-dev/.bitcoin/debug.log
Here's what appears in debug.log right after I run Bitcoind:
2019-05-06T14:43:10Z Bitcoin Core version v0.17.1 (release build)
2019-05-06T14:43:10Z InitParameterInteraction: parameter interaction: -whitelistforcerelay=1 -> setting -whitelistrelay=1
2019-05-06T14:43:10Z Assuming ancestors of block 0000000000000037a8cd3e06cd5edbfe9dd1dbcc5dacab279376ef7cfc2b4c75 have valid signatures.
2019-05-06T14:43:10Z Setting nMinimumChainWork=00000000000000000000000000000000000000000000007dbe94253893cbd463
2019-05-06T14:43:10Z Using the 'sse4(1way),sse41(4way)' SHA256 implementation
2019-05-06T14:43:10Z Default data directory /root/.bitcoin
2019-05-06T14:43:10Z Using data directory /home/bitcoin-dev/.bitcoin/testnet3
2019-05-06T14:43:10Z Using config file /home/bitcoin-dev/.bitcoin/bitcoin.conf
2019-05-06T14:43:10Z Using at most 125 automatic connections (1024 file descriptors available)
2019-05-06T14:43:10Z Using 16 MiB out of 32/2 requested for signature cache, able to store 524288 elements
2019-05-06T14:43:10Z Using 16 MiB out of 32/2 requested for script execution cache, able to store 524288 elements
2019-05-06T14:43:10Z Using 4 threads for script verification
2019-05-06T14:43:10Z scheduler thread start
2019-05-06T14:43:10Z Binding RPC on address 0.0.0.0 port 18332 failed.
2019-05-06T14:43:10Z HTTP: creating work queue of depth 16
2019-05-06T14:43:10Z Config options rpcuser and rpcpassword will soon be deprecated. Locally-run instances may remove rpcuser to use cookie-based auth, or may be replaced with rpcauth. Please see share/rpcauth for rpcauth auth generation.
2019-05-06T14:43:10Z HTTP: starting 4 worker threads
2019-05-06T14:43:10Z Using wallet directory /home/bitcoin-dev/.bitcoin/testnet3/wallets
2019-05-06T14:43:10Z init message: Verifying wallet(s)...
2019-05-06T14:43:10Z Using BerkeleyDB version Berkeley DB 4.8.30: (April 9, 2010)
2019-05-06T14:43:10Z Using wallet wallet.dat
2019-05-06T14:43:10Z BerkeleyEnvironment::Open: LogDir=/home/bitcoin-dev/.bitcoin/testnet3/wallets/database ErrorFile=/home/bitcoin-dev/.bitcoin/testnet3/wallets/db.log
2019-05-06T14:43:10Z net: setting try another outbound peer=false
2019-05-06T14:43:10Z Cache configuration:
2019-05-06T14:43:10Z * Using 2.0MiB for block index database
2019-05-06T14:43:10Z * Using 56.0MiB for transaction index database
2019-05-06T14:43:10Z * Using 8.0MiB for chain state database
2019-05-06T14:43:10Z * Using 384.0MiB for in-memory UTXO set (plus up to 286.1MiB of unused mempool space)
2019-05-06T14:43:10Z init message: Loading block index...
2019-05-06T14:43:10Z Opening LevelDB in /home/bitcoin-dev/.bitcoin/testnet3/blocks/index
2019-05-06T14:43:10Z Opened LevelDB successfully
2019-05-06T14:43:10Z Using obfuscation key for /home/bitcoin-dev/.bitcoin/testnet3/blocks/index: 0000000000000000
2019-05-06T14:43:19Z LoadBlockIndexDB: last block file = 161
2019-05-06T14:43:19Z LoadBlockIndexDB: last block file info: CBlockFileInfo(blocks=755, size=30875345, heights=1513309...1514061, time=2019-04-29...2019-05-03)
2019-05-06T14:43:19Z Checking all blk files are present...
2019-05-06T14:43:20Z Opening LevelDB in /home/bitcoin-dev/.bitcoin/testnet3/chainstate
2019-05-06T14:43:20Z Opened LevelDB successfully
2019-05-06T14:43:20Z Using obfuscation key for /home/bitcoin-dev/.bitcoin/testnet3/chainstate: 2686d59caeb1917c
2019-05-06T14:43:20Z Loaded best chain: hashBestChain=00000000b3b6a5db140b6058b7abe5cb00d8af61afd2a237ae3468cd36e387fa height=927391 date=2016-09-08T15:04:00Z progress=0.311180
2019-05-06T14:43:20Z init message: Rewinding blocks...
2019-05-06T14:43:29Z init message: Verifying blocks...
2019-05-06T14:43:29Z Verifying last 6 blocks at level 3
2019-05-06T14:43:29Z [0%]...[16%]...[33%]...[50%]...[66%]...[83%]...[99%]...[DONE].
2019-05-06T14:43:29Z No coin database inconsistencies in last 6 blocks (500 transactions)
2019-05-06T14:43:29Z block index 19450ms
2019-05-06T14:43:29Z Opening LevelDB in /home/bitcoin-dev/.bitcoin/testnet3/indexes/txindex
2019-05-06T14:43:30Z Opened LevelDB successfully
2019-05-06T14:43:30Z Using obfuscation key for /home/bitcoin-dev/.bitcoin/testnet3/indexes/txindex: 0000000000000000
2019-05-06T14:43:30Z init message: Loading wallet...
2019-05-06T14:43:30Z txindex thread start
2019-05-06T14:43:30Z [default wallet] nFileVersion = 170100
2019-05-06T14:43:30Z [default wallet] Keys: 2005 plaintext, 0 encrypted, 2005 w/ metadata, 2005 total. Unknown wallet records: 1
2019-05-06T14:43:30Z Syncing txindex with block chain from height 694205
2019-05-06T14:43:30Z [default wallet] Wallet completed loading in 123ms
2019-05-06T14:43:30Z [default wallet] setKeyPool.size() = 2000
2019-05-06T14:43:30Z [default wallet] mapWallet.size() = 7
2019-05-06T14:43:30Z [default wallet] mapAddressBook.size() = 4
2019-05-06T14:43:30Z mapBlockIndex.size() = 1515581
2019-05-06T14:43:30Z nBestHeight = 927391
2019-05-06T14:43:30Z torcontrol thread start
2019-05-06T14:43:30Z Bound to [::]:18333
2019-05-06T14:43:30Z Bound to 0.0.0.0:18333
2019-05-06T14:43:30Z init message: Loading P2P addresses...
2019-05-06T14:43:30Z Loaded 10420 addresses from peers.dat 36ms
2019-05-06T14:43:30Z init message: Loading banlist...
2019-05-06T14:43:30Z Loaded 0 banned node ips/subnets from banlist.dat 29ms
2019-05-06T14:43:30Z init message: Starting network threads...
2019-05-06T14:43:30Z net thread start
2019-05-06T14:43:30Z dnsseed thread start
2019-05-06T14:43:30Z addcon thread start
2019-05-06T14:43:30Z msghand thread start
2019-05-06T14:43:30Z init message: Done loading
2019-05-06T14:43:30Z opencon thread start
After all that appears above, there are just "UpdateTip", "Requesting block", "received block" and "getdata" messages (so the P2P port, 18333, works).

And here is what I get when I run netstat:

sudo netstat -nap|grep bitcoin|grep LISTEN
tcp        0      0 0.0.0.0:18333   0.0.0.0:*   LISTEN   31185/bitcoind
tcp6       0      0 :::18332        :::*        LISTEN   31185/bitcoind
tcp6       0      0 :::18333        :::*        LISTEN   31185/bitcoind
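For completeness, here is a small probe that confirms which stacks the RPC port actually answers on when run on the same host as bitcoind; it is just a sketch for checking the symptom the netstat output suggests (RPC reachable over IPv6 only):

# Probe the testnet RPC port over IPv4 and IPv6 separately.
import socket

def can_connect(family, host, port=18332):
    try:
        with socket.socket(family, socket.SOCK_STREAM) as s:
            s.settimeout(2)
            s.connect((host, port))
        return True
    except OSError:
        return False

print("IPv4 127.0.0.1:18332 ->", can_connect(socket.AF_INET, "127.0.0.1"))
print("IPv6 [::1]:18332     ->", can_connect(socket.AF_INET6, "::1"))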
Thank you in advance!

PS: A few days ago I could make it work when running bitcoind as the root user, but now even that won't solve the problem.
submitted by VicPietro to Bitcoin [link] [comments]

memo.cash: IPFS based media support with no changes to the memo protocol

What is IPFS?
IPFS is a peer to peer filesystem, that addresses content rather than hosts. When a person adds a file to IPFS, a hash is calculated and used to address the file. When other people access the file, they get it from the location nearest to them that has the file, and not necessarily the original uploader.
IPFS doesn't actually host files. At least one node has to have the file pinned to guarantee availability, although files can persist in node caches even if there aren't permanent hosts around. Any IPFS node can access any file on IPFS given its hash.
Example: This is the same as this
Why use IPFS with memo.cash?
Memo.cash users are currently hosting their images on imgur and other platforms. While that seems to be working out great for now, we're linking up an immutable decentralized database (the blockchain) to a centralized one, when we don't have to.
Let's say we were to host on IPFS nodes instead. We'd gain these advantages:
What would an implementation look like?
It could look like this.
Hey guys! Check out my new profile pic: i/Qmb8wsGZNXt5VXZh1pEmYynjB6Euqpq3HYyeAdw2vScTkQ
The frontend would resolve the IPFS url using any public IPFS node, and display the image on the page.
The IPFS address is 46 characters in length (+2 for i/), and it wouldn't be much of a problem after the increase in OP_RETURN size. It could refer to a single image, a gallery of images, full length videos, and even raw files. An IPFS hash can also refer to a directory, like this folder containing XKCD comics. You'd need just one link in your memo, and that could resolve to multiple pieces of media. Most of the magic would happen at the frontend.
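As a rough sketch of that frontend-side magic, the resolution could be as simple as rewriting the proposed i/<hash> references to a public gateway URL; the gateway choice and regex below are assumptions for illustration:

# Sketch of frontend-side resolution of the proposed "i/<hash>" convention.
import re

GATEWAY = "https://ipfs.io/ipfs/"                              # any public IPFS gateway
IPFS_REF = re.compile(r"\bi/(Qm[1-9A-HJ-NP-Za-km-z]{44})\b")   # base58 CIDv0, 46 chars

def render_memo(text: str) -> str:
    """Replace i/<hash> references with gateway URLs the browser can load."""
    return IPFS_REF.sub(lambda m: GATEWAY + m.group(1), text)

print(render_memo("Check out my new profile pic: i/Qmb8wsGZNXt5VXZh1pEmYynjB6Euqpq3HYyeAdw2vScTkQ"))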
IPFS needs at least one person to pin files to guarantee availability. This could be done in several ways:
So what changes need to be made?
  1. The memo frontend needs to resolve IPFS links, using either their own node, or a public gateway such as ipfs.io/ipfs/
  2. The memo frontend can make IPFS integration easier by uploading your media and temporarily pinning it
  3. Volunteers (or just the memo hosts) need to run a script that finds IPFS addresses in posts and pins them to their IPFS peer (provided they are images, and not too large, and alongside a policy to eventually GC them); a sketch of such a script follows below
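Here is a sketch of what the volunteer script described in point 3 could look like, assuming a local go-ipfs daemon exposing the standard HTTP API on port 5001; fetch_recent_memo_posts() is a hypothetical stand-in for whatever source of memo posts is used:

# Sketch of a volunteer pinning script (point 3 above).
import re
import requests

IPFS_API = "http://127.0.0.1:5001/api/v0"   # local IPFS daemon HTTP API
IPFS_REF = re.compile(r"\bi/(Qm[1-9A-HJ-NP-Za-km-z]{44})\b")
MAX_SIZE = 5 * 1024 * 1024                  # skip anything larger than 5 MB

def fetch_recent_memo_posts():
    """Hypothetical stand-in: replace with a real source of memo posts."""
    return [{"text": "Check out my new profile pic: i/Qmb8wsGZNXt5VXZh1pEmYynjB6Euqpq3HYyeAdw2vScTkQ"}]

def pin_if_reasonable(cid):
    # object/stat reports the cumulative size without pulling the whole content
    stat = requests.post(f"{IPFS_API}/object/stat", params={"arg": cid}, timeout=30).json()
    if stat.get("CumulativeSize", MAX_SIZE + 1) <= MAX_SIZE:
        requests.post(f"{IPFS_API}/pin/add", params={"arg": cid}, timeout=300)

for post in fetch_recent_memo_posts():
    for cid in IPFS_REF.findall(post["text"]):
        pin_if_reasonable(cid)

Going through the daemon's HTTP API keeps the script independent of any particular IPFS client library.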
It's quite possible I'm missing something. Do tell me if this is the case.
Thanks! bitcoincash:qpanv2sc5jz93nrerlr3tmg0h8qjhla47gu5ma5jxc
submitted by lunaroyster to btc [link] [comments]

Developer Update- 4/4/18

It has been a little bit over a week since version 11 of the Nano Node was released. Currently, around 45% of the network has been upgraded and is running the latest version.
We’ve been monitoring its rollout and working diligently with those reporting issues to diagnose, understand, test and patch bugs that were found. Our team would like to thank all users who have reported any issues they have faced to us, this engagement helps immensely to refine the Nano protocol.
Version 11 – patches and learnings
In version 11.2, found at https://github.com/nanocurrency/raiblocks/releases/tag/V11.2, we have fixed an issue where a cached value was being ignored and when recalculated would cause the node to crash.
Also, work generation was moved outside of database transactions, which could cause large database sizes if work generation was very slow or there were a lot of wallet operations.
Universal Blocks were not the cause of any of these nor was their future rollout via canary block affected.
Exchange questions
Exchanges running nodes are a unique situation due to the scale at which they operate. Our team is available 24/7 to answer any technical questions exchanges may have and to offer full support in resolving any issues they are facing. When a problem arises and it is brought to our attention, our team dedicates full resources to solve the issue as quickly as possible.
Two things to keep in mind:
1) Due to the nature of some questions or issues that arise that are internal to the exchanges’ business or sensitive in their nature, we do not feel it is appropriate to comment on the issues they are facing and we feel it is best to let exchanges to provide updates as they see fit.
2) Many coins that are forks of bitcoin or are ERC-20 tokens have a standard API and integration that exchanges are very familiar with. Because Nano is not a derivative project, our node and API are unique and may experience issues that other coins’ nodes do not.
As a final note, our team would like to extend our thanks to the Binance team for their professionalism, time and dedication in resolving the wallet issues we recently faced. Our team was able to diagnose their problem and get a fix in place, resulting in minimal downtime.
submitted by troyretz to nanocurrency [link] [comments]

Soo after almost 3 months of setting up I have my own LN full node running on RP3

I have been eager to try LN on mainnet since the very beginning. I've found out about lnd, eclair, zap and other wallets, but every scenario I tried failed because of critical issues:
  • eclair does not really constitute a wallet, it's more like a credit card - you can send money but not receive it
  • lnd is okay, but requires a server and tons of resources for maintaining a full node; it can't be used securely, efficiently and on mobile at the same time
  • zap offers some cloud wallet (in testnet!) by default, this is a serious misunderstanding of my cryptoanarchy needs
  • web wallets - ah, forget it
So I've decided to use my Raspberry Pi with a very old laptop HDD attached (200GB so the pruning function has to be used) to create a backend wallet service and zap desktop (temporarily!) as my frontend control panel.
https://preview.redd.it/0vcq147887q11.png?width=1024&format=png&auto=webp&s=7bb6eccdd4110a857e5af0400acc2d7e1ee7ee85
Setting up Pi is easy, lots of tutorials over the internet, not gonna discuss it here. Then I had to obtain bitcoind (current rel: bitcoin-0.17.0-arm-linux-gnueabihf.tar.gz) and lnd (lnd-linux-armv7-v0.5-beta.tar.gz), create a bitcoin technical user, deploy the tools, configure and install new systemd services and go through the configs. This is a tricky part, so let's share:
# Generated by https://jlopp.github.io/bitcoin-core-config-generator
# This config should be placed in following path:
# ~/.bitcoin/bitcoin.conf

# [core]
# Set database cache size in megabytes; machines sync faster with a larger cache. Recommend setting as high as possible based upon machine's available RAM.
dbcache=100
# Keep at most <n> unconnectable transactions in memory.
maxorphantx=10
# Keep the transaction memory pool below <n> megabytes.
maxmempool=50
# Reduce storage requirements by only storing most recent N MiB of block. This mode is incompatible with -txindex and -rescan. WARNING: Reverting this setting requires re-downloading the entire blockchain. (default: 0 = disable pruning blocks, 1 = allow manual pruning via RPC, greater than 550 = automatically prune blocks to stay under target size in MiB).
prune=153600

# [network]
# Maintain at most N connections to peers.
maxconnections=40
# Use UPnP to map the listening port.
upnp=1
# Tries to keep outbound traffic under the given target (in MiB per 24h), 0 = no limit.
maxuploadtarget=5000

# [debug]
# Log IP Addresses in debug output.
logips=1

# [rpc]
# Accept public REST requests.
rest=1

# [wallet]
# Do not load the wallet and disable wallet RPC calls.
disablewallet=1

# [zeromq]
# Enable publishing of raw block hex to <address>.
zmqpubrawblock=tcp://127.0.0.1:28332
# Enable publishing of raw transaction hex to <address>.
zmqpubrawtx=tcp://127.0.0.1:28333

# [rpc]
# Accept command line and JSON-RPC commands.
server=1
# Username and hashed password for JSON-RPC connections. The field comes in the format: <USERNAME>:<SALT>$<HASH>. RPC clients connect using rpcuser=<USERNAME>/rpcpassword=<PASSWORD> arguments. You can generate this value with the ./share/rpcauth/rpcauth.py script in the Bitcoin Core repository. This option can be specified multiple times.
rpcauth=xxx:yyy$zzz
Whooaa, this online config generator is really helpful, but I still had to manually correct a few things. The last line is obviously generated by rpcauth.py, I disabled the wallet functionality as lnd is going to take care of my funds. ZMQ is not available to the network so only my LND can use it, RPC usage I still have to think through a little, in general I would like to have my own block explorer some day but also be safe from any hacking attempts (thus I would need at least 2 RPC ports/user accounts - one for lnd, one for block explorer frontend). No ports open on firewall at this time, only UPnP is active and gently opens 8333 for block/tx transfers.
Now, synchronizing the blockchain took me from mid-July to early September... The hard drive is really slow, and my external HDD has some trouble with its A/C adapter, so the Pi was getting undervoltage alerts all the time. Luckily, it just downclocks when that happens, and it slowly but steadily synchronized the whole history. After all, I'm not paying even $5 monthly for a VPS; it is by design the cheapest hardware I could use to set up my LN wallet.
When bitcoind was ready (I've heard some stories about btcd but I don't trust this software yet, sorry), it was time to configure lnd.conf:
[Application Options]
debuglevel=trace
rpclisten=0.0.0.0:10009
externalip=X.X.X.X:9735
listen=0.0.0.0:9735
alias=X
color=#XXXXXX

[Bitcoin]
bitcoin.active=1
bitcoin.mainnet=1
bitcoin.node=bitcoind

[Bitcoind]
bitcoind.rpchost=127.0.0.1
bitcoind.rpcuser=X
bitcoind.rpcpass=X
bitcoind.zmqpubrawblock=tcp://127.0.0.1:28332
bitcoind.zmqpubrawtx=tcp://127.0.0.1:28333
Here I've had to XXX a little more fields, as not only the bitcoind RPC credentials are stored here, but also my node's public information (it should be illegal to run nodes without specifically selected color and alias!). It is public (and I had to open port 9735 on my firewall), but not necessarily connected to my reddit account for most of the adversaries, so let's keep it this way. In fact, I also see a security vulnerability here: my whole node's stability depends on the IP being static. I could swap it for a .tk domain but who can tell if the bad guys won't actively fight DNS system in order to prevent global economic revolution? As such, I would rather see node identification in LN based on a public key only with possible *hints* of last-known-ip-address but the whole discovery should be performed by the nodes themself in a p2p manner, obviously preventing malicious actors from poisoning the network in some way. For now, I consider the IP stability a weak link and will probably have to pay extra Bitcoin TX fees when something happens to it (not much of a cost luckily!).

https://preview.redd.it/hjd1nooo77q11.png?width=741&format=png&auto=webp&s=14214fc36e3edf139faade930f4069fc31a3e883
Okay then, lnd is up and running; I had to create a wallet and give it a night to get up to speed. I don't really know what took it so long; I'm not using Windows nor 'localhost' in the config, so issues like #1027 are not the case. But there are others like #1545 still open, so I'm not going to ponder much on this. I haven't really got any idea how to automatically unlock the wallet after a Pi restart (which could happen any time!), especially since I only tried to unlock it locally with lncli (why would I enter the password anywhere outside that host?), but let's say that my wallet will only be as stable as my cheap hardware. That's okay for the beta phase.
Finally, zap-desktop required me to copy tls.cert and admin.macaroon files to my desktop. If my understanding of macaroon (it's like an authentication cookie, that can later be revoked) is correct then it's not an issue, however it would be nice to have a "$50 daily limit" macaroon file in the future too, just to avoid any big issues when my client machine gets stolen. Thanks to this, I can ignore the silly cloud-based modes and have fully-secure environment of my home network being the only link from me to my money.
https://preview.redd.it/11bw3dgw47q11.png?width=836&format=png&auto=webp&s=b7fa7c88d14f22441cbbfc0db036cddfd7ea8424
Aaand there it is. The IP took some time to advertise, I use 1ml.com to see if my node is there. The zap interface (ZapDesktop-linux-amd64-v0.2.2-beta.deb) lacks lots of useful information so I keep learning lncli syntax to get more data about my new peers or the routes offered. The transactions indeed run fast and are ridiculously cheap. I would really love to run Eclair with the same settings but it doesn't seem to support custom lnd (why?). In fact, since all I need is really a lncli wrapper, maybe it will be easy to write my own (seen some web gui which weighs 700MB after downloading all dependencies with npm - SICK!). Zap for iOS alpha test registration is DOWN so I couldn't try it (and I'm not sure if it allows custom lnd selection), Zap for Android doesn't even exist yet... I made a few demo transactions and now I will explore all those fancy t-shirt stores as long as the prices are still in "early investor" mode - I remember times when one could get 0.001 BTC from a faucet...
https://preview.redd.it/42sdyoce57q11.png?width=836&format=png&auto=webp&s=7ec8917eaf8f3329d51ce3e30e455254027de0ee
If you find any of the facts presented by me to be false, I am happy to discuss them further. However, what I did, I did mostly for fun, without paying much attention to the source code, documentation and endless issue lists on GitHub. By no means do I claim this tutorial will work for you, but I do think I shared the key points and effort estimations to help others decide if they want a full-node LN client too. I'm also interested in ideas on what to do with it next (rather unlikely that I will share my lnd admin.macaroon with anyone!), especially if it gives me free money. For example, I could open 1000 channels and start earning money from fees, although I no longer have more Bitcoins than the LN capacity yields... I will probably keep updating the software on my Pi until it leaves the beta phases and only then pour more money in. I'm also keen on improving the general security of my rig, and those comments I will answer more seriously.
submitted by pabou to Bitcoin [link] [comments]

Merkle Trees and Mountain Ranges - Making UTXO Set Growth Irrelevant With Low-Latency Delayed TXO Commitments

Original link: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-May/012715.html
Unedited text and originally written by:

Peter Todd pete at petertodd.org
Tue May 17 13:23:11 UTC 2016
Previous message: [bitcoin-dev] Bip44 extension for P2SH/P2WSH/...
Next message: [bitcoin-dev] Making UTXO Set Growth Irrelevant With Low-Latency Delayed TXO Commitments
# Motivation

UTXO growth is a serious concern for Bitcoin's long-term decentralization. To
run a competitive mining operation potentially the entire UTXO set must be in
RAM to achieve competitive latency; your larger, more centralized, competitors
will have the UTXO set in RAM. Mining is a zero-sum game, so the extra latency
of not doing so if they do directly impacts your profit margin. Secondly,
having possession of the UTXO set is one of the minimum requirements to run a
full node; the larger the set the harder it is to run a full node.

Currently the maximum size of the UTXO set is unbounded as there is no
consensus rule that limits growth, other than the block-size limit itself; as
of writing the UTXO set is 1.3GB in the on-disk, compressed serialization,
which expands to significantly more in memory. UTXO growth is driven by a
number of factors, including the fact that there is little incentive to merge
inputs, lost coins, dust outputs that can't be economically spent, and
non-btc-value-transfer "blockchain" use-cases such as anti-replay oracles and
timestamping.

We don't have good tools to combat UTXO growth. Segregated Witness proposes to
give witness space a 75% discount, in part to make reducing the UTXO set size
by spending txouts cheaper. While this may change wallets to more often spend
dust, it's hard to imagine an incentive sufficiently strong to discourage most,
let alone all, UTXO growing behavior.

For example, timestamping applications often create unspendable outputs due to
ease of implementation, and because doing so is an easy way to make sure that
the data required to reconstruct the timestamp proof won't get lost - all
Bitcoin full nodes are forced to keep a copy of it. Similarly anti-replay
use-cases like using the UTXO set for key rotation piggyback on the uniquely
strong security and decentralization guarantee that Bitcoin provides; it's very
difficult - perhaps impossible - to provide these applications with
alternatives that are equally secure. These non-btc-value-transfer use-cases
can often afford to pay far higher fees per UTXO created than competing
btc-value-transfer use-cases; many users could afford to spend $50 to register
a new PGP key, yet would rather not spend $50 in fees to create a standard two
output transaction. Effective techniques to resist miner censorship exist, so
without resorting to whitelists blocking non-btc-value-transfer use-cases as
"spam" is not a long-term, incentive compatible, solution.

A hard upper limit on UTXO set size could create a more level playing field in
the form of fixed minimum requirements to run a performant Bitcoin node, and
make the issue of UTXO "spam" less important. However, making any coins
unspendable, regardless of age or value, is a politically untenable economic
change.


# TXO Commitments

With a merkle tree committing to the state of all transaction outputs, both spent
and unspent, we can provide a method of compactly proving the current state of
an output. This lets us "archive" less frequently accessed parts of the UTXO
set, allowing full nodes to discard the associated data, still providing a
mechanism to spend those archived outputs by proving to those nodes that the
outputs are in fact unspent.

Specifically TXO commitments proposes a Merkle Mountain Range¹ (MMR), a
type of deterministic, indexable, insertion ordered merkle tree, which allows
new items to be cheaply appended to the tree with minimal storage requirements,
just log2(n) "mountain tips". Once an output is added to the TXO MMR it is
never removed; if an output is spent its status is updated in place. Both the
state of a specific item in the MMR, as well the validity of changes to items
in the MMR, can be proven with log2(n) sized proofs consisting of a merkle path
to the tip of the tree.
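To make the mechanics concrete, here is a toy append-only MMR in Python that keeps only the mountain tips; it is purely illustrative and not the implementation proposed here (the hashing domain tags and bagging step are arbitrary choices for the sketch):

# Toy Merkle Mountain Range: append-only, insertion ordered, keeps only the tips.
import hashlib

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

class MMR:
    def __init__(self):
        self.tips = []   # (height, digest) of each perfect binary subtree ("mountain")
        self.size = 0

    def append(self, leaf: bytes) -> None:
        node, height = h(b"leaf", leaf), 0
        # Merging equal-height mountains works like carries in binary addition.
        while self.tips and self.tips[-1][0] == height:
            _, left = self.tips.pop()
            node, height = h(b"node", left, node), height + 1
        self.tips.append((height, node))
        self.size += 1

    def root(self) -> bytes:
        # "Bag" the mountain tips right-to-left into a single commitment digest.
        digest = None
        for _, tip in reversed(self.tips):
            digest = tip if digest is None else h(b"bag", tip, digest)
        return digest if digest is not None else h(b"empty")

mmr = MMR()
for txout in [b"txout-a", b"txout-b", b"txout-c", b"txout-d", b"txout-e"]:
    mmr.append(txout)
print(len(mmr.tips), "mountain tips for", mmr.size, "appended outputs")
print("commitment:", mmr.root().hex())

Since only the tips need to be retained to keep appending, storage for the appender stays at log2(n) digests, which is the property the proposal relies on.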

At an extreme, with TXO commitments we could even have no UTXO set at all,
entirely eliminating the UTXO growth problem. Transactions would simply be
accompanied by TXO commitment proofs showing that the outputs they wanted to
spend were still unspent; nodes could update the state of the TXO MMR purely
from TXO commitment proofs. However, the log2(n) bandwidth overhead per txin is
substantial, so a more realistic implementation is to have a UTXO cache for
recent transactions, with TXO commitments acting as an alternative for the (rare)
event that an old txout needs to be spent.

Proofs can be generated and added to transactions without the involvement of
the signers, even after the fact; there's no need for the proof itself to
signed and the proof is not part of the transaction hash. Anyone with access to
TXO MMR data can (re)generate missing proofs, so minimal, if any, changes are
required to wallet software to make use of TXO commitments.


## Delayed Commitments

TXO commitments aren't a new idea - the author proposed them years ago in
response to UTXO commitments. However it's critical for small miners' orphan
rates that block validation be fast, and so far it has proven difficult to
create (U)TXO implementations with acceptable performance; updating and
recalculating cryptographically hashed merkelized datasets is inherently more
work than not doing so. Fortunately if we maintain a UTXO set for recent
outputs, TXO commitments are only needed when spending old, archived, outputs.
We can take advantage of this by delaying the commitment, allowing it to be
calculated well in advance of it actually being used, thus changing a
latency-critical task into a much easier average throughput problem.

Concretely each block B_i commits to the TXO set state as of block B_{i-n}, in
other words what the TXO commitment would have been n blocks ago, if not for
the n block delay. Since that commitment only depends on the contents of the
blockchain up until block B_{i-n}, the contents of any block after are
irrelevant to the calculation.


## Implementation

Our proposed high-performance/low-latency delayed commitment full-node
implementation needs to store the following data:

1) UTXO set

Low-latency K:V map of txouts definitely known to be unspent. Similar to
existing UTXO implementation, but with the key difference that old,
unspent, outputs may be pruned from the UTXO set.


2) STXO set

Low-latency set of transaction outputs known to have been spent by
transactions after the most recent TXO commitment, but created prior to the
TXO commitment.


3) TXO journal

FIFO of outputs that need to be marked as spent in the TXO MMR. Appends
must be low-latency; removals can be high-latency.


4) TXO MMR list

Prunable, ordered list of TXO MMR's, mainly the highest pending commitment,
backed by a reference counted, cryptographically hashed object store
indexed by digest (similar to how git repos work). High-latency ok. We'll
cover this in more detail later.


### Fast-Path: Verifying a Txout Spend In a Block

When a transaction output is spent by a transaction in a block we have two
cases:

1) Recently created output

Output created after the most recent TXO commitment, so it should be in the
UTXO set; the transaction spending it does not need a TXO commitment proof.
Remove the output from the UTXO set and append it to the TXO journal.

2) Archived output

Output created prior to the most recent TXO commitment, so there's no
guarantee it's in the UTXO set; transaction will have a TXO commitment
proof for the most recent TXO commitment showing that it was unspent.
Check that the output isn't already in the STXO set (double-spent), and if
not add it. Append the output and TXO commitment proof to the TXO journal.

In both cases recording an output as spent requires no more than two key:value
updates, and one journal append. The existing UTXO set requires one key:value
update per spend, so we can expect new block validation latency to be within 2x
of the status quo even in the worst case of 100% archived output spends.
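A simplified sketch of those two paths follows; the sets and journal below are stand-ins for the structures listed above, and proves_unspent() is a placeholder for the actual proof verification:

# Compressed sketch of the fast-path spend handling described above.
utxo_set = set()      # outpoints known unspent, created after the last commitment
stxo_set = set()      # outpoints created before the commitment but spent after it
txo_journal = []      # FIFO of spends still to be applied to the TXO MMR

class SpendError(Exception):
    pass

def spend_output(outpoint, txo_proof=None):
    if outpoint in utxo_set:
        # Case 1: recently created output; no proof needed.
        utxo_set.remove(outpoint)
        txo_journal.append((outpoint, None))
    else:
        # Case 2: archived output; must carry a TXO commitment proof.
        if txo_proof is None or not txo_proof.proves_unspent(outpoint):
            raise SpendError("missing or invalid TXO commitment proof")
        if outpoint in stxo_set:
            raise SpendError("double spend of archived output")
        stxo_set.add(outpoint)
        txo_journal.append((outpoint, txo_proof))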


### Slow-Path: Calculating Pending TXO Commitments

In a low-priority background task we flush the TXO journal, recording the
outputs spent by each block in the TXO MMR, and hashing MMR data to obtain the
TXO commitment digest. Additionally this background task removes STXO's that
have been recorded in TXO commitments, and prunes TXO commitment data no longer
needed.

Throughput for the TXO commitment calculation will be worse than the existing
UTXO only scheme. This impacts bulk verification, e.g. initial block download.
That said, TXO commitments provides other possible tradeoffs that can mitigate
impact of slower validation throughput, such as skipping validation of old
history, as well as fraud proof approaches.


### TXO MMR Implementation Details

Each TXO MMR state is a modification of the previous one with most information
shared, so we can space-efficiently store a large number of TXO commitment
states, where each state is a small delta of the previous state, by sharing
unchanged data between each state; cycles are impossible in merkelized data
structures, so simple reference counting is sufficient for garbage collection.
Data no longer needed can be pruned by dropping it from the database, and
unpruned by adding it again. Since everything is committed to via cryptographic
hash, we're guaranteed that regardless of where we get the data, after
unpruning we'll have the right data.

Let's look at how the TXO MMR works in detail. Consider the following TXO MMR
with two txouts, which we'll call state #0:

0
/ \
a b

If we add another entry we get state #1:

1
/ \
0 \
/ \ \
a b c

Note how 100% of the state #0 data was reused in commitment #1. Let's
add two more entries to get state #2:

2
/ \
2 \
/ \ \
/ \ \
/ \ \
0 2 \
/ \ / \ \
a b c d e

This time part of state #1 wasn't reused - it wasn't a perfect binary
tree - but we've still got a lot of re-use.

Now suppose state #2 is committed into the blockchain by the most recent block.
Future transactions attempting to spend outputs created as of state #2 are
obliged to prove that they are unspent; essentially they're forced to provide
part of the state #2 MMR data. This lets us prune that data, discarding it,
leaving us with only the bare minimum data we need to append new txouts to the
TXO MMR, the tips of the perfect binary trees ("mountains") within the MMR:

2
/ \
2 \
\
\
\
\
\
e

Note that we're glossing over some nuance here about exactly what data needs to
be kept; depending on the details of the implementation the only data we need
for nodes "2" and "e" may be their hash digest.

Adding another three more txouts results in state #3:

3
/ \
/ \
/ \
/ \
/ \
/ \
/ \
2 3
/ \
/ \
/ \
3 3
/ \ / \
e f g h

Suppose recently created txout f is spent. We have all the data required to
update the MMR, giving us state #4. It modifies two inner nodes and one leaf
node:

4
/ \
/ \
/ \
/ \
/ \
/ \
/ \
2 4
/ \
/ \
/ \
4 3
/ \ / \
e (f) g h

If an archived txout is spent, the transaction is required to provide the merkle
path to the most recently committed TXO, in our case state #2. If txout b is
spent that means the transaction must provide the following data from state #2:

2
/
2
/
/
/
0
\
b

We can add that data to our local knowledge of the TXO MMR, unpruning part of
it:

4
/ \
/ \
/ \
/ \
/ \
/ \
/ \
2 4
/ / \
/ / \
/ / \
0 4 3
\ / \ / \
b e (f) g h
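The proof data the spender supplies is just a hash path from txout b up to the
committed state #2 digest; checking it (and hence deciding that it is safe to
unprune) is the usual merkle-path verification. A minimal sketch, with hashing
conventions that are assumptions of mine rather than anything specified here:

    import hashlib

    def H(*parts: bytes) -> bytes:
        return hashlib.sha256(b"".join(parts)).digest()

    def verify_txo_proof(leaf: bytes, path, committed_root: bytes) -> bool:
        """path is a list of (sibling_digest, sibling_is_left) pairs from the
        leaf up to the root of the most recent TXO commitment."""
        digest = H(b"leaf", leaf)
        for sibling, sibling_is_left in path:
            if sibling_is_left:
                digest = H(b"inner", sibling, digest)
            else:
                digest = H(b"inner", digest, sibling)
        return digest == committed_root

If the check passes, every parent/child pair recomputed along the way is
exactly the data we can safely unprune into our local copy of the MMR.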

Remember, we haven't _modified_ state #4 yet; we just have more data about it.
When we mark txout b as spent we get state #5:

5
/ \
/ \
/ \
/ \
/ \
/ \
/ \
5 4
/ / \
/ / \
/ / \
5 4 3
\ / \ / \
(b) e (f) g h

Secondly by now state #3 has been committed into the chain, and transactions
that want to spend txouts created as of state #3 must provide a TXO proof
consisting of state #3 data. The leaf nodes for outputs g and h, and the inner
node above them, are part of state #3, so we prune them:

5
/ \
/ \
/ \
/ \
/ \
/ \
/ \
5 4
/ /
/ /
/ /
5 4
\ / \
(b) e (f)

Finally, let's put this all together, by spending txouts a, c, and g, and
creating three new txouts i, j, and k. State #3 was the most recently committed
state, so the transactions spending a and g are providing merkle paths up to
it. This includes part of the state #2 data:

3
/ \
/ \
/ \
/ \
/ \
/ \
/ \
2 3
/ \ \
/ \ \
/ \ \
0 2 3
/ / /
a c g

After unpruning we have the following data for state #5:

5
/ \
/ \
/ \
/ \
/ \
/ \
/ \
5 4
/ \ / \
/ \ / \
/ \ / \
5 2 4 3
/ \ / / \ /
a (b) c e (f) g

That's sufficient to mark the three outputs as spent and add the three new
txouts, resulting in state #6:

6
/ \
/ \
/ \
/ \
/ \
6 \
/ \ \
/ \ \
/ \ \
/ \ \
/ \ \
/ \ \
/ \ \
6 6 \
/ \ / \ \
/ \ / \ 6
/ \ / \ / \
6 6 4 6 6 \
/ \ / / \ / / \ \
(a) (b) (c) e (f) (g) i j k

Again, state #4 related data can be pruned. In addition, depending on how the
STXO set is implemented, we may also be able to prune data related to spent
txouts after that state, including inner nodes where all txouts under them have
been spent (more on pruning spent inner nodes later).


### Consensus and Pruning

It's important to note that pruning behavior is consensus critical: a full node
that is missing data due to pruning it too soon will fall out of consensus, and
a miner that fails to include a merkle proof that is required by the consensus
is creating an invalid block. At the same time many full nodes will have
significantly more data on hand than the bare minimum so they can help wallets
make transactions spending old coins; implementations should strongly consider
separating the data that is, and isn't, strictly required for consensus.

A reasonable approach for the low-level cryptography may be to actually treat
the two cases differently, with the TXO commitments committing to what data
does and does not need to be kept on hand by the UTXO expiration rules. On the
other hand, leaving that uncommitted allows for certain types of soft-forks
where the protocol is changed to require more data than it previously did.


### Consensus Critical Storage Overheads

Only the UTXO and STXO sets need to be kept on fast random access storage.
Since STXO set entries can only be created by spending a UTXO - and are smaller
than a UTXO entry - we can guarantee that the peak size of the UTXO and STXO
sets combined will always be less than the peak size of the UTXO set alone in
the existing UTXO-only scheme (though the combined size can be temporarily
higher than what the UTXO set size alone would be when large numbers of
archived txouts are spent).

TXO journal entries and unpruned entries in the TXO MMR have log2(n) maximum
overhead per entry: a unique merkle path to a TXO commitment (by "unique" we
mean that no other entry shares data with it). On a reasonably fast system the
TXO journal will be flushed quickly, converting it into TXO MMR data; the TXO
journal will never be more than a few blocks in size.

Transactions spending non-archived txouts are not required to provide any TXO
commitment data; we must have that data on hand in the form of one TXO MMR
entry per UTXO. Once spent, however, the TXO MMR leaf node associated with that
non-archived txout can be immediately pruned - it's no longer in the UTXO set
so any attempt to spend it will fail; the data is now immutable and we'll never
need it again. Inner nodes in the TXO MMR can also be pruned if all leaves under
them are fully spent; detecting this is easy: the TXO MMR is a merkle-sum tree,
with each inner node committing to the sum of the unspent txouts under it.
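For illustration, a merkle-sum inner node can be sketched as hashing its
children together with the unspent total beneath it; a subtree whose total has
dropped to zero can never be needed for a future spend and is safe to prune.
The hash layout below is my own assumption, not a specification:

    import hashlib
    from collections import namedtuple

    SumNode = namedtuple("SumNode", ["digest", "unspent", "left", "right"])

    def H(*parts: bytes) -> bytes:
        return hashlib.sha256(b"".join(parts)).digest()

    def make_inner(left: SumNode, right: SumNode) -> SumNode:
        unspent = left.unspent + right.unspent
        digest = H(b"sum-inner", unspent.to_bytes(8, "big"),
                   left.digest, right.digest)
        return SumNode(digest, unspent, left, right)

    def prunable(node: SumNode) -> bool:
        # Every txout under this node is spent, so no future transaction can
        # ever need its data: the whole subtree may be dropped.
        return node.unspent == 0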

When an archived txout is spent, the transaction is required to provide a merkle
path to the most recent TXO commitment. As shown above that path is sufficient
information to unprune the necessary nodes in the TXO MMR and apply the spend
immediately, reducing this case to the TXO journal size question (non-consensus
critical overhead is a different question, which we'll address in the next
section).

Taking all this into account, the only significant storage overhead of our TXO
commitments scheme when compared to the status quo is the log2(n) merkle path
overhead; as long as less than 1/log2(n) of the UTXO set consists of active,
non-archived UTXO's, we've come out ahead, even in the unrealistic case where
all available storage is equally fast. In the real world that isn't yet the
case - even SSD's are significantly slower than RAM.
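To put a rough number on that break-even point (the total-txout figure below is
purely an illustrative assumption):

    import math

    total_txouts = 1_000_000_000                      # illustrative assumption
    merkle_path_overhead = math.log2(total_txouts)    # ~29.9 nodes per archived entry
    break_even_active_fraction = 1 / merkle_path_overhead
    print(f"{break_even_active_fraction:.1%}")        # ~3.3% of the TXO set active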


### Non-Consensus Critical Storage Overheads

Transactions spending archived txouts pose two challenges:

1) Obtaining up-to-date TXO commitment proofs

2) Updating those proofs as blocks are mined

The first challenge can be handled by specialized archival nodes, not unlike
how some nodes make transaction data available to wallets via bloom filters or
the Electrum protocol. There's a whole variety of options available, and the
data can easily be sharded to scale horizontally; since the data is
self-validating, that horizontal scaling requires no trust.

While miners and relay nodes don't need to be concerned about the initial
commitment proof, updating that proof is another matter. If a node aggressively
prunes old versions of the TXO MMR as it calculates pending TXO commitments, it
won't have the data available to update the TXO commitment proof to be against
the next block, when that block is found; the child nodes of the TXO MMR tip
are guaranteed to have changed, yet aggressive pruning would have discarded that
data.

Relay nodes could ignore this problem if they simply accept the fact that
they'll only be able to fully relay the transaction once, when it is initially
broadcast, and won't be able to provide mempool functionality after the initial
relay. Modulo high-latency mixnets, this is probably acceptable; the author has
previously argued that relay nodes don't need a mempool² at all.

For a miner, though, not having the data necessary to update the proofs as
blocks are found means potentially losing out on transaction fees. So how much extra
data is necessary to make this a non-issue?

Since the TXO MMR is insertion ordered, spending a non-archived txout can only
invalidate the upper nodes of an archived txout's TXO MMR proof (if this
isn't clear, imagine a two-level scheme, with per-block TXO MMRs committed
by a master MMR for all blocks). The maximum number of relevant inner nodes
changed is log2(n) per block, so if there are n non-archival blocks between the
most recent TXO commitment and the pending TXO MMR tip, we have to store
log2(n)*n inner nodes - on the order of a few dozen MB even when n is a
(seemingly ridiculously high) year's worth of blocks.

Archived txout spends on the other hand can invalidate TXO MMR proofs at any
level - consider the case of two adjacent txouts being spent. To guarantee
success requires storing full proofs. However, they're limited by the blocksize
limit, and additionally are expected to be relatively uncommon. For example, if
1% of 1MB blocks were archival spends, our hypothetical year-long TXO commitment
delay amounts to only a few hundred MB of data with low IO-performance
requirements.
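A quick sanity check of those two estimates (the 32-byte digest size and
52,560 blocks per year are assumptions of mine):

    import math

    blocks_per_year = 365 * 24 * 6     # ~52,560 ten-minute blocks (assumption)
    digest_size = 32                   # bytes per inner-node hash (assumption)

    # Non-archival spends: up to log2(n) changed inner nodes per block.
    inner_nodes = blocks_per_year * math.log2(blocks_per_year)
    print(f"{inner_nodes * digest_size / 2**20:.0f} MiB")   # ~25 MiB - "a few dozen MB"

    # Archival spends: 1% of 1 MB blocks carrying full proofs for a year.
    archival_proof_bytes = blocks_per_year * 1_000_000 * 0.01
    print(f"{archival_proof_bytes / 2**20:.0f} MiB")        # ~500 MiB - "a few hundred MB"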


## Security Model

Of course, a TXO commitment delay of a year sounds ridiculous. Even the slowest
imaginable computer isn't going to need more than a few blocks of TXO
commitment delay to keep up ~100% of the time, and there's no reason why we
can't have the UTXO archive delay be significantly longer than the TXO
commitment delay.

However, as with UTXO commitments, TXO commitments raise issues with Bitcoin's
security model by allowing miners to profitably mine transactions without
bothering to validate prior history. At the extreme, if there were no
commitment delay at all, then at the cost of some extra network bandwidth
"full" nodes could operate and even mine blocks completely statelessly by
expecting all transactions to include "proof" that their inputs are unspent; a
TXO commitment proof for a commitment you haven't verified isn't a proof that a
transaction output is unspent, it's a proof that some miners claimed the txout
was unspent.

At one extreme, we could simply implement TXO commitments in a "virtual"
fashion, without miners actually including the TXO commitment digest in their
blocks at all. Full nodes would be forced to compute the commitment from
scratch, in the same way they are forced to compute the UTXO state, or total
work. Of course a full node operator who doesn't want to verify old history can
get a copy of the TXO state from a trusted source - no different from how you
could get a copy of the UTXO set from a trusted source.

A more pragmatic approach is to accept that people will do that anyway, and
instead assume that sufficiently old blocks are valid. But how old is
"sufficiently old"? First of all, if your full node implementation comes "from
the factory" with a reasonably up-to-date minimum accepted total-work
thresholdⁱ - in other words it won't accept a chain with less than that amount
of total work - it may be reasonable to assume any Sybil attacker with
sufficient hashing power to make a forked chain meeting that threshold with,
say, six months worth of blocks has enough hashing power to threaten the main
chain as well.

That leaves public attempts to falsify TXO commitments, done out in the open by
the majority of hashing power. In this circumstance the "assumed valid"
threshold determines how long the attack would have to go on before full nodes
start accepting the invalid chain, or at least, newly installed/recently reset
full nodes. The minimum age that we can "assume valid" is a tradeoff between
political/social/technical concerns; we probably want at least a few weeks to
guarantee the defenders a chance to organise themselves.

With this in mind, a longer-than-technically-necessary TXO commitment delayʲ
may help ensure that full node software actually validates some minimum number
of blocks out-of-the-box, without taking shortcuts. However this can be
achieved in a wide variety of ways, such as the author's prev-block-proof
proposal³, fraud proofs, or even a PoW with an inner loop dependent on
blockchain data. Like UTXO commitments, TXO commitments are also potentially
very useful in reducing the need for SPV wallet software to trust third parties
providing them with transaction data.

i) Checkpoints that reject any chain without a specific block are a more
common, if uglier, way of achieving this protection.

j) A good homework problem is to figure out how the TXO commitment could be
designed such that the delay could be reduced in a soft-fork.


## Further Work

While we've shown that TXO commitments certainly could be implemented without
increasing peak IO bandwidth/block validation latency significantly with the
delayed commitment approach, we're far from being certain that they should be
implemented this way (or at all).

1) Can a TXO commitment scheme be optimized sufficiently to be used directly
without a commitment delay? Obviously it'd be preferable to avoid all the above
complexity entirely.

2) Is it possible to use a metric other than age, e.g. priority? While this
complicates the pruning logic, it could use the UTXO set space more
efficiently, especially if your goal is to prioritise bitcoin value-transfer
over other uses (though if "normal" wallets nearly never need to use TXO
commitments proofs to spend outputs, the infrastructure to actually do this may
rot).

3) Should UTXO archiving be based on a fixed size UTXO set, rather than an
age/priority/etc. threshold?

4) By fixing the problem (or possibly just "fixing" the problem) are we
encouraging/legitimising blockchain use-cases other than BTC value transfer?
Should we?

5) Instead of TXO commitment proofs counting towards the blocksize limit, can
we use a different miner fairness/decentralization metric/incentive? For
instance it might be reasonable for the TXO commitment proof size to be
discounted, or ignored entirely, if a proof-of-propagation scheme (e.g.
thinblocks) is used to ensure all miners have received the proof in advance.

6) How does this interact with fraud proofs? Obviously furthering dependency on
non-cryptographically-committed STXO/UTXO databases is incompatible with the
modularized validation approach to implementing fraud proofs.


# References

1) "Merkle Mountain Ranges",
Peter Todd, OpenTimestamps, Mar 18 2013,
https://github.com/opentimestamps/opentimestamps-server/blob/master/doc/merkle-mountain-range.md

2) "Do we really need a mempool? (for relay nodes)",
Peter Todd, bitcoin-dev mailing list, Jul 18th 2015,
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-July/009479.html

3) "Segregated witnesses and validationless mining",
Peter Todd, bitcoin-dev mailing list, Dec 23rd 2015,
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-December/012103.html

--
https://petertodd.org 'peter'[:-1]@petertodd.org
submitted by Godballz to CryptoTechnology [link] [comments]

Disk Digger Pro Apk || DiskDigger Importance || Recover Lost Files

These days technology is growing fast, and our smartphones, PCs, and laptops carry all kinds of files on SD cards and internal memory. We assume our files are in a safe place, but what do you do if a file is deleted unexpectedly? Don't worry - as I said, tech is growing just as fast. Here I'm going to discuss one of the best data recovery tools, DiskDigger, which is a perfect example of that. Let's get into the details of DiskDigger (deep file recovery from any drive).
DiskDigger is a tool which can recover deleted files like photos, documents, music, video and much more.
DiskDigger Features:
DiskDigger can recover lost files from most types of media that your computer can read: hard disks, USB flash drives, memory cards, CDs, DVDs, and floppy disks. (Note: to recover lost data from Android and iOS devices, make sure the device is connected via a USB port. Also important: to recover lost files on an Android phone you have to install the DiskDigger app on the phone itself. If your Android device uses a microSD card for saving data, remove the card and connect it directly to your PC using a card reader, so that you can scan it directly with DiskDigger for Windows.)
DiskDigger has two scan modes, one of which you have to choose every time you scan a disk. These are named “dig deep” and “dig deeper”.
Dig Deep:
Dig Deeper:
Advanced Features
For more information, go through the remaining articles on our site, such as hard drive data recovery, SD card data recovery, Android data recovery, USB flash drive data recovery, Linux data recovery, etc.
submitted by diskdiggerproapk to u/diskdiggerproapk [link] [comments]

How can I make test-net?

I've tried to set up testnet for mining pool testing. There's no information or seed node for testnet. I searched reddit and added a testnet node to my conf, but that node's IP is very old and not working now. Is there any up-to-date information for setting up testnet? Thanks.
(update)
My config also has testnet=1 and addnode=nz.nutty.one:20888, found by searching the community.
-- here's logs --
2018-03-12 13:38:46 Bitcoin version v0.14.2.5-6ad93ba 2018-03-12 13:38:46 InitParameterInteraction: parameter interaction: -whitelistforcerelay=1 -> setting -whitelistrelay=1 2018-03-12 13:38:46 Assuming ancestors of block ff983c72147a81ac5b8ebfc68b62b39358cac4b8eb5518242e87f499b71c6a51 have valid signatures. 2018-03-12 13:38:49 Default data directory /home/nomp/.myriadcoin 2018-03-12 13:38:49 Using data directory /home/nomp/nomp_chaindata/myriadcoin-test/testnet 2018-03-12 13:38:49 Using config file /home/nomp/nomp_chaindata/myriadcoin-test/myriadcoin.conf 2018-03-12 13:38:49 Using at most 125 automatic connections (1024 file descriptors available) 2018-03-12 13:38:49 Using 32 MiB out of 32 requested for signature cache, able to store 1048576 elements 2018-03-12 13:38:49 Using 2 threads for script verification 2018-03-12 13:38:49 scheduler thread start 2018-03-12 13:38:49 HTTP: creating work queue of depth 16 2018-03-12 13:38:49 Config options rpcuser and rpcpassword will soon be deprecated. Locally-run instances may remove rpcuser to use cookie-based auth, or may be replaced with rpcauth. Please see share/rpcuser for rpcauth auth generation. 2018-03-12 13:38:49 HTTP: starting 4 worker threads 2018-03-12 13:38:49 Using BerkeleyDB version Berkeley DB 4.8.30: (April 9, 2010) 2018-03-12 13:38:49 Using wallet wallet.dat 2018-03-12 13:38:49 init message: Verifying wallet... 2018-03-12 13:38:51 CDBEnv::Open: LogDir=/home/nomp/nomp_chaindata/myriadcoin-test/testnet/database ErrorFile=/home/nomp/nomp_chaindata/myriadcoin-test/testnet/db.log 2018-03-12 13:38:51 Bound to [::]:10898 2018-03-12 13:38:51 Bound to 0.0.0.0:10898 2018-03-12 13:38:51 Cache configuration: 2018-03-12 13:38:51 * Using 2.0MiB for block index database 2018-03-12 13:38:51 * Using 8.0MiB for chain state database 2018-03-12 13:38:51 * Using 440.0MiB for in-memory UTXO set (plus up to 286.1MiB of unused mempool space) 2018-03-12 13:38:51 init message: Loading block index... 2018-03-12 13:38:51 Opening LevelDB in /home/nomp/nomp_chaindata/myriadcoin-test/testnet/blocks/index 2018-03-12 13:38:59 Opened LevelDB successfully ... 2018-03-12 13:43:39 keypool added key 100, size=100 2018-03-12 13:43:42 keypool added key 101, size=101 2018-03-12 13:43:43 keypool reserve 1 2018-03-12 13:43:44 keypool keep 1 2018-03-12 13:43:50 wallet 282608ms 2018-03-12 13:43:50 setKeyPool.size() = 100 2018-03-12 13:43:50 mapWallet.size() = 0 2018-03-12 13:43:50 mapAddressBook.size() = 1 2018-03-12 13:43:51 UpdateTip: new best=0000017ce2a79c8bddafbbe47c004aa92b20678c354b34085f62b762084b9788 height=0 version=0x00000002 algo=0 (sha256d) log2_work=17.678071 tx=1 date='2014-02-20 06:06:33' progress=0.000003 cache=0.0MiB(0tx) 2018-03-12 13:43:51 mapBlockIndex.size() = 1 2018-03-12 13:43:51 Failed to open mempool file from disk. Continuing anyway. 2018-03-12 13:43:51 nBestHeight = 0 2018-03-12 13:43:51 torcontrol thread start 2018-03-12 13:43:51 AddLocal(x.x.2x.x:10898,1) 2018-03-12 13:43:51 Discover: IPv4 enp3s0: 175.2x.x.x 2018-03-12 13:43:51 init message: Loading addresses... 2018-03-12 13:43:51 ERROR: Read: Failed to open file /home/nomp/nomp_chaindata/myriadcoin-test/testnet/peers.dat 2018-03-12 13:43:51 Invalid or missing peers.dat; recreating 2018-03-12 13:43:52 init message: Loading banlist... ... 
2018-03-12 13:55:05 addcon thread start 2018-03-12 13:55:05 opencon thread start 2018-03-12 13:55:05 dnsseed thread start 2018-03-12 13:55:05 net thread start 2018-03-12 13:55:05 connect() to 75.19.27.27:20888 failed after select(): Connection refused (111) 2018-03-12 13:55:06 connect() to 75.19.27.28:20888 failed after select(): Connection refused (111) 2018-03-12 13:55:16 Loading addresses from DNS seeds (could take a while) 2018-03-12 13:55:17 3 addresses found from DNS seeds 2018-03-12 13:55:17 dnsseed thread exit 2018-03-12 13:55:17 connect() to 75.19.27.27:20888 failed after select(): Connection refused (111) 2018-03-12 13:55:18 connect() to 75.19.27.28:20888 failed after select(): Connection refused (111) 2018-03-12 13:55:22 connect() to 75.19.27.27:20888 failed after select(): Connection refused (111) 2018-03-12 13:55:23 connect() to 75.19.27.28:20888 failed after select(): Connection refused (111) 2018-03-12 1 ....
It has been the same ever since; the test node's block height won't increase.
submitted by trustfarmhub to myriadcoin [link] [comments]

Why am I getting exit code 1 after stopping bitcoin RPC server?

This is how I'm starting and stopping bitcoin from a service unit
[Service] ExecStart=/usbin/bitcoind -daemon=0 -datadir=/home/jsonrpc/bitcoin -conf=/home/jsonrpc/bitcoin/settings.conf ExecStop=/usbin/bitcoin-cli -datadir=/home/jsonrpc/bitcoin -conf=/home/jsonrpc/bitcoin/settings.conf stop 
And this is what I get when I stop the service:
Shutdown requested. Exiting. Interrupting HTTP RPC server Interrupting RPC Shutdown: In progress... Stopping HTTP RPC server Stopping RPC RPC stopped. scheduler thread interrupt Shutdown: done bitcoin.service: Main process exited, code=exited, status=1/FAILURE 
debug.log
2018-11-21T18:02:16Z Bitcoin Core version v0.17.0.0-ge1ed37edaedc85b8c3468bd9a726046344036243 (release build) 2018-11-21T18:02:16Z InitParameterInteraction: parameter interaction: -whitelistforcerelay=1 -> setting -whitelistrelay=1 2018-11-21T18:02:16Z Assuming ancestors of block 0000000000000000002e63058c023a9a1de233554f28c7b21380b6c9003f36a8 have valid signatures. 2018-11-21T18:02:16Z Setting nMinimumChainWork=0000000000000000000000000000000000000000028822fef1c230963535a90d 2018-11-21T18:02:16Z Using the 'standard' SHA256 implementation 2018-11-21T18:02:16Z Default data directory /home/jsonrpc/.bitcoin 2018-11-21T18:02:16Z Using data directory /home/jsonrpc/bitcoin 2018-11-21T18:02:16Z Using config file /home/jsonrpc/bitcoin/settings.conf 2018-11-21T18:02:16Z Using at most 4 automatic connections (1024 file descriptors available) 2018-11-21T18:02:16Z Using 16 MiB out of 32/2 requested for signature cache, able to store 524288 elements 2018-11-21T18:02:16Z Using 16 MiB out of 32/2 requested for script execution cache, able to store 524288 elements 2018-11-21T18:02:16Z Using 0 threads for script verification 2018-11-21T18:02:16Z HTTP: creating work queue of depth 16 2018-11-21T18:02:16Z Starting RPC 2018-11-21T18:02:16Z Starting HTTP RPC server 2018-11-21T18:02:16Z Config options rpcuser and rpcpassword will soon be deprecated. Locally-run instances may remove rpcuser to use cookie-based auth, or may be replaced with rpcauth. Please see share/rpcauth for rpcauth auth generation. 2018-11-21T18:02:16Z HTTP: starting 2 worker threads 2018-11-21T18:02:16Z Using wallet directory /home/jsonrpc/bitcoin 2018-11-21T18:02:16Z init message: Verifying wallet(s)... 2018-11-21T18:02:16Z Using BerkeleyDB version Berkeley DB 4.8.30: (April 9, 2010) 2018-11-21T18:02:16Z Using wallet wallet.dat 2018-11-21T18:02:16Z BerkeleyEnvironment::Open: LogDir=/home/jsonrpc/bitcoin/database ErrorFile=/home/jsonrpc/bitcoin/db.log 2018-11-21T18:02:16Z scheduler thread start 2018-11-21T18:02:24Z Cache configuration: 2018-11-21T18:02:24Z * Using 2.0MiB for block index database 2018-11-21T18:02:24Z * Using 8.0MiB for chain state database 2018-11-21T18:02:24Z * Using 40.0MiB for in-memory UTXO set (plus up to 286.1MiB of unused mempool space) 2018-11-21T18:02:24Z init message: Loading block index... 2018-11-21T18:02:24Z Opening LevelDB in /home/jsonrpc/bitcoin/blocks/index 2018-11-21T18:02:25Z Opened LevelDB successfully 2018-11-21T18:02:25Z Using obfuscation key for /home/jsonrpc/bitcoin/blocks/index: 0000000000000000 2018-11-21T18:03:38Z LoadBlockIndexDB: last block file = 1425 2018-11-21T18:03:38Z LoadBlockIndexDB: last block file info: CBlockFileInfo(blocks=71, size=79377521, heights=549167...549288, time=2018-11-07...2018-11-08) 2018-11-21T18:03:38Z Checking all blk files are present... 2018-11-21T18:03:47Z Opening LevelDB in /home/jsonrpc/bitcoin/chainstate 2018-11-21T18:03:47Z Opened LevelDB successfully 2018-11-21T18:03:48Z Using obfuscation key for /home/jsonrpc/bitcoin/chainstate: XXXXXXXXXXXXXXXX 2018-11-21T18:03:50Z Loaded best chain: hashBestChain=0000000000000000001d43d5aeb32c7d5158e48da84b896413e6439d09e53081 height=548521 date=2018-11-03T01:39:02Z progress=0.989162 2018-11-21T18:03:50Z init message: Rewinding blocks... 2018-11-21T18:04:22Z init message: Verifying blocks... 2018-11-21T18:04:22Z Verifying last 6 blocks at level 3 2018-11-21T18:04:22Z [0%]...[16%]...ThreadRPCServer method=stop user=deploy 2018-11-21T18:17:13Z block index 889465ms 2018-11-21T18:17:13Z Shutdown requested. Exiting. 
2018-11-21T18:17:14Z Interrupting HTTP RPC server 2018-11-21T18:17:14Z Interrupting RPC 2018-11-21T18:17:14Z Shutdown: In progress... 2018-11-21T18:17:15Z Stopping HTTP RPC server 2018-11-21T18:17:15Z Stopping RPC 2018-11-21T18:17:15Z RPC stopped. 2018-11-21T18:17:16Z scheduler thread interrupt 2018-11-21T18:17:19Z Shutdown: done 
submitted by rraallvv to Bitcoin [link] [comments]

Error when running a bitcoin core node: Corruption: not an sstable (bad magic number)

I try to run a bitcoin core node using 180 GB of blockchain data that I have in a hard drive:
bitcoind.exe --datadir=I:\Bitcoin\Bitcoin_core\Bitcoin\blockchain 
But I get the error
Error: Error: A fatal internal error occurred, see debug.log for details 
Debug.log looks like this:
2018-07-18 19:36:03 Bitcoin Core version v0.16.0 (release build) 2018-07-18 19:36:03 InitParameterInteraction: parameter interaction: -whitelistforcerelay=1 -> setting -whitelistrelay=1 2018-07-18 19:36:03 Warning: Reducing -maxconnections from 9999 to 1015, because of system limitations. 2018-07-18 19:36:03 Assuming ancestors of block 0000000000000000005214481d2d96f898e3d5416e43359c145944a909d242e0 have valid signatures. 2018-07-18 19:36:03 Setting nMinimumChainWork=000000000000000000000000000000000000000000f91c579d57cad4bc5278cc 2018-07-18 19:36:03 Using the 'sse4' SHA256 implementation 2018-07-18 19:36:03 Using RdRand as an additional entropy source 2018-07-18 19:36:05 Default data directory C:\Users\Pedro FR\AppData\Roaming\Bitcoin 2018-07-18 19:36:05 Using data directory G:\Bitcoin\Bitcoin_core\Bitcoin\blockchain 2018-07-18 19:36:05 Using config file G:\Bitcoin\Bitcoin_core\Bitcoin\blockchain\bitcoin.conf 2018-07-18 19:36:05 Using at most 1015 automatic connections (2048 file descriptors available) 2018-07-18 19:36:05 Using 16 MiB out of 32/2 requested for signature cache, able to store 524288 elements 2018-07-18 19:36:05 Using 16 MiB out of 32/2 requested for script execution cache, able to store 524288 elements 2018-07-18 19:36:05 Using 4 threads for script verification 2018-07-18 19:36:05 scheduler thread start 2018-07-18 19:36:05 libevent: getaddrinfo: nodename nor servname provided, or not known 2018-07-18 19:36:05 Binding RPC on address ::1 port 8332 failed. 2018-07-18 19:36:05 HTTP: creating work queue of depth 16 2018-07-18 19:36:05 Config options rpcuser and rpcpassword will soon be deprecated. Locally-run instances may remove rpcuser to use cookie-based auth, or may be replaced with rpcauth. Please see share/rpcuser for rpcauth auth generation. 2018-07-18 19:36:05 HTTP: starting 4 worker threads 2018-07-18 19:36:05 Using wallet directory G:\Bitcoin\Bitcoin_core\Bitcoin\blockchain 2018-07-18 19:36:05 init message: Verifying wallet(s)... 2018-07-18 19:36:05 Using BerkeleyDB version Berkeley DB 4.8.30: (April 9, 2010) 2018-07-18 19:36:05 Using wallet wallet.dat 2018-07-18 19:36:05 CDBEnv::Open: LogDir=G:\Bitcoin\Bitcoin_core\Bitcoin\blockchain\database ErrorFile=G:\Bitcoin\Bitcoin_core\Bitcoin\blockchain\db.log 2018-07-18 19:36:05 Cache configuration: 2018-07-18 19:36:05 * Using 56.2MiB for block index database 2018-07-18 19:36:05 * Using 8.0MiB for chain state database 2018-07-18 19:36:05 * Using 385.8MiB for in-memory UTXO set (plus up to 2861.0MiB of unused mempool space) 2018-07-18 19:36:05 init message: Loading block index... 2018-07-18 19:36:05 Opening LevelDB in G:\Bitcoin\Bitcoin_core\Bitcoin\blockchain\blocks\index 2018-07-18 19:36:05 Opened LevelDB successfully 2018-07-18 19:36:05 Using obfuscation key for G:\Bitcoin\Bitcoin_core\Bitcoin\blockchain\blocks\index: 0000000000000000 2018-07-18 19:36:29 LoadBlockIndexDB: last block file = 1259 2018-07-18 19:36:29 LoadBlockIndexDB: last block file info: CBlockFileInfo(blocks=215, size=130126593, heights=522250...522561, time=2018-05-11...2018-05-13) 2018-07-18 19:36:29 Checking all blk files are present... 
2018-07-18 19:36:29 LoadBlockIndexDB: transaction index enabled 2018-07-18 19:36:29 Opening LevelDB in G:\Bitcoin\Bitcoin_core\Bitcoin\blockchain\chainstate 2018-07-18 19:36:30 Opened LevelDB successfully 2018-07-18 19:36:30 Using obfuscation key for G:\Bitcoin\Bitcoin_core\Bitcoin\blockchain\chainstate: f686f3dcea7b64ab 2018-07-18 19:36:30 Loaded best chain: hashBestChain=0000000000000000003f9855f4abee6286e4ff677d366db7b0f24c414caf49cc height=518349 date=2018-04-15 17:14:57 progress=0.916016 2018-07-18 19:36:30 init message: Rewinding blocks... 2018-07-18 19:36:36 Corruption: not an sstable (bad magic number) 2018-07-18 19:36:36 *** System error while flushing: Database corrupted 2018-07-18 19:36:36 Error: Error: A fatal internal error occurred, see debug.log for details 2018-07-18 19:36:36 Shutdown requested. Exiting. 2018-07-18 19:36:36 Shutdown: In progress... 2018-07-18 19:36:36 scheduler thread interrupt 2018-07-18 19:36:36 Corruption: not an sstable (bad magic number) 2018-07-18 19:36:36 *** System error while flushing: Database corrupted 2018-07-18 19:36:36 Error: Error: A fatal internal error occurred, see debug.log for details 2018-07-18 19:36:36 Corruption: not an sstable (bad magic number) 2018-07-18 19:36:36 *** System error while flushing: Database corrupted 2018-07-18 19:36:36 Error: Error: A fatal internal error occurred, see debug.log for details 2018-07-18 19:36:36 Shutdown: done 
I don't remember messing around with the blockchain data. I think the error first occurred when I was trying to run a node on a Linux machine; I get the same error on both devices. It would be great if I didn't have to download the blockchain all over again.
submitted by johnturtle to BitcoinBeginners [link] [comments]

Bitcoin dev IRC meeting in layman's terms (2015-10-22)

Once again my attempt to summarize and explain the weekly bitcoin developer meeting in layman's terms. Link to last weeks summarization
Disclaimer
Please bear in mind I'm not a developer and I'd have problems coding "hello world!", so some things might be incorrect or plain wrong. Like any other write-up it likely contains personal biases, although I try to stay as neutral as I can. There are no decisions being made in these meetings, so if I say "everyone agrees" this means everyone present in the meeting, that's not consensus, but since a fair amount of devs are present it's a good representation. The dev IRC and mailinglist are for bitcoin development purposes. If you have not contributed actual code to a bitcoin-implementation, this is probably not the place you want to reach out to. There are many places to discuss things that the developers read, including this sub-reddit.
link to this week logs Meeting minutes by meetbot
Main topics discussed were:
Mempool Memory Usage
LevelDB replacement
Median Past locktime & CLTV
Short topics/notes
BIP 9 Versionbits PR #6816 is ready for implementation and needs more reviews.
A 3 month moderation period on the bitcoin-dev mailing list has started, as well as a new list, bitcoin-discuss. More details: http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-October/011591.html
"bitcoin.org had incorrect release notes for 0.11.1. It's corrected now. They had posted the release notes for the initial RC and not updated them. Process wise it would be good to watch out for that in the future."
Mempool Memory Usage
When a transaction is relayed across the network it is held by the nodes in memory, until it gets into a block. All these transactions that sit in memory are called the memory pool, or mempool for short. As we could see during the spam attack, if there's a big backlog of transactions that couldn't make it into the blockchain, this mempool can get pretty big, resulting in nodes crashing.
To stop this from happening devs created a mechanism to reject and/or remove transactions from the mempool. This mempool limiting got merged this week.
Also relevant: There is an already existing limit on the database cache size called "dbCache". The default value for that is 100MB.
Testing shows there's a discrepancy between the configured mempool limit and the actual memory usage. This is caused by the amount of UTXO data when processing transactions. This data is only flushed after a block is processed (so temporarily exceeding the cache limit set in dbCache).
There are 2 "obvious" solutions for this:
  1. Always enforce the UTXO cache limit, just like the mempool limit is always enforced. Downside for that is if you misconfigure your mempool limit an attack can blow away your UTXO cache, which significantly slows down validation and propagation.
  2. Take the UTXO cache into account when limiting the mempool. Downside for that is that you could construct transactions which require way more cache space and thereby more easily kick out other transactions.
A more optimal solution would be to give priority in the cache to things in the mempool. Ways to achieve that are to kick out of the cache the UTXO's from transactions that are evicted from the mempool, and from transactions that never made it into the mempool. This is something TheBlueMatt is working on.
Continue to research and optimize.
LevelDB replacement
LevelDB is the database system currently used in bitcoin. Since it has not been maintained for some time, devs are looking for replacements.
jgarzik worked on a patch for SQLite. Some people expressed concerns about whether the performance will be good enough with SQLite, but there are no benchmark results yet.
Do research into other options. Do lots of benchmarks and report results.
Median Past locktime & CLTV
When a block is created miners include a timestamp. This timestamp has to be between the median of the previous 11 blocks and the network-adjusted time +2 hours. So this timestamp can vary a decent amount from the real time. With the introduction of lock-time transactions, that are only valid after a certain time, miners are incentivised to lie about the time in order to include time-locked transactions (and their fees) that wouldn't otherwise be valid. BIP 113 enables the usage of GetMedianTimePast (the median of the previous 11 blocks) from the prior block in lock-time transactions to combat this behaviour. Users can compensate for this by adding 1 hour (6 blocks) to their lock times.
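For reference, GetMedianTimePast is simply the median of the previous 11 block timestamps; a small sketch of that rule (my own illustration of the concept described above):

    def median_time_past(prev_11_block_times):
        """BIP 113: lock-time checks compare against the median of the previous
        11 block timestamps (GetMedianTimePast) rather than the block's own,
        possibly miner-skewed, timestamp."""
        times = sorted(prev_11_block_times)
        return times[len(times) // 2]     # 11 values -> the 6th, i.e. the median

    # Example: timestamps drifting around real time still give a stable median.
    print(median_time_past([1445500000 + 600 * i for i in range(11)]))

Because this median lags wall-clock time, the summary above notes that users can adjust their lock times by roughly an hour (about 6 blocks) to compensate.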
CLTV stands for CheckLockTimeVerify, BIP65. Commonly referred to as: how you thought nLockTime worked before you actually tried to use it.
CLTV is ready to be merged (and has been merged at time of writing). Questions remain of whether to add median past locktime as mempool-only or as a softfork, and overall what to include in the CLTV deployment, what to include as mempool-only and what as softfork. Median past locktime violates current 'standard' behavior, so we would prefer to have that violation dead in the network before the median past locktime softfork moves forward.
Review BIP-113: Mempool-only median time-past as endpoint for lock-time calculations. Review the CLTV backports (done and merged at time of writing). Backport median past locktime to 0.10 and 0.11.
Participants
btcdrak (btcdrak), sipa (Pieter Wuille), gmaxwell (Gregory Maxwell), BlueMatt (Matt Corallo), morcos (Alex Morcos), petertodd (Peter Todd), CodeShark (Eric Lombrozo), jgarzik (Jeff Garzik), maaku (Mark Friedenbach), kanzure (Bryan Bishop), jcorgan (Johnathan Corgan), Luke-Jr (Luke Dashjr), jonasschnelli (Jonas Schnelli), sdaftuar (Suhas Daftuar)
submitted by G1lius to Bitcoin [link] [comments]

Ulord Project Progress (From Dec 13 to Dec 19, 2018)

Ulord Project Progress (From Dec 13 to Dec 19, 2018)
Ulord project progress includes the progress from the underlying layer team, the platform team, the application team, and test team.
The Ulord public blockchain development work mainly focused on the testnet deployment of the sidechain UOS and stress testing, producer node optimization, alliance function improvement, the code writing of UDFS node storage, the developer community V1.2.0 and Uwallet V2.0.

Technical progress

Development progress
1. A function allowing the uosio user to directly revoke the voting history of a producer node was added, to prevent nodes that do not perform block production from being elected as producer nodes.
  2. The sidechain UOS testnet was redeployed, and the configuration of the mongodb database was optimized, including modifying the cache queue size and optimizing the data storage method.
  3. Stress testing of the UOS testnet was carried out, and the results were summarized to provide a reference for UOS node deployment.
  4. Double sorting of transaction data is now used in the exchange between UT and UOS to ensure data consistency.
  5. The problem of network reconnection between the alliance and the full node was solved. After a disconnection, the alliance will continuously access all the producer nodes and automatically select other producer nodes to connect to.
  6. The alliance proposal initiation was optimized, and a delay before proposal initiation was added to prevent database update inconsistencies caused by proposals being initiated too quickly.
  7. The documents for deploying sidechain UOS nodes were organized. Users can configure their own UOS nodes according to the documentation.
  8. The first phase of the Umaster system was tested to provide storage node support for UDFS.
  9. The code for UDFS node storage traffic data was completed, laying the foundation for subsequent scoring rewards based on traffic data.
  10. Development of the UDFS system's blacklist interface was completed. After review, the blacklist can be sent to UDNS to delete and block content across the entire network.
Peripherals
  1. The development of Wallet V2.0 was finished and it is now in test:
a. The formal server configuration and service setup was completed.
b. The account interface encryption processing was improved to improve system security.
c. The test of the exchange between SUT and UT on their own official chain was finished.
d. The function test of the current version was completed.
  2. Developer community V1.2.0 was completed and is now in test.
  3. Function development and interface testing of the UOS faucet were finished.
  4. The first phase of the UOS block explorer revision was discussed, and development of the static homepage interface was completed.
  5. The Ulord content management system was modified.
  6. Front-end data integration for the information module was completed, including the latest news, project progress, and official announcements.

Operation Progress

On Dec 14, Ulord New York office and the strategic partner HOMEBLOC jointly launched the preparations for New Year Eve 2019 - The Genesis Block: Bitcoin's 10 Year Anniversary, which will be held on Dec 31 at 9:00 PM EST in at Mykonos Blue Rooftop, 127 West 28th Street, New York City.

On Dec 19, Ulord CEO Dam Woods was invited to attend the “Innovation Leads Rising” 2018 Changsha Science and Technology Annual Meeting on AI, blockchain, and big data. Dam Woods delivered a speech titled “From Internet to Blockchain” and discussed technology development with the technical leaders.

Terminology

UOS faucet: it is a window for getting UOS test coins and supports the creation of testnet accounts. Users can receive test coins at regular intervals.
UDNS: resources on the blockchain and in UDFS are usually identified by a string of 34 characters, which is neither easy to remember nor convenient to use. In the Ulord design, UDNS provides users with a decentralized domain name service.
submitted by ulordchain to u/ulordchain [link] [comments]

System Update summary

Summary of key updates to Malaysia 2018 operating system:
full changelog.pdf available from government server - warning: 40Gb+ file size
Service Pack 3.0
Please help to list more key changes in the comment section
submitted by Bobalob2018 to malaysia [link] [comments]

Need help with Bitcoin Core.

I cannot get the bitcoin blockchain to download on my computers. I've tried two different computers over the past 4 days and I cannot get my wallet to open. Just recently I made it to 14 weeks behind when Core reported that a fatal error occurred. The debug file says the following:
2016-10-25 20:17:53 Bitcoin version v0.13.0
2016-10-25 20:17:53 InitParameterInteraction: parameter interaction: -whitelistforcerelay=1 -> setting -whitelistrelay=1
2016-10-25 20:17:53 GUI: "registerShutdownBlockReason: Successfully registered: Bitcoin Core didn't yet exit safely..."
2016-10-25 20:17:53 Default data directory C:\Users\Admin\AppData\Roaming\Bitcoin
2016-10-25 20:17:53 Using data directory D:\
2016-10-25 20:17:53 Using config file D:\bitcoin.conf
2016-10-25 20:17:53 Using at most 125 connections (2048 file descriptors available)
2016-10-25 20:17:53 Using 2 threads for script verification
2016-10-25 20:17:53 Using BerkeleyDB version Berkeley DB 4.8.30: (April 9, 2010)
2016-10-25 20:17:53 scheduler thread start
2016-10-25 20:17:53 Using wallet wallet.dat
2016-10-25 20:17:53 init message: Verifying wallet...
2016-10-25 20:17:53 CDBEnv::Open: LogDir=D:\database ErrorFile=D:\db.log
2016-10-25 20:17:53 Bound to [::]:8333
2016-10-25 20:17:53 Bound to 0.0.0.0:8333
2016-10-25 20:17:53 Cache configuration:
2016-10-25 20:17:53 * Using 2.0MiB for block index database
2016-10-25 20:17:53 * Using 8.0MiB for chain state database
2016-10-25 20:17:53 * Using 990.0MiB for in-memory UTXO set
2016-10-25 20:17:53 init message: Loading block index...
2016-10-25 20:17:53 Opening LevelDB in D:\blocks\index
2016-10-25 20:17:53 Opened LevelDB successfully
2016-10-25 20:17:53 Using obfuscation key for D:\blocks\index: 0000000000000000
2016-10-25 20:17:53 Opening LevelDB in D:\chainstate
2016-10-25 20:17:53 Opened LevelDB successfully
2016-10-25 20:17:53 Using obfuscation key for D:\chainstate: 8bc5520d7531c59a
2016-10-25 20:17:59 LoadBlockIndexDB: last block file = 601
2016-10-25 20:17:59 LoadBlockIndexDB: last block file info: CBlockFileInfo(blocks=140, size=112338215, heights=425371...425882, time=2016-08-15...2016-08-19)
2016-10-25 20:17:59 Checking all blk files are present...
2016-10-25 20:17:59 LoadBlockIndexDB: transaction index disabled
2016-10-25 20:17:59 LoadBlockIndexDB: hashBestChain=000000000000000001af076ed997ca6e3f0dc286b76214c80ab7ab4708d69ac8 height=425042 date=2016-08-13 17:10:24 progress=0.964667
2016-10-25 20:17:59 init message: Rewinding blocks...
2016-10-25 20:18:00 Corruption: block checksum mismatch
2016-10-25 20:18:00 *** System error while flushing: Database corrupted
2016-10-25 20:18:03 Aborted block database rebuild. Exiting.
Can anyone help me? I'm beyond frustrated at this point. Thanks in advance.
*edit - I'm now at 9 weeks and keep on getting this:
2016-10-25 22:32:08 LevelDB read failure: Corruption: block checksum mismatch
2016-10-25 22:32:08 Corruption: block checksum mismatch
2016-10-25 22:32:46 Error reading from database: Database corrupted
So I finally got it to sync. I'm not sure if I did anything different, I just kept on restarting my computer after each time it crashed and it finally got current. I have no idea what changed as I did not change any settings or run core from the command line. I'm completely lost with this piece of software.
Thanks to everyone for your help! I super appreciate it.
submitted by richielaw to Bitcoin [link] [comments]

Blockchain to fix horribly broken e-mail system like it is today?

E-mail as it is, is horribly broken. Horrendously broken.
It wasn't that many years ago that you could be assured your e-mail reached whoever you were mailing. Today it is a mere suggestion that perhaps this should be delivered to this person, at least for any automated e-mail. This seems to be creeping into manual, organic email as well. Hell, we are seeing even internal e-mails - organic, human-written conversations! - being flagged by spamassassin as spam. In that instance, the spamassassin setup is also maintained by one of the largest hosting providers in the world...
Hotmail/MS services have for years (at least about 4 years now!) been silently dropping email - not all, but some. There's been a bit of relief lately, as they have started to favor marking as spam rather than silently dropping.
I know, most email users don't see this problem, but those who use email a lot to do their work, and those who need to send automated emails (say, welcome e-mails for a service) this is a big problem. (Disclaimer, for us, our niche of hosting probably causes flagging as well. Our site is blocked by many corporate firewalls for example)
Blockchain to the rescue?
This is an idea I've been toying around with for a few years. What if every single e-mail cost a fraction of a cent, and whoever receives the e-mail gets paid for it? Now that would solve a lot of problems. I realize there have been some half-assed attempts at blockchain based e-mail, but they are about replacing email (never going to happen). Using blockchain to enhance the current experience, with minimal friction, should be the goal, not re-inventing the wheel.
Imagine a say 0.01 cent (0.0001 USD) cost per e-mail. This price would not be cost prohibitive even for free e-mail service providers (Ad revenue etc. should exceed this value), never mind any legit e-mail users. Especially considering you get paid for receiving. So all legit e-mail services would work rather well regardless of the cost. (never mind free email service could profit from this)
Spam however? To send 1 million emails you would need to pay 100$. How many spammers would continue doing so? At the very least it makes things much harder: it's not so easy to use a botnet to send your email when you need to distribute your private key(s) to the botnet, or build some kind of private key management system - it all gets more complicated.
Small business newsletters? Say you need to send 100k e-mails to legit customers, 10$ is nothing. To human time crafting that newsletter is order (possibly orders) of magnitude greater than that.
Price would also fluctuate as per the market. The most difficult thing would probably be setting the self balancing mechanisms to keep per mail cost sensible. As such, the biggest hurdle in this might not be technical at all.
Technically, how could this work?
The sender sends a TX for the e-mail they are sending to the recipient. This TX contains a message with the mail ID, and a segment which can be used together with the email contents to unlock the private key for the payment. This way it is verified that the recipient mail server receives and reads the email. Once the recipient server has calculated the private key, they can either TX the received sum to their own wallet, or this needs to be formatted so that once the sender has sent it, they cannot recover the private key and double spend (technical hurdle A; for someone who knows their stuff, unlikely to be a major hurdle).
Step by step repeat:
* Sender checks if recipient has "MailCoin" capability
* Sender sends TX to recipient
* Sender sends the email to recipient
* Recipient notices from the mail header (say x-mailcoin-tx: TXID_HERE) that this is a "mailcoin" mail
* Recipient checks whether the TX has been received
* Recipient puts the mail on the delivery queue; antispam is instructed to apply a heavy negative score (MTA admin configurable)
* Recipient claims the value of the TX (this is hurdle A). Recipient can only claim the TX value if they have received the full e-mail. (Question: can this step be pushed even further down the delivery chain, but still remain MTA-only, without mail client support?) The most likely solution is that the header contains the encrypted private key, and the chain TX contains the key to decrypt that private key to claim the coins, or vice-versa? (A rough sketch of this hand-off follows below.)
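One way the "encrypted private key in the header" hand-off could work - purely my own sketch of the idea above, with a toy cipher standing in for a real one and all names hypothetical:

    import hashlib

    def keystream(key: bytes, length: int) -> bytes:
        """Cheap SHA256-counter keystream; a real implementation would use a
        proper AEAD cipher, this only keeps the sketch dependency-free."""
        out, counter = b"", 0
        while len(out) < length:
            out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
            counter += 1
        return out[:length]

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    # Sender side: lock the payment's private key to the full e-mail contents.
    def make_mailcoin_header(payment_privkey: bytes, email_body: bytes) -> bytes:
        k = hashlib.sha256(email_body).digest()
        return xor(payment_privkey, keystream(k, len(payment_privkey)))

    # Recipient MTA side: only a server that received the complete message can
    # reconstruct the key and claim the coins.
    def claim_payment_key(header_blob: bytes, received_body: bytes) -> bytes:
        k = hashlib.sha256(received_body).digest()
        return xor(header_blob, keystream(k, len(header_blob)))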
Once recipient has the email & payment, they simply mark on their Antispam a automatic lower score and deliver it normally.
E-mail server side we have several components:
The most typical scenario would be that the recipient server works as the outgoing server as well, with a single wallet. So depending on your mail volume - whether you send or receive more on that wallet - you might never need to worry about the coins (except for the value going sky-high and having like 10k $ worth of "MailCoins").
So perhaps additional components on per use case are needed, or more likely rudimentary scripting capability (ie. "MailCoin" daemon api) to keep the balances in check.
Technical hurdle B: This needs to be super, super simple to set up - or offer sufficient financial incentive. One would need to develop standard components & configs for exim, postfix, and other MTAs. In fact, make it autogenerate the wallet ID etc., and make it easy to replace or import private keys to put in coins for sending if you need to.
Privacy: On the blockchain you would not see the e-mail contents, only that an e-mail likely took place (a TX with a mail UUID) to the recipient. Whether the sender can be identified depends on them - on whether their addresses can be traced back to who they are. Automatic mixers? :) The recipient can also keep cycling the receive addresses to keep things private if they want to.
The biggest problem i see here, is that if an attacker can deduce the sender and/or recipient, it might to lead to some issues out of the scope of technical solutions. If attacker could read the emails, they would already have accomplished MitM and could just grab all e-mails.
The default implementation should be such that an outsider cannot deduce the recipient server or hostname from the recipient address.
Also, if an attacker gains access to your mail with full headers, they could see the TXs in the blockchain. The MTA might need to scrub mailcoin-related headers (yuck, scrubbing headers ....) for paranoid users, but the most likely solution is that the recipient retransmits those mailcoins as soon as they have the private key for the balance.
Blockchain: Blocks need to be produced every 10 seconds or so; it needs to be fast. Preferably even every 5 seconds, so as not to cause any undue delay. Then again, if your application relies on receiving email within seconds, one should consider another means of communicating. Imho, email should be considered a little bit like snail mail, but at internet pace: a couple of minutes' delay is just OK.
Block size, given the e-mail volume, needs to be fairly large as well, considering the time between blocks. This is technical hurdle C: hosting the full blockchain. I can easily foresee that this would grow to be terabytes in size. However, any large email operator would have a vested interest in ensuring smooth operation of the blockchain, and for them, running a full node would have negligible cost.
(Technical hurdle C) Single email sent using the system could easily have TX contents of 100 bytes + TX headers + block headers etc. Say 100 bytes, and 100 million emails per day: 9.31GiB per day, 3 399GiB per year, 5 years later: 16.60 TiB just for the mail TXs.
Some estimate there are 200+ billion emails per day, but we all know a large portion of this is spam. But even at 50 billion emails a day, 100 bytes per mail TX would add up to 4.55TiB per day! So optimizing the blockchain size is obviously going to be important. The volume will obviously be much smaller, as semi-spam (those daily half-opt-in spamvertising mails from companies you know) will be lower as well. So probably 100+ billion emails per day at 100% adoption.
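Those growth figures are easy to reproduce (the 100-bytes-per-mail figure is the assumption used above):

    def chain_growth(mails_per_day, bytes_per_mail=100):
        per_day_gib = mails_per_day * bytes_per_mail / 2**30
        return per_day_gib, per_day_gib * 365

    print(chain_growth(100e6))   # ~9.3 GiB/day, ~3,400 GiB/year
    print(chain_growth(50e9))    # ~4,657 GiB/day, i.e. ~4.55 TiB/day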
Blockchain should then be compressed, the whole block. Algorithm probably should favor speed over compression rate, and should be task specifically optimized (needs a simple reference release, where you can just stream the block contents into it and get output as compressed or uncompressed). The more compression there is, the more full nodes will be hosted by smaller operators :)
For large e-mail server clusters there should be central store for the blockchain, but this can be accessed on the system administratoconfig level already. The MTA components will just remotely talk to single full node daemon (so not really different from many implementations in existence right now), instead of each one running locally a full node.
At today's cheapest hosting rates 16.60TiB is roughly around 85-100€ a month. Purchase cost per 8TB drive is around 230€ mark right now, externals are cheaper. Not an issue for any even semi serious mail provider. Not even issue for datahoarder individuals.
However at 100 billion mails per day: 9.09TiB per day added, which is prohibitively large! We should be targeting something like 20bytes per mail final storage spent, or even less.
If it looks like it is going to grow really large, full node needs to have configurable multiple storages, so they can store parts of the blockchain on multiple different devices (ie. individual might choose to have it on 4 different external drives).
Filesystem-side optimizations are needed as well, but these are fairly simple: just split into multiple subdirectories by 10 thousand blocks or so, i.e. 1 for blocks 1-10k, 2 for blocks 10,001 to 20k, etc. Filesystems get exponentially slower the more files there are per directory. 10k might start to show some slowdown, but it is not significant yet.
Nodes could also implement secondary compression (compress multiple blocks together), if the blockchain starts to become stupid large. If it starts to become impossible to maintain, we could possibly implement a scrubbing methodology, where very old blocks get the TX contents wiped as they are not necessary anymore. Should not be an issue
Blocks with 10second target generated per annum: 3 153 600 Mails per 10second: 115 740 e-mails per 10second block. Final compressed size (say 20 bytes per mail): 2.20MiB + headers etc. per block Let's start small and allow linear growth to this, say 0.1% per day (36.5% annual) and start from 20k / 512KiB. After 3 years: 41.9k / 1072.64KiB per block, After 10 years: 93k / 2380.8KiB. (2027 we should have HDDs in the size of 30TB and daily max size for chain growth is 19.61TiB)
On the positive side every problem is an opportunity in disguise. If the blockchain is large, once again botnets will have a hard hard time to spamming, they can't host the full blockchain on infected machines. They will need to develop centralized mechanisms on this regard as well. One method i can see is by having TOR client built in, and via .onion domain to anonymize, but this is two way street, security researchers could exploit this (see above about the private keys) as well. Even without botnets, spammers will need to dedicate significant resources to host the full blockchain.
On the flip side, if a spammer also has a mining operation on the same local area network, they have both the income from mailcoins + the full blockchain, and could leverage economies of scale, but this too would increase cost. And after all: this is all about increasing the cost of spamming, while keeping the price in a range where for real e-mail users and real businesses it is not a significant impact, or may even be an income source.
Client side
Zero, Nada changes. No changes to outlook, thunderbird etc. Everything works under the hood at the MTA level. Very easy adoption for the end user. Everything is in the backend, server side.
Economics for users
Cost of operation has above been shown to increase wildly for spammers. But how about normal use cases?
Joe Average: They receive e-mail a lot more than they send, all kinds of order confirmations, invoices, newsletters and other automated e-mail. They will actually earn (however tiny amounts) from using this system. So for the masses, this is a good thing, they will see the earning potentials! which brings us to ....
New business opportunities! I could foresee a business setting up spam traps: the more e-mail you receive, the more you earn! So it pays to get your receiving address onto spam lists. You don't ever need to read these, just confirm receipt of them. All of a sudden we could see even greater numbers of invalid e-mail addresses in spam lists, making spamming ever more expensive!
Free email services might prove to be extremely profitable, to the point of potential revenue sharing with Joe Averages (and the spam traps above). Because free email is mostly Joe Averages, they will have a greater influx than outgoing. The caveat is that free email needs to have limits, but due to the low cost and potential earnings, they could implement a "mail credits" system: the base is something like 20 emails a day, but each received email could increase this credit limit. As such, it actually makes sense for free email services to implement this at the very least on the receiving side.
Business mass emailings. A business which has 100k valid e-mails on their database will not have a problem with paying few dozen bucks to have their mass mailing delivered. BUT they will make extra sure the content is good and targeted, something the recipient wants to receive. These will be the biggest spenders on email, apart from spammers.
ISPs, hell, they get paid to provide e-mail. And they are in the same spot as the free email service providers: they stand to earn more than they spend!
Blockchain economics
This is where things might get interesting, there is so much potential.
However, there are several things that definitively should not be done:
1 & 2 are easy: just do not mine outside of testnet prior to launch. (If devs get paid by companies there is a conflict of interest as well, but let's not get into that right now.)
3: Miners and/or full node maintainers decide what goes on. Probably miners, the way Bitcoin is supposed to work.
4: Infinite & preferential supply: no after-the-launch "contracts" etc. to give coins to preferred parties; it should remain as at launch unless majority consensus decides on a change. Proof of stake is a gray area imho, but then again proof of work is also "the rich get richer".
Mining: The storage requirement is a blessing in disguise. The massive storage required for this to function means there will be no single hardware developer who sells all the shovels without having significant other markets, i.e. WD, Seagate and Toshiba are the main players.
This means the algo needs to be based on the full blockchain being hosted. The hashing needs to be such that GPUs are king, most likely, since almost anything good for CPUs is also doable on GPUs. Eventually someone will likely come up with an ASIC alternative, but due to the masses of data it WILL require high bandwidth and high memory. Nothing like current Bitcoin ASICs, which need low bandwidth and no memory. There need to be some expensive commodity components in there (RAM, storage), and as such GPUs are the most likely candidate, and the bottleneck will likely not be computation but I/O bandwidth.
Quickly thinking, the previous block could include the list of blocks to be included in the next one for verification, in a highly compressible format. Let's say the difficulty is the number of blocks to be hashed, or the number of blocks to be included can be calculated from the difficulty. The previous block's miner just chooses random blocks to be included in the next one, listing e.g. 10 series of blocks, where a series can include instructions: it could request block #5729375+100, or #357492+500 with stepping 5 (every 5th block). Hell, the random generator could use the last block as the seed for the next one, to make it deterministic YET random as the emails and TXs change. (WTF, did I just solve how the algo needs to work?!?) The only blocks which would differ are the first few, and obviously Genesis, for which an "empty" block would be what is hashed.
The hashing algo could be SHA256, because of the high requirement for streaming data and because most ASIC miners are lacking in bandwidth (in fact, it could be made compatible with Bitcoin, but only ASICs with I/O bandwidth higher than the miner's storage/RAM I/O bandwidth could actually boost the perf).
Different hashable list operations could be (in the list of blocks to be hashed for the next one):
* Single block
* Block # + number of blocks
* Block # + number of blocks with stepping
* Block # + number of blocks chosen at random, using each hashed block as the seed for choosing the next one (makes prefetch, preread and caching not work efficiently)
* Number of previous blocks mined (i.e. the 50 last blocks)
* The above, but with a stepping operator
* The above, but choosing the next X blocks at random, with variations based on the last hashed block or the sum of the hashed blocks
* All random pickers would have operation modes for the seed to be used: the hashed sum, the whole block, the block contents, or the block header
These modes would ensure the blocks are there, and make mining depend on many variable factors: RAM speed, I/O seek time, I/O bandwidth.
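A minimal sketch of how I imagine this deterministic-yet-random picker could work, covering a few of the modes above (all function names and parameters are my own invention for illustration, not a spec):

```python
import hashlib
import random

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def pick_blocks(prev_block_hash: bytes, chain_height: int,
                difficulty_count: int, fetch_block) -> list:
    """Decide which old blocks the next miner must read and hash.

    prev_block_hash seeds the picker, so every node derives the same list
    (deterministic), yet it changes with every block (random-looking).
    fetch_block(height) -> bytes must return the full stored block, which is
    the point: you cannot produce the hashes without hosting the chain.
    """
    rng = random.Random(prev_block_hash)   # deterministic seed from last block
    picks = []

    # Mode: a contiguous range (block # + number of blocks)
    start = rng.randrange(chain_height)
    picks += [h % chain_height for h in range(start, start + difficulty_count)]

    # Mode: a stepped range (every Nth block)
    start, step = rng.randrange(chain_height), rng.choice([2, 3, 5])
    picks += [(start + i * step) % chain_height for i in range(difficulty_count)]

    # Mode: seed-chained picks -- each fetched block seeds the next pick,
    # so prefetching and caching ahead of time do not help.
    h = rng.randrange(chain_height)
    for _ in range(difficulty_count):
        picks.append(h)
        h = int.from_bytes(sha256(fetch_block(h)), "big") % chain_height

    return picks
```

The seed-chained mode is the one that forces real random access to stored blocks; the range modes mostly stress sequential I/O bandwidth.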
This way we have proof that the miner has efficient access to those blocks and that the full blockchain is stored there, even if it is not practically retrievable from him/her over the internet for others to obtain a copy. HOWEVER, due to the data volumes, I think it is a given they have fast access, but a miner would probably prefer not to share their blockchain contents, to keep bandwidth free for their mining, as the deadlines are tight. It could be built into the full node spec that nodes do not accept new blocks from sources which are not ready to supply any given block, with perhaps even periodic tests of this. However, this would be unenforceable if people start running custom-coded nodes which disable it, as it is not part of the blockchain calculation. It is not in a miner's interest to "waste" precious bandwidth serving others the vast blockchain, while it is in end users' interest (those running full nodes without mining) to get blocks fast. So an equilibrium might be reached: if miners start losing out because other miners will not share their blocks, they will start offering them, even if prioritized.
At 2 MiB blocks and a 10-second deadline, a miner would preferentially want the new block within 500 ms, which would be barely sufficient time for a round trip across the globe. 500 ms for 2 MiB is a 4 MiB/s inbound transfer rate, and when you find a block you want it out even faster, say within 250 ms, for which you'll need an 8 MiB/s burst, which very very few have at home. At the more usual 1 MiB/s it would take 2 seconds to submit your new block. On the other hand, if you found the block, you'd have immediate access to begin calcing the next one.
Block verification needs to be fast, and as such the above difficulty setting alone is not sufficient; there needs to be a nonce. Just picking the right blocks is no guarantee there will be a match, so a traditional nonce most likely needs to be set as well. As such, a lot of maths needs to be done to ensure this algorithm does not have dead ends, yet ensures that certain blocks need to be read in full and stored fully by the miners; just plain hashes of the blocks are not sufficient.
Perhaps it should be block data + nonce, then all the block hashes (with nonce, or a pre-chosen salt), and the combined hash of the block to be generated plus nonce needs to have a certain number of leading zeroes. Needs testing and maths :)
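Roughly, the idea as I read it could be sketched like this (purely illustrative; the 20-bit target, the 8-byte nonce and the function names are arbitrary assumptions of mine):

```python
import hashlib

def meets_target(digest: bytes, zero_bits: int) -> bool:
    """True if the digest starts with at least `zero_bits` zero bits."""
    return int.from_bytes(digest, "big") >> (len(digest) * 8 - zero_bits) == 0

def mine(new_block_data: bytes, required_block_hashes: list,
         zero_bits: int = 20, max_tries: int = 10_000_000):
    """Search for a nonce such that
    SHA256(new block data || hashes of the required old blocks || nonce)
    meets the leading-zero target. Returns the nonce, or None if not found."""
    prefix = new_block_data + b"".join(required_block_hashes)
    for nonce in range(max_tries):
        digest = hashlib.sha256(prefix + nonce.to_bytes(8, "big")).digest()
        if meets_target(digest, zero_bits):
            return nonce
    return None
```

Verification stays cheap: a verifier only needs the listed old-block hashes (which it either has stored or can request) plus one SHA256 pass to check the winning nonce.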
So there are many ways to accomplish proof of storage; we'd just need to figure out which is the best.
Sidenote: this same algo could potentially be used with different settings for immutable, forever storage of data. Since there is no continuing cost to store data, the TX fee for every message (data) byte should be very high in such a coin.
Supply. It needs to be predictable and easy to understand. It would be preferable that the standard outgoing mail always costs 1x MailCoin, albeit the coin itself should be practically infinitely divisible, and as such the supply needs to be in the trillions eventually. But these things get complicated really fast, so we need to set a schedule.
Current email use is very large, so we should have something of the same magnitude. 8640 blocks per day, so maybe 10,000 coins per block == 86,400,000 new coins per day == 31,536,000,000 new coins per year, halving every 2 years. Cumulative supply at the first halving: 63,072,000,000; at the second: 94,608,000,000; at the third (6 years): 110,376,000,000. But only halve 4 or 5 times, to keep some new supply forever for ever-increasing adoption and lost coins.
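That emission schedule as a quick script to sanity-check the cumulative figures (my own arithmetic; treating "only halving 4 or 5 times" as the reward being frozen after the 5th halving is an assumption):

```python
BLOCKS_PER_DAY = 86_400 // 10            # 8 640 blocks at a 10-second target
BLOCKS_PER_YEAR = BLOCKS_PER_DAY * 365   # 3 153 600
INITIAL_REWARD = 10_000                  # coins per block
HALVING_YEARS = 2
MAX_HALVINGS = 5                         # assumed: reward stays flat after this

def reward_at_year(year: int) -> int:
    halvings = min(year // HALVING_YEARS, MAX_HALVINGS)
    return INITIAL_REWARD >> halvings

cumulative = 0
for year in range(12):
    cumulative += reward_at_year(year) * BLOCKS_PER_YEAR
    print(f"end of year {year + 1:2d}: {cumulative:,} coins")
```

It reproduces the 63.072, 94.608 and 110.376 billion cumulative figures at the 2-, 4- and 6-year marks.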
Got all the way here? :D
Thanks for reading. Let me know what you think, and let's start a discussion on the feasibility of such a system!
I cannot develop this myself, but I would definitely back such an effort in the ways I can if anyone attempts to do something like this :) And I know I probably got many of the details incorrect.
The main point of the methods described above is ease of adoption. Without adoption any system is worthless, and with email you just cannot replace it outright (see the attempts to replace IPv4 with IPv6 ...), but you can enhance it. Adoption is critical in communication systems. (No one would have a phone if no one else had a phone.)
Addendum 1: Forgot to add about pricing and markets, read comment here
Addendum 2: Bad actors and voting
submitted by PulsedMedia to Bitcoin [link] [comments]
