
Tomas Brod

  • Content Count

  • Joined

  • Last visited

  • Time Online

    53m 12s

Community Reputation

33 Great Reputation

Social Info

About Tomas Brod

  • Rank
  • Birthday March 22

Personal Information

  • Gender
  • Location
  • Country

Contact Methods

  • Website


  • Operating System
  • Motherboard
  • Processor
    AMD Ryzen 1700
  • RAM
    16 GB DDR4 (non-Samsung)
  • SSD / HDD Storage
    120 GB SATA M.2 SSD, 250 GB HDD, 500 GB HDD, 3 TB HDD

Recent Profile Visitors

690 profile views
  1. INVOICE
     Constant Block Rewards experimental code: April, 4.5 hours.
     Monitoring superblock working (no code): April, 3.5 hours.
     Addnode command and debugging the "aries" address-sharing issues: April, 2 hours.
     Blockindex investor-CPID corruption debugging/fix: April, 3 hours.
     Superblock contract forwarding and testing: April/May, 21 hours (more than I should have).
     Removing unused fields from appcache and fixing triplicate polls: April, 1 hour.
     Adding a bunch of data-acquisition commands: Feb/Mar/April, 10 hours.
     TomasBrod, address Rz6LRCd3LWdEQX8F9eWK66rfm9YkTL51vT
     Signed version as paste here
  2. I suggest you wait a few weeks. I am sorry for the inconvenience. This is a known problem that is happening right now.
  3. This error appears when the local blockchain index is corrupted, which unfortunately happens quite often. The solution is to delete all files from the data directory EXCEPT the walletbackups folder, gridcoinresearch.conf and wallet.dat. Back up your wallet first, to be safe. When you start the wallet again, the error should be gone. To speed up syncing, use the download-chain option from the menu. No blkindex.dat file will be created; it is only mentioned in the error message.
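     As a concrete illustration of the recovery steps above, here is a minimal Python sketch. The `KEEP` set mirrors the files listed in the post; the `reset_chain_data` name is mine, and the actual data-directory path depends on your OS, so pass it in explicitly (and back up wallet.dat before running anything like this):

     ```python
     import pathlib
     import shutil

     # Files/folders the post says to preserve.
     KEEP = {"wallet.dat", "gridcoinresearch.conf", "walletbackups"}

     def reset_chain_data(datadir: pathlib.Path) -> list[str]:
         """Remove everything in `datadir` except the KEEP entries.

         Returns the names that were deleted, so the caller can review them.
         """
         removed = []
         for entry in datadir.iterdir():
             if entry.name in KEEP:
                 continue
             if entry.is_dir():
                 shutil.rmtree(entry)   # e.g. txleveldb/
             else:
                 entry.unlink()         # e.g. blk0001.dat
             removed.append(entry.name)
         return sorted(removed)
     ```

     Starting the wallet afterwards rebuilds the index from scratch; the download-chain menu option speeds that up considerably.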
  4. INVOICE
     Research and work on fixing the recent forking issues: Jan/Feb 2018, 25 hours.
     Various contributions to strengthen security: Dec 2017/Jan 2018, 29 hours.
     Unreleased user-interface and stats additions: Nov–Jan, 8 hours.
     TomasBrod, address SL77ns581aSyHsFWDYwSVS4RT3mY4icjPB
     Signed version as paste here
  5. The distributed storage and DHT keywords got me interested. The ambitions are great! Can I buy something right now? The 2 BTC minimum investment is too high. I will copy some code and ideas when you have something.
  6. Let me bring a similar-themed project to general attention: the Anderson Attack, which is/was a practical demonstration of an attack algorithm on a specific cipher (A5/1).
  7. A poll has been created on the topic of removing this project from the whitelist.
  8. The poll has been created (Transaction). There already exists a thread on Moowrap; I will cross-post there. The idea came from another Reddit thread.
  9. For the stakeminer: I looked at PIVX/src/miner.cpp and I can say that it is very similar to what we had (kernel v1/v3). The comment "// ppcoin: if coinstake available add coinstake tx" suggests that it was taken from the PPCoin codebase. At this point our stakeminer is of higher quality, as the code flow is untangled. And they are using THE SAME KERNEL as we currently do, with some minor differences (constants, order...).
     For the masternodes: My observations suggest that there is no need for such masternodes. There are enough nodes to support the flow of blocks and transactions. Even with most of the bootstrap mechanisms defunct (blame admin), there are enough nodes with public IP addresses to support a bootstrap. Every Gridcoin wallet is a full node, as no light or miner-optimized wallet has been developed for Gridcoin yet. The P2P nature (inherited from Bitcoin and purposely crippled) enables discovery of further, faster and closer connections as soon as at least one connection is made (via bootstrap).
  10. Yeah, but the CRC32 (or SHA-256, if BOINC provided it) is not going to help with that. The CRC32 is for the original file. That file is then modified (filtered and consolidated). By the definition of a hash function, the CRC32 of the original will (most likely) not match the new file; the old CRC32 has no correlation to the filtered file. The filtered file can be verified only by downloading the original file (matching the CRC), performing the same filtering and consolidation, and comparing the result (or its hash) to the file in question.
      I understand. But do you have an idea yet of how these subsets will be selected? In other words, what makes an ordinary node one of the 500 neural nodes (and then one of the 50 core nodes)? If not, that is OK; we can think about it later (or use the subset selection from DWP ;) ). What made you select X11 instead of a standard SHA-256 hash, or is it just an example? What purpose will stamping with the original CRC, original X11 and new X11 serve, and how is this information going to be used?
      Let's assume everything went fine and all 50 nodes independently fetched equal files and came up with equal hashes. Will they all share this information with the set of 100 validator nodes? Do I understand correctly that the 10% (core) will each download and process all project stats, and the 20% (validators) only one each? What is the purpose of this distinction? Do I understand correctly that the 10% (core) will not vote on their file, only the validators? What would be the purpose of the core 10% downloading, then? Once the validators reach vote consensus on their stamp (or even the final stamp), the rest of the NN (and the network) can trust it. I agree that, assuming the validators worked with secure data sources, this may be secure.
      Have you thought about what makes the nodes actually download and process the files? They could just listen for another vote and repeat it without expending network and CPU. (I did - DWP)
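      The point that the server-published CRC32 has no correlation to the filtered file can be shown with a toy example. The XML snippet and the "beacon filtering" rule here are made up purely for illustration:

      ```python
      import zlib

      # Toy stand-ins for a project stats file and its beacon-filtered version.
      original = (b"<user><cpid>aaa</cpid><credit>10</credit></user>\n"
                  b"<user><cpid>bbb</cpid><credit>5</credit></user>\n")

      # Keep only the line for the (hypothetical) beacon-listed CPID "aaa".
      filtered = b"".join(line + b"\n" for line in original.splitlines()
                          if b"aaa" in line)

      # The published CRC32 covers `original`; it says nothing about `filtered`,
      # so a node holding only the filtered file cannot check it against that CRC.
      print(hex(zlib.crc32(original)), hex(zlib.crc32(filtered)))
      ```

      Verifying the filtered file therefore means re-downloading the original, checking its CRC, redoing the same filtering, and comparing the results, exactly as the post argues.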
  11. This is very hard. If you are rich in the real world, you can easily become poor and benefit. If you are poor, you must somehow earn the money to become rich. It is even harder in P2P, because not only can one rich identity become one poor identity; one rich identity can become thousands of poor ones (a Sybil attack).
  12. I agree that WU hoarding is not a big issue. However, the problem of determining the progression of a project is significant. I would not call it a flaw, but rather a challenge. How do you determine the progression of a project?
      By the number of project credits gained (globally)? I don't think this is a viable route, as all projects are allowed to assign their credits however they want. Some projects give more credits for the same computation, and some very little. Under this approach it would be most profitable to crunch Collatz exclusively.
      Assume that each day every project progressed evenly? But that is the situation we have now.
      Allow the GRC admin (or a committee) to assign how important every project is? Yes, seriously. Some projects do makeshift research in a narrow band (YAFU?), while others cover a larger spectrum (WCG). I think you should be rewarded more for crunching WCG than (something other). But this approach brings a lightning storm down on the committee.
      Can you please elaborate on this?
  13. I think you are trying to insult me. That is not nice and you should stop. TL;DR: I am certainly not trying to force my proposals onto anyone, nor to put myself above anyone else. However, I will continue to question, even an authority. Claiming that I am anally retentive is also false; I already took a shit today (hope I understood the compound word right). And I am certain that, according to graph theory, multiple trees together make a forest [link].
      What I did was review his proposal, and those are not unrealistic edge cases. I asked for a clarification/explanation and pointed out an invalid assumption. It was meant to help him refine his idea. As you failed to notice, I have not dismissed it. None of my proposals mention a GUI, but I expect there would be one that is fairly easy to use. I get that some of these are exaggerated examples, so I will not bother correcting them.
      You made the stake kernel that takes the average user's magnitude into account insecure, and I do not need to explain that it could be exploited. I then made this v8, with the support of the community, as a temporary quick solution. It was not my intention to require the average user to invest large amounts of money in order to stake. Quite the opposite: I worked hard on coming up with different solutions to pay users (and other authors) faster.
      I agree that my proposals are not completely pure; they do contain some intricacies and heuristics. But they all have a reason and are necessary to prevent tampering. If you can come up with a system that is less complex, that would be good. I certainly do not have a doctorate, and if I continue wasting time arguing with you I might not even get a bachelor's. I am aiming for simple solutions, and you seem to be aiming for security through obscurity. If the system is designed well and is working, you do not need to explain how well it is designed, because it would be visible. Because they are your idea and you understand them better? Maybe.
      Btw, what are trigger heights in Dash, where can I read more, and how do you plan to use them? This is a discussion. Bitcoin (and Gridcoin) does check that too: every wallet node checks all transactions and won't allow any cheat. The checkblock/acceptblock path is not secure, because parts are executed in the wrong order on data that might be incomplete. I do not question iFoggz's dedication to the design of the associator; I believe he can arrive at a solution that we would all find secure. Your claim "that's secure" is not very convincing.
      You contradict yourself. You just described 4 piecemeal solutions that are yet-unproven conceptual ideas made by you. What is more, they are not even fully specified, so we do not know whether they would stand up to peer review or not. I certainly do not want to be the only one to understand the workings. And again, after the selected proposal is implemented it will of course be tested in a safe environment before it can be deployed for money handling. Each piece of Jringo's GRC4001 roadmap could be unit tested. That is why peer review is necessary: neither you nor I can write perfect code (or a perfect design). I doubt that; the history and recent actions speak otherwise. I agree. This is very much my view on the matter of rebasing, but that does not mean it can't be done transparently. Thanks for understanding.
  14. Now this is better specified. Filtering the raw XML by the beacon list is, I think, a good idea: it reduces the amount of data to transfer and store. But let me quote Ravon/Marco, in the context of verifying them against a project server.
      I understand and fully agree that what you wrote here will work. I understand that somehow these "50 nodes" will be selected, and only they will download the full stats files from the whitelist URLs, filter them, and build a super-majority on the "GRC-stamped hash (of the filtered file)". Then this (filtered and processed) data will be shared with the rest of the NN and the rest of the network, and they will trust it because the GRC hash matches, right?
      Now please explain how the 100-node set would use the resulting CRC32 of the range query to verify the filtered, GRC-hash-stamped files. These filtered files will surely not match the project CRC. Also, you said that we have "500 NN nodes", 50 of which would download the full files and 100 of which would do a range query; what will the remaining 350 nodes be doing? And a last question: do you have an idea how the "50 nodes", the 100, and the "500 NN nodes" are going to be chosen from all the nodes in the network?
      Why do I suddenly understand and agree with your idea? Because if you read my Dynamic Witness Participation proposal carefully, you will notice that what you refer to as the "50 nodes" corresponds almost exactly to my dynamic witness set.
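      For readers unfamiliar with the range query being discussed, here is a minimal Python sketch of one. The URL is hypothetical; the relevant fact (not stated in the thread) is that a GZIP file's fixed 10-byte header stores a 4-byte little-endian modification time at offset 4 (RFC 1952), which is presumably what such a probe would compare:

      ```python
      import urllib.request

      # Hypothetical stats URL; any server that honours Range requests behaves the same.
      req = urllib.request.Request(
          "https://boinc.example.org/stats/user.gz",
          headers={"Range": "bytes=0-15"},  # ask for only the first 16 bytes
      )

      # Fetching would go roughly like this (commented out to avoid network I/O):
      # with urllib.request.urlopen(req) as resp:
      #     header = resp.read()                            # 16 bytes, not ~300 MiB
      #     mtime = int.from_bytes(header[4:8], "little")   # GZIP MTIME field
      ```

      Such a probe lets a node check whether the file on the server changed without downloading the whole body, which is exactly why it is cheap for the node yet still one request per participant hitting the project server.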
  15. CRC32 is not a cryptographic function and cannot be used to verify the authenticity of a file. An attacker can easily create a spoofed stats XML file with the same CRC32 as the project server's, then proactively inject this file into the NN and trick nodes into voting for it, because they won't be able to tell that the file was spoofed. There is an O(1) algorithm to generate CRC32 collisions (different file, same hash) in a matter of (micro?)seconds.
      Second, even if the nodes were sharing the ~300 MiB stats file in a P2P fashion, they would still need to contact the project server and retrieve the GZIP header using a range query. Granted, this response is much shorter (16 bytes vs ~300 MiB), but it is still a request to the server. Depending on the number of participants, this many requests might either DDoS the server to death or agitate the server admins against you for loading their servers beyond fair use.
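      The collision claim above can be demonstrated directly. CRC32 is linear over GF(2), so a constant amount of work finds 4 bytes which, appended to any chosen file, force its checksum to any target value. This is a well-known property of CRCs; the function name and toy data below are mine:

      ```python
      import zlib

      POLY = 0xEDB88320  # reflected CRC-32 polynomial, as used by zlib

      def forge_crc32(data: bytes, target: int) -> bytes:
          """Return 4 bytes which, appended to `data`, force its CRC-32 to `target`."""
          reg = zlib.crc32(data) ^ 0xFFFFFFFF   # CRC register state after `data`
          want = target ^ 0xFFFFFFFF            # register state we need at the end
          # Undo 32 forward bit-steps of the CRC (each step is linear, hence invertible).
          for _ in range(32):
              if want & 0x80000000:
                  want = ((want ^ POLY) << 1) | 1
              else:
                  want <<= 1
          # Processing 4 bytes p from state reg ends in M^32(reg ^ p),
          # so the patch is p = M^-32(want) ^ reg, emitted little-endian first.
          return (want ^ reg).to_bytes(4, "little")

      legit = b"<stats>real project data</stats>"
      spoof = b"<stats>inflated credits!</stats>"
      spoof += forge_crc32(spoof, zlib.crc32(legit))
      assert zlib.crc32(spoof) == zlib.crc32(legit)  # same CRC32, different content
      ```

      This is why a CRC can only detect accidental corruption, never adversarial tampering; that requires a cryptographic hash such as SHA-256.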



News, information, and discussions about cryptocurrencies, blockchains, technology, and events. Blockchaintalk is your source for advice on what to mine, technical details, new launch announcements, and advice from trusted members of the community. Cryptocurrencytalk is your source for everything crypto. We love discussing the world of cryptocurrencies.


