Merge pull request #23 from b-g-goodell/master

Uploading BP and Spectre code
This commit is contained in:
Riccardo Spagni 2018-06-19 17:57:07 +02:00 committed by GitHub
commit 72a29f0a9d
No known key found for this signature in database
GPG key ID: 4AEE18F83AFDEB23
31 changed files with 5160 additions and 2 deletions

@@ -5,8 +5,7 @@ Copyright (c) 2014-2017, The Monero Project
## Research Resources
Web: [getmonero.org](https://getmonero.org)
Forum: [forum.getmonero.org](https://forum.getmonero.org)
Mail: [lab@getmonero.org](mailto:lab@getmonero.org)
IRC on Freenode:
- [#monero-research-lab](http://webchat.freenode.net/?randomnick=1&channels=%23monero-research-lab&prompt=1&uio=d4)
- [#monero-dev](http://webchat.freenode.net/?randomnick=1&channels=%23monero-dev&prompt=1&uio=d4)

BIN audits/BP/README.pdf (new binary file, not shown)
BIN audits/BP/SoW-Kudelski.pdf (new binary file, not shown)
BIN audits/BP/SoW-QuarksLab.pdf (new binary file, not shown)
BIN audits/BP/SoW-X41.pdf (new binary file, not shown)

meta/Readme.md Normal file
@@ -0,0 +1 @@
@@ -0,0 +1,283 @@
[2018-03-12 10:46:28] <sarang> A businessperson told me they were the answer to problems
[2018-03-12 10:46:37] <sarang> How else will we reach consensus?
[2018-03-12 10:47:07] <moneromooo> You just obey your master(node).
[2018-03-12 10:49:38] <endogenic> but how will i know who my master is if i am not a master
[2018-03-12 10:49:50] <endogenic> dun dun dunnnnnnn
[2018-03-12 10:56:19] <suraeNoether> no one is a master, everyone is flawed, the big lebowski is the latest incarnation of Buddha, etc etc
[2018-03-12 10:59:24] <sarang> that's, like, your opinion
[2018-03-12 10:59:25] ⇐ seacur quit (~seacur@unaffiliated/seacur): Quit: ZNC - 1.6.0 - http://znc.in
[2018-03-12 11:00:20] <rehrar> so... :P
[2018-03-12 11:00:41] <suraeNoether> So, greetings everyone!
[2018-03-12 11:01:01] <MoroccanMalinois> Hi
[2018-03-12 11:01:18] <iDunk> Hi
[2018-03-12 11:01:18] <sarang> yo
[2018-03-12 11:02:00] <suraeNoether> Agenda today is 1) hello, 2) BP audit update 3) other stuff Sarang has been reading/working on, 4) stuff I've been working on, 5) obligatory update on MAGIC, 6) anything anyone else wanna talk about?
[2018-03-12 11:02:37] <suraeNoether> oh, I also want to talk about: how to educate our users about proper key usage and proper privacy practices
[2018-03-12 11:02:58] <ArticMine> hi
[2018-03-12 11:03:49] → Osiris1 joined (~Car@unaffiliated/osiris1)
[2018-03-12 11:03:53] <endogenic> o/
[2018-03-12 11:04:32] <suraeNoether> so, sarang: BP audit update? you gave us a brief one earlier
[2018-03-12 11:04:39] <suraeNoether> but let's recap for folks who weren't here
[2018-03-12 11:05:10] <sarang> sure thing
[2018-03-12 11:05:24] <sarang> We have raised funds for 3 audits: Benedikt Bunz, QuarksLab, Kudelski
[2018-03-12 11:05:34] <sarang> I'm finalizing contracts with them
[2018-03-12 11:05:54] <sarang> We will likely need to do supplemental funding later due to market tomfoolery
[2018-03-12 11:06:17] <sarang> I will be working with the groups during their audits, which will take place between this month and June
[2018-03-12 11:06:52] → msvb-lab joined (~michael@x55b54289.dyn.telefonica.de)
[2018-03-12 11:06:53] <sarang> That's the brief version
[2018-03-12 11:07:02] <endogenic> may i ask a question regarding our auditing efforts in general?
[2018-03-12 11:07:06] <sarang> plz
[2018-03-12 11:07:06] <endogenic> or should i wait til end?
[2018-03-12 11:07:23] <sarang> fire away
[2018-03-12 11:07:44] <endogenic> so i'm also wondering about vulnerabilities in the code in general - i know we have the bounty system for that but it's not got quite the same incentive system
[2018-03-12 11:07:54] <endogenic> just wondering if it makes sense to apply this model to other parts of the code
[2018-03-12 11:08:00] <sarang> Hiring auditors, you mean?
[2018-03-12 11:08:03] <endogenic> yeah
[2018-03-12 11:08:10] <sarang> I'm seeing more and more support for it, yes
[2018-03-12 11:08:13] <endogenic> or an FFS for an auditor
[2018-03-12 11:08:25] <suraeNoether> endogenic: so there is this clever idea
[2018-03-12 11:08:37] <sarang> At least for components of the code, like multisig or BPs that have a defined scope
[2018-03-12 11:08:37] <suraeNoether> that greg maxwell and blockstream are using for their libsecp256k1 library
[2018-03-12 11:08:49] <suraeNoether> which has a badass test suite
[2018-03-12 11:09:15] <endogenic> sarang: right i suppose i'm thinking more from the security and cracking standpoint .. like, can we confirm what % of data input fuzzing we've done and where / if / how the code fails
[2018-03-12 11:09:16] <endogenic> etc
[2018-03-12 11:09:31] <suraeNoether> where they aren't providing bug bounties for the actual library, but for the unit test suite: if you can upload a new unit test that the current system fails, and yet still passes all current unit tests, you get the bounty
[2018-03-12 11:09:38] <sarang> That's more of a question for moneromooo I think
[2018-03-12 11:09:47] <endogenic> that sounds interesting surae
[2018-03-12 11:09:56] <suraeNoether> it incentivizes things very nicely
[2018-03-12 11:10:03] <suraeNoether> but it requires a really great test suite
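The bounty rule suraeNoether describes has a crisp shape: a submitted test earns the bounty only if the current implementation fails it while still passing the whole existing suite. A minimal sketch of that predicate (hypothetical helper names, not libsecp256k1's actual harness):

```python
def qualifies_for_bounty(candidate_test, implementation, existing_tests):
    # Rule described above: the candidate test wins iff the current
    # implementation passes every existing test but fails the new one,
    # i.e. the suite had a genuine coverage gap.
    passes_existing = all(t(implementation) for t in existing_tests)
    return passes_existing and not candidate_test(implementation)

# Toy usage: a buggy "abs" that the existing suite fails to catch.
buggy_abs = lambda x: x
existing = [lambda f: f(3) == 3]
candidate = lambda f: f(-3) == 3
assert qualifies_for_bounty(candidate, buggy_abs, existing)
assert not qualifies_for_bounty(candidate, abs, existing)
```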
[2018-03-12 11:10:09] <sarang> yes indeed
[2018-03-12 11:10:27] <moneromooo> I don't think we can easily determine a percentage of inputs for fuzzing.
[2018-03-12 11:10:41] <endogenic> well that was just one example
[2018-03-12 11:10:50] <endogenic> i cant take responsibility to define all the jobs an expert cracker would do :P
[2018-03-12 11:11:15] <suraeNoether> if we are going to start putting money into auditors, then we should consider putting a proportion of that toward beefing up our test suites. perhaps require that auditors propose new unit tests, or something along those lines, in addition to a thumbs up/down and a list of recommended changes
[2018-03-12 11:11:22] <endogenic> yeah
[2018-03-12 11:11:28] <endogenic> i mean we want to record the work which was done
[2018-03-12 11:11:31] <endogenic> and tests can be nice way to do that
[2018-03-12 11:11:36] <sarang> yes
[2018-03-12 11:11:48] <suraeNoether> and that way, perhaps after a year or two, we will have a test suite sufficiently beefy to incentivize properly
[2018-03-12 11:11:55] <suraeNoether> i know it's kind of a long-term plan
[2018-03-12 11:12:01] <sarang> Too bad it's sexier to run an FFS for an auditor than for writing test suites :(
[2018-03-12 11:12:15] <suraeNoether> short of paying some smart people to audit our whole lie-berry and come up with test suites across the board
[2018-03-12 11:12:19] <suraeNoether> yeah, no kidding
[2018-03-12 11:12:22] <endogenic> sarang it can be pitched in the same way
[2018-03-12 11:12:27] <endogenic> they audit by the very activity
[2018-03-12 11:12:36] <rehrar> do unit tests require coding? (sorry if this is a stupid question)
[2018-03-12 11:12:42] <endogenic> yep
[2018-03-12 11:12:42] <sarang> yes
[2018-03-12 11:12:45] <rehrar> blerg
[2018-03-12 11:12:52] <endogenic> it's not that bad tho rehrar
[2018-03-12 11:12:57] <endogenic> it's more about understanding what you are testing for
[2018-03-12 11:12:58] <sarang> The goal is to have complete scope
[2018-03-12 11:13:00] <rehrar> it is when my coding is 1/10 :D
[2018-03-12 11:13:41] <sarang> Any questions on the current audit that anyone has?
[2018-03-12 11:13:53] <sarang> Kudelski will be the first to go
[2018-03-12 11:13:54] <moneromooo> When does the C++ based one start ?
[2018-03-12 11:14:02] <sarang> They're available this month
[2018-03-12 11:14:23] <moneromooo> More precisely ?
[2018-03-12 11:14:47] <sarang> TBD once we sign with them, but I can check on more specific dates if you need them
[2018-03-12 11:15:55] <sarang> Anything in particular?
[2018-03-12 11:19:12] <suraeNoether> ok, well
[2018-03-12 11:19:29] <ArticMine> are all the audits going through ostif?
[2018-03-12 11:19:37] <sarang> Two of them are
[2018-03-12 11:19:41] <sarang> Benedikt will be paid directly in XMR
[2018-03-12 11:19:45] <suraeNoether> ArticMine: I believe Buenz is independent
[2018-03-12 11:19:46] <suraeNoether> ^
[2018-03-12 11:20:00] <sarang> OSTIF's role is just to handle the payment
[2018-03-12 11:20:15] <sarang> They'd appreciate being thanked in our materials for helping to organize the groups and handle the exchange
[2018-03-12 11:21:05] <suraeNoether> okay, if there are no more questions on BPs
[2018-03-12 11:21:07] <sarang> So this will be an ongoing process over the next few months
[2018-03-12 11:21:14] <sarang> expect little news until someone finishes
[2018-03-12 11:21:16] <suraeNoether> Sarang: what else have you been reading/doing?
[2018-03-12 11:21:32] <sarang> I've been reviewing the latest multisig draft from suraeNoether
[2018-03-12 11:21:32] <suraeNoether> ok in that case we will stop bringing it up every meeting for 3 weeks or so :P
[2018-03-12 11:21:48] <sarang> prepping a submission for defcon china
[2018-03-12 11:21:55] <suraeNoether> that's cool
[2018-03-12 11:21:58] <sarang> prepping a talk in portland on monero security
[2018-03-12 11:22:15] <sarang> reading up on some papers involving mixing and ring representations
[2018-03-12 11:22:29] <sarang> hoping to get back to some math shortly for pippenger's algorithm
[2018-03-12 11:22:37] <sarang> for speedier multiexp
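For context on the multiexp speedup sarang mentions: Pippenger-style bucket multi-scalar multiplication computes a sum of scalar multiples with far fewer group operations than doing each one separately. A toy sketch using plain integer addition as a stand-in for the curve group operation (illustrative only, not Monero's actual implementation):

```python
def multiexp_pippenger(scalars, points, c=4, add=lambda a, b: a + b, zero=0):
    """Bucket-method multi-scalar multiplication: sum_i scalars[i]*points[i].

    Integers stand in for group elements; only `add` (the group op) and
    `zero` (the identity) are used, as would be the case on a curve.
    """
    if not scalars:
        return zero
    nbits = max(s.bit_length() for s in scalars)
    nwin = (nbits + c - 1) // c  # number of c-bit windows
    result = zero
    for w in reversed(range(nwin)):
        for _ in range(c):                # shift result up by one window
            result = add(result, result)  # (doubling on a real curve)
        buckets = [zero] * (1 << c)
        for s, p in zip(scalars, points):  # drop each point in its bucket
            digit = (s >> (w * c)) & ((1 << c) - 1)
            if digit:
                buckets[digit] = add(buckets[digit], p)
        # running-sum trick: sum_j j*buckets[j] in about 2*2^c additions
        running, acc = zero, zero
        for j in reversed(range(1, 1 << c)):
            running = add(running, buckets[j])
            acc = add(acc, running)
        result = add(result, acc)
    return result

# With integer stand-ins this just computes the dot product:
assert multiexp_pippenger([3, 5], [7, 11]) == 3 * 7 + 5 * 11
```

The window width `c` trades bucket-summation cost against the number of windows; real implementations pick it from the input size.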
[2018-03-12 11:22:53] <sarang> more administrative work lately, unfortunately
[2018-03-12 11:23:17] <sarang> I submitted a monthly report recently that details other efforts
[2018-03-12 11:23:42] <sarang> linky linky https://forum.getmonero.org/9/work-in-progress/89005/funding-for-sarang-at-mrl-for-q1-2018?page=&noscroll=1#post-94324
[2018-03-12 11:25:05] <suraeNoether> any other questions for sarang?
[2018-03-12 11:25:28] <suraeNoether> I'd like to remind the crowd that sarang's FFS funding round I believe has been posted, although I'm not sure if it's moved to Funding Required yet
[2018-03-12 11:26:18] <rehrar> There's quite a few things that need to be moved to funding required
[2018-03-12 11:26:27] <rehrar> we should all poke fluffypony and luigi1111
[2018-03-12 11:26:40] <sarang> Mine is still in Open and not in Funding yet
[2018-03-12 11:26:52] <sarang> There hasn't been much activity regarding it anyway
[2018-03-12 11:27:16] <sarang> Not a huge rush. I write them in advance to allow for discussion if needed
[2018-03-12 11:27:56] <sarang> How about suraeNoether? Your turn
[2018-03-12 11:28:05] <suraeNoether> 4) Stuff I've been working on. Multisig paper, formal documentation work for monero, and a formal description of EABE attacks.
[2018-03-12 11:28:17] <suraeNoether> For the multisig paper, I just received notes from sarang and I'll be composing a draft for review by someone outside of MRL.
[2018-03-12 11:28:34] <sarang> suraeNoether: I'll have remaining notes added to your doc this afternoon
[2018-03-12 11:28:38] <suraeNoether> right now I need to copy-paste some intro/notation stuff from a previous version of the paper, fix some references, stuff like that, and then take sarang's changes into account
[2018-03-12 11:28:40] <suraeNoether> great thanks
[2018-03-12 11:28:49] <suraeNoether> Once the document is a little less ugly, i'll link to it again
[2018-03-12 11:29:27] <sarang> Now on to MAGIC per the agenda?
[2018-03-12 11:29:31] <suraeNoether> I've been attempting to write up a formal description of the statement being proven in a given monero ringCT authentication, for two reasons. For one thing, I think that our approach for threshold multisig could be generalizable in a way that may make it fun to publish. But I'm not sure if this description has appeared before in the literature, so I'm looking around and contacting some folks
[2018-03-12 11:30:21] <suraeNoether> For another reason, because I haven't seen it written out explicitly before.
[2018-03-12 11:30:43] <suraeNoether> And the EABE attack is concerning enough to me to be writing up some statistical arguments about churn (sgp_[m] ping)
[2018-03-12 11:31:22] <suraeNoether> i'll be linking all these documents in the next week
[2018-03-12 11:31:37] <suraeNoether> so far it looks like 3 sketches of possible papers for publication, even if not as peer reviews, as whitepapers
[2018-03-12 11:31:41] <suraeNoether> after multisig is running
[2018-03-12 11:31:50] <suraeNoether> anyway, onto MAGIC
[2018-03-12 11:32:08] <suraeNoether> i feel like folks have a lot of questions about MAGIC, so I'll ask if anyone has any questions
[2018-03-12 11:33:37] <sarang> Question I've seen is: what types of things will it fund, and how will they be determined?
[2018-03-12 11:34:39] <luigi1111w> sarang moved
[2018-03-12 11:34:43] <luigi1111w> moneromooo ready for funding
[2018-03-12 11:35:11] <rehrar> thanks luigi
[2018-03-12 11:35:36] <sarang> ty luigi1111w
[2018-03-12 11:35:38] <suraeNoether> Good question. The overall scope will be: 1) scholarships to undergraduates in the US 2) grants to graduate students in the US, 3) grants to researchers in the US, 4) grants to schools globally with an emphasis on secondary and tertiary education
[2018-03-12 11:35:50] <suraeNoether> how much of that we can actually do depends on our funding
[2018-03-12 11:36:08] <suraeNoether> oh 5) sponsoring technical conferences in cryptocurrencies is also on that list
[2018-03-12 11:36:26] <sarang> suraeNoether: why restrict scholarships and grad grants to US?
[2018-03-12 11:36:31] <suraeNoether> so our first year, my goal is to provide a few scholarships, sponsor the first monero conference, and fix up a school in south africa
[2018-03-12 11:36:33] <rehrar> what other ways of funding are you searching for besides FFS stuff?
[2018-03-12 11:36:41] <endogenic> do you have any criteria to decide what is good research that gets funded?
[2018-03-12 11:36:42] <suraeNoether> sarang: because i feel like we already are going to have lots of applications
[2018-03-12 11:37:05] <endogenic> will decision making ever get delegated?
[2018-03-12 11:37:14] <sarang> The org will need established principles for determining its choices
[2018-03-12 11:37:23] <sarang> to stay transparent and accountable to its donors
[2018-03-12 11:37:41] <suraeNoether> rehrar: we'll be soliciting funding and grants from as many places as possible. one delightful property of non-profits in america: anything they spend that isn't on overhead must go to charitable purposes or other non-profits. so non-profits like the bill & melinda gates foundation give lots of money to other non-profits.
[2018-03-12 11:37:51] <sarang> rehrar: I mentioned the kernel of this idea to some fund managers, who said their groups were interested in supporting nonprofits; this may lead to new funding avenues
[2018-03-12 11:38:23] <rehrar> that's pretty awesome
[2018-03-12 11:38:29] <suraeNoether> endogenic: I haven't started thinking about the research end of MAGIC yet because i'm assuming the first year we won't necessarily get enough money to manage to give out substantial research grants
[2018-03-12 11:38:40] <endogenic> sorry replace research with project
[2018-03-12 11:38:44] <endogenic> i misspoke
[2018-03-12 11:38:58] <sarang> Sounds like there would be a clear delineation between scholarships and grants
[2018-03-12 11:39:15] → rpitricker0 joined (~rpitricke@unaffiliated/somewhatclueless)
[2018-03-12 11:39:26] <suraeNoether> ah yeah well in general, like sarang says, we need established principles for determining our choices, and this is something that needs to be discussed at our board meetings. we want to be very public, and i want to make our board meetings available as youtube videos or whatever... pending agreement by the other board members (some of whom have not yet been picked)
[2018-03-12 11:39:30] <suraeNoether> sarang yes
[2018-03-12 11:39:32] ⇐ Keniyal quit (~Keniyal@unaffiliated/keniyal): Remote host closed the connection
[2018-03-12 11:39:47] <sarang> Grants would have the expectation of deliverables
[2018-03-12 11:40:01] → nickman70 joined (~nickman70@unaffiliated/somewhatclueless)
[2018-03-12 11:40:01] → somewhatclueless joined (~somewhatc@unaffiliated/somewhatclueless)
[2018-03-12 11:40:12] ⇐ lithiumpt quit (~lithiumpt@84.39.112.114): Ping timeout: 265 seconds
[2018-03-12 11:40:18] <sarang> Scholarships are to increase the talent pool and help perhaps underrepresented student groups become involved in the space
[2018-03-12 11:40:19] <suraeNoether> scholarships for undergrads, it is my intention, to mainly be aimed at folks in law or economics or computer science or math. Not exactly the traditional STEM mix. however, i don't want an undergrad to worry about losing their money if they decide to study graph theory instead of bitcoin
[2018-03-12 11:40:27] <suraeNoether> sarang ^ yep
[2018-03-12 11:40:33] <suraeNoether> i kind of want the scholarships nearly strings-free
[2018-03-12 11:40:37] <sarang> However, the devil's in the details
[2018-03-12 11:41:28] <suraeNoether> as far as funding goes, though, i'm matching up to 5% of donations up to 50 XMR for this venture. If we manage to get 1000 XMR, I donate 50 XMR to the cause and we'll have 1050 XMR for the first year
[2018-03-12 11:41:29] <sgp_[m]> suraeNoether a little late to chime in, but I would love to help you with the EAE paper if there's any way I can
[2018-03-12 11:41:40] <suraeNoether> sgp_[m]: PM me
[2018-03-12 11:41:56] ⇐ nickman70 quit (~nickman70@unaffiliated/somewhatclueless): Remote host closed the connection
[2018-03-12 11:41:56] ⇐ somewhatclueless quit (~somewhatc@unaffiliated/somewhatclueless): Remote host closed the connection
[2018-03-12 11:41:56] ⇐ rpitricker0 quit (~rpitricke@unaffiliated/somewhatclueless): Remote host closed the connection
[2018-03-12 11:42:09] → lithiumpt joined (~lithiumpt@84.39.112.114)
[2018-03-12 11:42:34] <suraeNoether> and if we can manage that much XMR the first year, we can pay for like 5 scholarships for undergrads, 2 grad student grants, fix up a school or two in ZA, and host the first monero conference (with no entry fee)
[2018-03-12 11:42:34] <sarang> This is an interesting pilot project that could take many different directions
[2018-03-12 11:42:41] <suraeNoether> and still have some XMR leftover for the next year
[2018-03-12 11:42:53] <sarang> I think it'll be important to keep the scope balanced between too narrow and too broad
[2018-03-12 11:43:16] <suraeNoether> my primary concern right now is determining criteria for handing out scholarships
[2018-03-12 11:43:19] <sarang> An established mission is gonna be essential to establishing and maintaining this direction
[2018-03-12 11:43:47] <suraeNoether> personally i think the best students are the ones who sucked the first year or three and then completely turn around, but that's just rewarding students with a past similar to myself
[2018-03-12 11:43:55] <rehrar> how many board members and who is under consideration?
[2018-03-12 11:44:08] <rehrar> will you guys need a website?
[2018-03-12 11:44:14] <suraeNoether> yes
[2018-03-12 11:44:24] <sarang> You'll need to use the application process to determine who is excited about the crypto space and not just eager to hop on a money train
[2018-03-12 11:44:37] <rehrar> msvb-lab will get mad at me if I talk about other people's websites before I finish Kastelo
[2018-03-12 11:44:58] <sarang> I wouldn't expect the model student to know everything about this space, but I want to ensure that the recipients are those with a strong desire to succeed in it for good reasons
[2018-03-12 11:45:42] <msvb-lab> rehrar: Yes, very mad. It's our nature.
[2018-03-12 11:46:31] <suraeNoether> me, sarang, the operations manager from Globee, my advisor at Clemson university (Jim Coykendall), my wife are going to be the first board members.
[2018-03-12 11:46:34] <sarang> rehrar: we'll advertise with hip videos on FaceSpace and SnapTime and InstantGram where students like to hang
[2018-03-12 11:46:43] <suraeNoether> if anyone has an issue with my wife being on the board: MAGIC was partly her idea, she has 7 years experience teaching in higher education, and she isn't being paid
[2018-03-12 11:47:05] <sarang> Should there be broader representation?
[2018-03-12 11:47:14] <suraeNoether> I'd be happy including more board members
[2018-03-12 11:47:16] <sarang> Or is this sufficient?
[2018-03-12 11:47:20] <rehrar> I'm a Mexican if we need diversity :P
[2018-03-12 11:47:28] <sarang> I'm not leaning one way or another, just wondering if it is
[2018-03-12 11:47:29] <suraeNoether> rehrar: you are also in NM yeah?
[2018-03-12 11:47:34] <rehrar> I am
[2018-03-12 11:47:38] <suraeNoether> and NM has liiiike some serious education problems
[2018-03-12 11:47:39] <suraeNoether> iirc
[2018-03-12 11:47:42] <rehrar> Come down and we'll have a party trip to Juarez
[2018-03-12 11:47:44] <rehrar> yes
[2018-03-12 11:47:46] <rehrar> we really do
[2018-03-12 11:47:49] <suraeNoether> WELCOME ABOARD REHRAR
[2018-03-12 11:47:52] <rehrar> I'm working on this myself actually in my free time
[2018-03-12 11:48:07] <rehrar> We're like the second worst in the nation
[2018-03-12 11:48:41] <suraeNoether> cool email me at surae@getmonero.org so I can get you on a list
[2018-03-12 11:48:46] <suraeNoether> okay, lastly
[2018-03-12 11:48:51] <rehrar> I'm on a lot of NSA lists already, but sure.
[2018-03-12 11:48:53] <endogenic> yes NM does
[2018-03-12 11:49:09] <endogenic> rehrar is the only beacon
[2018-03-12 11:49:11] <rehrar> endogenic came here and saw the people sobbing in the streets
[2018-03-12 11:50:12] <suraeNoether> okay, lastly: I wanted to talk about how to educate the community about key safety with MoneroV and best practices (currently, I'm not convinced churn is non-negligibly helpful under a very specific threat model)
[2018-03-12 11:51:30] <rehrar> What about a short one minute video?
[2018-03-12 11:51:46] <suraeNoether> would be very convenient to link to
[2018-03-12 11:52:09] <rehrar> We can put it on our soon-to-come media.getmonero.org as well as youtube and stuff
[2018-03-12 11:52:09] <suraeNoether> i've been thinking about starting whiteboard youtube videos explaining how cryptocurrencies work. this could be the first one.
[2018-03-12 11:52:36] <rehrar> suraeNoether, talk with me later about Privacademy.
[2018-03-12 11:53:10] <nioc> something that would allow an idiot like me to know exactly what to do
[2018-03-12 11:53:42] <sarang> Just paste your private keys here. We'll print them out and put them in a safe for you
[2018-03-12 11:53:48] <Osiris1> ;)))
[2018-03-12 11:53:50] <sarang> OR DON'T
[2018-03-12 11:53:52] <Osiris1> nice
[2018-03-12 11:54:01] <nioc> thx
[2018-03-12 11:54:02] <ArticMine> My concern with this is that we do not end up protecting MoneroV from the claws of the bear
[2018-03-12 11:54:05] <suraeNoether> nioc, looks to me like you're a pretty smart fella, if the past few years have shown us anything about anticipating change. :P ok. Does anyone have any questions, concerns, comments? I'll be posting my next funding request this afternoon. I have a hard time gauging the mood of an IRC chat room
[2018-03-12 11:54:19] <suraeNoether> ArticMine: care to elaborate?
[2018-03-12 11:54:26] <endogenic> i heard nioc is a cabbage
[2018-03-12 11:54:27] <endogenic> literally
[2018-03-12 11:54:28] <sarang> I don't have a good sense for how many users will fall for V
[2018-03-12 11:55:01] → rpitricker0 joined (~rpitricke@unaffiliated/somewhatclueless)
[2018-03-12 11:55:02] → somewhatclueless joined (~somewhatc@unaffiliated/somewhatclueless)
[2018-03-12 11:55:11] <ArticMine> Basically i see MoneroV as an economic attack. If nobody claims their MoneroV then its price will be significantly inflated
[2018-03-12 11:55:26] → nickman70 joined (~nickman70@unaffiliated/somewhatclueless)
[2018-03-12 11:56:48] <ArticMine> So we need a process for people to claim their MoneroV safely, without impacting their own and others' privacy, and, to be blunt, to dump the MoneroV on the market at the appropriate time
[2018-03-12 11:57:13] <ArticMine> That is where the claws of the bear come in
[2018-03-12 11:57:45] <sarang> What do you mean ArticMine? Spending an existing output on the V chain with random ring is immediately deanon
[2018-03-12 11:57:58] <sarang> and contributes to the eventual deanon of your ringmates
[2018-03-12 11:58:25] <rehrar> I see what he's saying though. If not a lot of people claim theirs, then it's a lot of immediate 'holders' which might artificially inflate the price
[2018-03-12 11:58:30] <ArticMine> The only way I can see this working is a spend on both chains with a significant number of overlapping rings
[2018-03-12 11:58:33] <rehrar> which in turn, might make it seem like MoneroV was somewhat successful
[2018-03-12 11:58:47] <rehrar> which also in turn might make other people try to do something similar with Monero
[2018-03-12 11:59:28] <ArticMine> If the price of MoneroV crashes then this becomes a powerful deterrent for the future
[2018-03-12 11:59:47] <suraeNoether> ArticMine: i disagree. airdrops are designed to crash like that
[2018-03-12 11:59:48] <sarang> Tough part is that a given user might not care about their transaction being deanon. But it's convincing them that it contributes to others that's tricky
[2018-03-12 11:59:54] <suraeNoether> they aren't designed for egalitarian long-term pegs
[2018-03-12 11:59:56] <Olufunmilayo> sarang, I thought (mind you I am late to the party), that spending an output on both chains with the same ring was theoretically "safe-ish" to some extent
[2018-03-12 12:00:12] <suraeNoether> Olufunmilayo: only if all your ringmates do the same, and all their ringmates, etc
[2018-03-12 12:00:26] <sarang> And their code needs to support it
[2018-03-12 12:00:30] <ArticMine> It is, but tricky to do
[2018-03-12 12:00:31] <sarang> they've shown they don't GAF
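The deanonymization sarang and suraeNoether are describing is set intersection: two ring signatures carrying the same key image must each contain the real spent output, so it lies in the intersection of the rings. A toy illustration (hypothetical output labels, not real chain data):

```python
def linked_real_spend(ring_a, ring_b):
    """Rings from two chains signed with the same key image must each
    contain the true spent output, so it lies in their intersection.
    A singleton intersection is full deanonymization."""
    return set(ring_a) & set(ring_b)

# Independently chosen decoys almost never overlap, so only the real
# output survives the intersection:
ring_monero = ["out_17", "out_42", "real_out"]
ring_fork = ["out_03", "real_out", "out_88"]
assert linked_real_spend(ring_monero, ring_fork) == {"real_out"}
```

This is why spending on the fork chain with a fresh random ring is "immediately deanon", and why reusing the exact same ring on both chains avoids it, but only if everyone else's rings cooperate too.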
[2018-03-12 12:01:05] <ArticMine> Then we will have to release a patched Monerov
[2018-03-12 12:01:36] <ArticMine> It does not have to be "official"
[2018-03-12 12:01:56] <sarang> One idea I like is making it easier to fork the Monero codebase and blockchain safely
[2018-03-12 12:02:08] <rehrar> *shrug* I may be thinking a bit casually here, but since this is the first time something like this is happening, and we're already going to be getting our upped ringsize before the fork, I think we can somewhat safely wait this one out and see how it plays
[2018-03-12 12:02:08] <sarang> So for future attempts, they'd have to actively break that safety
[2018-03-12 12:02:19] <sarang> and then we can give them bad publicity for actively hurting users
[2018-03-12 12:02:36] <suraeNoether> i wonder if they doubly-hash their key images. so you check if pHp(P) is in the key image set or if pHp(pHp(P))
[2018-03-12 12:02:41] <suraeNoether> or if they could rather
[2018-03-12 12:03:07] <ArticMine> By the way the network effect is less because of spent RingCT outputs that will not be compromised
[2018-03-12 12:03:43] <rehrar> Alright, I gotta split. Thanks for the meeting. Catch you guys later
[2018-03-12 12:03:46] <Olufunmilayo> ArticMine, what good would a patched monerov be if the core team is not behind monerov? Also, suraeNoether, time would also be a factor yes? both have to be done simultaneously
[2018-03-12 12:03:46] <ArticMine> The trouble is that the same keys are used on both chains
[2018-03-12 12:04:27] ⇐ rpitricker0 quit (~rpitricke@unaffiliated/somewhatclueless): Ping timeout: 240 seconds
[2018-03-12 12:04:58] <ArticMine> It will allow those who wish to claim and sell their MoneroV to do so safely.
[2018-03-12 12:05:10] <suraeNoether> oh no the double hash doesn't work unless all previous ring sigs do it that way. bah.
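suraeNoether's double-hash idea can be sketched with toy integer arithmetic standing in for ed25519 points (illustrative only, not real curve code): a fork computing I' = x·Hp(Hp(P)) would produce key images disjoint from Monero's I = x·Hp(P), though, as noted above, it only helps if adopted uniformly by prior ring signatures.

```python
import hashlib

MOD = 2**255 - 19  # toy modulus; real key images live on ed25519

def Hp(data: bytes) -> int:
    # stand-in for hash-to-point: hash down to an integer mod MOD
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % MOD

def key_image(x: int, P: bytes) -> int:
    # Monero-style key image: I = x * Hp(P)
    return (x * Hp(P)) % MOD

def key_image_double(x: int, P: bytes) -> int:
    # hypothetical fork rule from the discussion: I' = x * Hp(Hp(P)),
    # so the fork's images never collide with Monero's
    return (x * Hp(Hp(P).to_bytes(32, "big"))) % MOD

x, P = 31, b"one-time public key"
assert key_image(x, P) != key_image_double(x, P)
```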
[2018-03-12 12:05:31] <ArticMine> Not all but enough to provide a good mix
[2018-03-12 12:05:49] ⇐ msvb-lab quit (~michael@x55b54289.dyn.telefonica.de): Ping timeout: 265 seconds
[2018-03-12 12:07:35] <ArticMine> and that means only pre fork mixins will work
[2018-03-12 12:07:51] <Olufunmilayo> ArticMine, you will then have two versions of monerov competing against each other. I do see the benefit but *shrug*
[2018-03-12 12:08:03] <suraeNoether> okay, well, unless folks have more questions or suggestions, i think our best bet is simply to put out a video that says "don't claim your MoneroV, here is why."
[2018-03-12 12:08:20] <ArticMine> No the patch can be compatible with the MoneroV consensus
[2018-03-12 12:08:24] <suraeNoether> because the math to patch monerov or to protect monero isn't obvious to me right now
[2018-03-12 12:08:53] <ArticMine> I am not sure if there is a solution
[2018-03-12 12:09:10] <iDunk> It's mooo's code to make it use the ringdb, AFAIUI.
[2018-03-12 12:09:16] <Olufunmilayo> suraeNoether, only other thing would be to I guess track monerov tx's to see just how bad it is haha
[2018-03-12 12:09:27] <sarang> We will
[2018-03-12 12:09:45] <suraeNoether> Okay, well
[2018-03-12 12:09:47] <suraeNoether> good meeting everyone
[2018-03-12 12:09:51] <suraeNoether> 1h10 minutes, not too bad
[2018-03-12 12:09:52] <suraeNoether> OH OH
[2018-03-12 12:09:59] <sarang> oh
[2018-03-12 12:10:02] <hyc> oh?
[2018-03-12 12:10:21] <suraeNoether> anyone want to volunteer to make PRs to my github with meeting logs? I'm literally never going to do it
[2018-03-12 12:10:32] * wraithm_ → wraithm
[2018-03-12 12:10:43] <suraeNoether> i intend to every week, but i think i need to practically accept that it's not going to happen. :P
[2018-03-12 12:11:07] ⇐ somewhatclueless quit (~somewhatc@unaffiliated/somewhatclueless): Remote host closed the connection
[2018-03-12 12:11:07] ⇐ nickman70 quit (~nickman70@unaffiliated/somewhatclueless): Remote host closed the connection
[2018-03-12 12:11:45] <sarang> https://www.youtube.com/watch?v=ZXsQAXx_ao0

@@ -0,0 +1,232 @@
[2018-05-21 11:01:06] <sarang> Let's begin the meeting
[2018-05-21 11:01:29] <suraeNoether> sure
[2018-05-21 11:01:33] <rehrar> ye
[2018-05-21 11:01:36] <suraeNoether> i have a list of stuff i want to bring up
[2018-05-21 11:01:45] <suraeNoether> but let's start with the simple stuff
[2018-05-21 11:01:57] <suraeNoether> sarang: updates on BP audits?
[2018-05-21 11:02:01] <sneurlax[m]1> hi all, I had a death in the family on the 14th so I have been travelling this week and have not made any progress on anything, really :(
[2018-05-21 11:02:12] <sneurlax[m]1> I will remind you that I've reached out to ehanoc and will be working with them on python code but yeah, delays delays delays
[2018-05-21 11:02:21] <sarang> Sure, so the audits are underway, will be checking in tomorrow with the groups for updates
[2018-05-21 11:02:24] <sarang> Nothing to report yet
[2018-05-21 11:02:41] <sarang> sneurlax[m]1: sorry to hear that :(
[2018-05-21 11:02:50] <suraeNoether> sneurlax I am sorry to hear about that. :(
[2018-05-21 11:03:04] <ArticMine> What are the expected time lines for each group/
[2018-05-21 11:03:20] <sarang> Looking at mid-July all around
[2018-05-21 11:03:33] <suraeNoether> not bad
[2018-05-21 11:03:50] <sarang> Yeah given that they work on a lot of projects
[2018-05-21 11:03:55] <suraeNoether> sarang: what have you been working on for the past 2 weeks?
[2018-05-21 11:04:05] <sarang> Otherwise, I've written up a noninteractive refund scheme in collaboration w/ Purdue folks
[2018-05-21 11:04:15] <sarang> will be doing a formal journal paper for submission too
[2018-05-21 11:04:16] <suraeNoether> ah yeah, that's on my list of stuff to read today
[2018-05-21 11:04:29] <suraeNoether> nice
[2018-05-21 11:04:58] <sarang> Have been keeping up with some Zcash flaws and plenty of other papers that came through the pipe
[2018-05-21 11:05:10] <sarang> and advanced course prep for the upcoming crypto course
[2018-05-21 11:05:15] <suraeNoether> i've started keeping a monthly "works cited/read" list
[2018-05-21 11:05:39] <suraeNoether> any other updates?
[2018-05-21 11:05:52] <sarang> Nice, I also include my reading list in updates
[2018-05-21 11:06:29] <sarang> Also some good talk in here about BPs and fees
[2018-05-21 11:06:33] <sarang> which needs to be settled soon
[2018-05-21 11:06:46] <sarang> Can't deploy without consensus on the new fee structure
[2018-05-21 11:07:15] <rehrar> this fees thing is not something we can keep saying 'we need to talk about this soon'
[2018-05-21 11:07:19] <rehrar> it needs to get talked about ASAP
[2018-05-21 11:07:20] <suraeNoether> Alright, so when Sarang and I were in london, we started hashing out (heh) a list of things for MRL to tackle in the upcoming year. we've been late on the research roadmap because... well, because there are lots of possible forks in the road, so to speak, and it's not clear which are dead ends, and which the community would like us to pursue. and near the top of the list is BP fee structure
[2018-05-21 11:07:25] <suraeNoether> let's talk about it immediately after the meeting
[2018-05-21 11:08:00] <sarang> rehrar: yes, we need concrete proposals with actual values
[2018-05-21 11:08:11] <ArticMine> Once we have final figures on size and verification efficiency we can finalize on fees / blocksize
[2018-05-21 11:08:17] <suraeNoether> before i get to my big list: has anyone else been working on anything interesting? I don't want to downplay the contributions of other folks
[2018-05-21 11:08:32] <endogenic> well vtnerd has, a little
[2018-05-21 11:08:40] <suraeNoether> oh?
[2018-05-21 11:08:40] <endogenic> he was looking into xmr <> btc swaps
[2018-05-21 11:08:44] <sarang> UkoeHB worked up a great draft of his tech explanation of transactions
[2018-05-21 11:08:52] <endogenic> he came up with a funny method by which you'd have to burn your btc priv key :P
[2018-05-21 11:08:56] <sarang> endogenic: there were all sorts of curve issues tho
[2018-05-21 11:08:57] <endogenic> called it the sony method
[2018-05-21 11:09:01] <endogenic> yeah
[2018-05-21 11:09:05] <UkoeHB> I did?
[2018-05-21 11:09:23] <endogenic> oh yeah didn't koe have something to gift surae? : P
[2018-05-21 11:09:27] <sarang> UkoeHB: yeah, your extension of the magnus stuff, not sure if the latest work was before or after surae's departure
[2018-05-21 11:10:42] <UkoeHB> Ah yes give me abt 10mins
[2018-05-21 11:10:52] <endogenic> oh, one thing from my recent trip was noting a strong interest in ring sig alternatives research
[2018-05-21 11:10:55] <endogenic> fwiw
[2018-05-21 11:11:01] <suraeNoether> kurt magnus contacted me asking me for my comments before I left, and I was confused because I thought UkoeHB *took over* that paper from kurt, but kurt appears to think it's two separate projects now? maybe y'all should chat about that together...
[2018-05-21 11:11:27] <suraeNoether> endogenic: seems like very few folks in the community oppose the idea of replacing ring signatures with something else
[2018-05-21 11:12:05] <endogenic> suraeNoether: no i just meant people are excited about specific alternatives like starks
[2018-05-21 11:12:10] <UkoeHB> Don't know surae kurt is rather curt
[2018-05-21 11:12:14] <endogenic> rather than saying 'oh this is a problem'
[2018-05-21 11:12:29] <suraeNoether> oh he spelled it with a k when he first got on irc *shrug*
[2018-05-21 11:12:58] <suraeNoether> okay, so here's the list of stuff on my general MRL "todo" list:
[2018-05-21 11:15:18] <suraeNoether> 1. BP fee models.
[2018-05-21 11:15:25] <suraeNoether> 2. Transaction graph python library (see sneurlax[m]1 comment above)
[2018-05-21 11:15:32] <suraeNoether> 3. Sarang and I would both like a full technical report on "what happens if the PRNG is terrible in Monero?" A failure model and effects analysis sort of deal.
[2018-05-21 11:15:37] <suraeNoether> 4. Codifying Monero's best practices guidelines into a nice infographic. I believe sgp and rehrar have put some effort into this so far.
[2018-05-21 11:16:03] <suraeNoether> 5. Monero Standards in general. We have lots of source material to start gathering these together, and I would like to get MOST of this done before next month; describing the current state of monero before BPs go live is probably going to be valuable later on.
[2018-05-21 11:16:17] <sarang> 6. Payment channel infrastructure and prereqs
[2018-05-21 11:16:26] <endogenic> ^
[2018-05-21 11:16:31] <moneromooo> Ooooh yes please :)
[2018-05-21 11:17:05] <sarang> We have some good work on 6 so far, but no definite path forward atm
[2018-05-21 11:17:46] <suraeNoether> 7. network simulation library for testing things like consensus algorithms and difficulty metrics. (I am off-and-on working with a friend at University of New Mexico on using population-ecology models to look at mining incentives, etc)
[2018-05-21 11:18:00] <sarang> There's more work on the actual channel implementation that's being worked on w/ Purdue folks, but those drafts aren't released yet
[2018-05-21 11:18:09] <suraeNoether> 8. Ric's zk-s(t,n)ark zidechain proposal
[2018-05-21 11:18:10] <sarang> at their request
[2018-05-21 11:18:53] <suraeNoether> 9. I would like to write a paper on using heuristic analyses for constructing "ground truth" transaction graphs in private cryptocurrencies, and the common pitfalls that crop up from statistical points of view
[2018-05-21 11:19:04] <suraeNoether> (for example, my common sensitivity vs. specificity complaint)
[2018-05-21 11:19:16] <suraeNoether> 10. Churn analysis (ties with 9)
[2018-05-21 11:19:24] <sarang> (and with 2)
[2018-05-21 11:19:48] <sarang> Having the library will give really useful data into the churn models
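The sensitivity-vs-specificity complaint mentioned above can be illustrated with a toy base-rate calculation (all numbers here are hypothetical, not measured data): with an 11-member ring, only about 1 in 11 candidate links is a true spend, so even a heuristic that sounds accurate produces mostly false positives.

```python
# Toy illustration (hypothetical numbers) of the sensitivity/specificity
# pitfall in transaction-graph heuristics: when true spends are rare among
# ring members, a "90% sensitive" heuristic still flags mostly decoys.

def positive_predictive_value(sensitivity, specificity, prevalence):
    """Fraction of flagged links that are truly real spends (Bayes' rule)."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# Heuristic catching 90% of real spends, wrongly flagging 20% of decoys,
# applied where only 1 of 11 ring members is the real spend:
ppv = positive_predictive_value(0.9, 0.8, 1.0 / 11.0)
assert ppv < 0.5  # most flagged links are false positives
```

The point being that a "ground truth" graph built from such flags inherits far more noise than the headline sensitivity figure suggests.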
[2018-05-21 11:19:55] <suraeNoether> 11. I have written here "curve optimizations," but I feel like the ones we intend to use should be included in the monero standards... but it could be helpful for other projects for us to make a technical note about them
[2018-05-21 11:20:07] <suraeNoether> in particular, seeing where we can cram them in elsewhere seems like a good idea
[2018-05-21 11:21:01] <sarang> good ideas all around
[2018-05-21 11:21:07] <suraeNoether> 12. General literature reviews (this is an ongoing thing, but since Sarang and I are constantly reading, we may as well start compiling our thoughts into common documents!). This ranges from zero knowledge proofs, to hash-based signatures, to reviews on pairings-based approaches
[2018-05-21 11:21:21] <rehrar> I may have missed it, but was the multisig paper sent off for review?
[2018-05-21 11:21:44] <sarang> There was a recent flaw in MuSig that IIRC will affect one of suraeNoether's proof strategies
[2018-05-21 11:21:45] <suraeNoether> no: the flaws in the musig paper apply to my security proof too, so we are now... reading... a lot.
[2018-05-21 11:21:49] <sarang> this happened during his absence
[2018-05-21 11:21:58] <suraeNoether> this isn't to say that they were proven insecure
[2018-05-21 11:22:01] <sarang> The MuSig fix is to add another communication round
[2018-05-21 11:22:08] <sarang> it hardens the proofs substantially
[2018-05-21 11:22:14] <suraeNoether> but merely that it's been proven that a proof of the security *cannot exist* under standard assumptions
[2018-05-21 11:22:29] <suraeNoether> subtle point, but important
[2018-05-21 11:22:55] <sarang> Yeah, and it snuck past a lot of people
[2018-05-21 11:23:04] <endogenic> phew big list in any case
[2018-05-21 11:23:08] <suraeNoether> a lot of very smart people
[2018-05-21 11:23:46] <suraeNoether> 13. New elliptic curves. *if we think it is valuable,* and I think it is, I think we should reach out to folks for developing a family of suitable ECs that are compatible with 25519
[2018-05-21 11:24:17] <sarang> Before I leave to do my crypto course, I'll continue the payment work w/ Purdue primarily, as well as get a bunch of educational material onto GitHub
[2018-05-21 11:24:33] <rehrar> you'll be gone for one month sarang?
[2018-05-21 11:24:39] <sarang> 3 weeks
[2018-05-21 11:24:45] <suraeNoether> this is the sort of thing that could be a whole masters thesis, so that alone would be a sufficient project to require funding, I think... and there are dangers in rolling our own crypto, making our own libraries... so this is a bit controversial
[2018-05-21 11:24:46] <rehrar> alright, great
[2018-05-21 11:24:51] <sarang> one week is dumbass training that'll be "multitasking" =p
[2018-05-21 11:25:14] <sarang> I'll also continue the audit coordination work during that time
[2018-05-21 11:25:24] <suraeNoether> great
[2018-05-21 11:25:40] <sarang> Otherwise it's full time teaching (not getting FFS during the month) so I'll have limited availability
[2018-05-21 11:25:54] <rehrar> are they paying you in Dash?
[2018-05-21 11:25:56] <sarang> but it's good outreach and PR
[2018-05-21 11:25:58] <sarang> lol
[2018-05-21 11:26:02] <sarang> fiat, those fools
[2018-05-21 11:26:26] <sarang> I'll assign groups to each of our MRL goals secretly =p
[2018-05-21 11:26:59] <suraeNoether> this huge list varies in urgency depending on the item. i think BP fees, churn analysis + txn graph modeling, and the monero standards are the most important in my mind. almost everything else would be great to tick off the list before another year is up
[2018-05-21 11:27:22] <suraeNoether> but these are *broad MRL goals.*
[2018-05-21 11:27:45] <rehrar> *applause*
[2018-05-21 11:27:47] <suraeNoether> not a checklist of things I personally feel responsible for and need to get done (which is why multisig wasn't included on this list). it's a roadmap list
[2018-05-21 11:27:56] <suraeNoether> so, my question is
[2018-05-21 11:27:58] <sarang> It's my personal desire to see a path set toward payment channels within the next couple of network upgrades
[2018-05-21 11:28:15] <suraeNoether> ah yeah, i think that's super important too
[2018-05-21 11:28:16] <sarang> depending on quality of proposals
[2018-05-21 11:28:16] <UkoeHB> speaking of that
[2018-05-21 11:28:20] <UkoeHB> tadah new chapter
[2018-05-21 11:28:21] <sarang> go on...
[2018-05-21 11:28:21] <UkoeHB> https://www.pdf-archive.com/2018/05/21/zero-to-monero-first-edition-v0-14/zero-to-monero-first-edition-v0-14.pdf
[2018-05-21 11:28:32] <suraeNoether> good! i will read that today too
[2018-05-21 11:28:42] <sarang> multisig!
[2018-05-21 11:28:45] <sarang> excellent UkoeHB
[2018-05-21 11:28:53] <sarang> I will also review
[2018-05-21 11:29:33] <suraeNoether> SO! Does anyone want to add anything to the MRL broad goals for the 2018/2019 year?
[2018-05-21 11:29:35] <rehrar> wow, that looks comprehensive.
[2018-05-21 11:29:41] <sarang> Any new proposals contained in that UkoeHB, or just descriptions?
[2018-05-21 11:29:49] <endogenic> suraeNoether: is that list ordered by priority or just generally?
[2018-05-21 11:30:02] <UkoeHB> m-of-n and details on how to nest multisigs inside each other
[2018-05-21 11:30:10] <sarang> great
[2018-05-21 11:30:14] <UkoeHB> some conventions
[2018-05-21 11:30:35] <suraeNoether> endogenic: it's very loosely ordered by the order that sarang and I thought of them after meeting philkode at green man in london. :D
[2018-05-21 11:30:36] <rehrar> I think we're excited about BPs as an on-chain optimization, and we're looking for off-chain optimizations, but I think keeping a casual look at other opportunities for on-chain optimization is quite important. Not the least reason for doing so is to help quell the BTC/BCH debate from within our halls.
[2018-05-21 11:30:37] <UkoeHB> and a walkthrough of all implications for monero transactions
[2018-05-21 11:30:44] <endogenic> suraeNoether: kk
[2018-05-21 11:31:02] <sarang> rehrar: totally, but optimizations to the level people _really_ want are not immediately forthcoming
[2018-05-21 11:31:24] <suraeNoether> rehrar: one of the items on my list is "sublinear ring signatures," but because of this: we need to write a technical note to the community on why we don't intend on pursuing *that route* of on-chain optimizations.
[2018-05-21 11:31:35] <suraeNoether> so add that as 14
[2018-05-21 11:31:54] <suraeNoether> "14. explain why we don't have logarithmic ring signatures, and investigate other on-chain optimizations."
[2018-05-21 11:31:55] <sarang> 14 is pretty straightforward to do
[2018-05-21 11:32:00] <rehrar> If people see that we are pursuing both on and off chain optimizations it will hopefully keep the braindead squealing to a minimum
[2018-05-21 11:32:05] <suraeNoether> well half of it is easy. :D
[2018-05-21 11:32:21] <suraeNoether> thanks for that addition, rehrar, I agree
[2018-05-21 11:32:39] <suraeNoether> anyone else have any suggestions for the MRL roadmap for the next year?
[2018-05-21 11:32:41] <rehrar> sorry, I obviously don't have high opinions of people who adamantly hold to one side or the other of the BTC/BCH debate :P
[2018-05-21 11:33:25] <rehrar> 15. Stupid contracts
[2018-05-21 11:33:29] <suraeNoether> ha
[2018-05-21 11:33:35] <sarang> Well having payment channel infrastructure available and understood will be a Good Thing even without a definite intent to move to large off-chain operations
[2018-05-21 11:33:46] <suraeNoether> maybe the slogan of MRL should be something like "Don't be intellectually dishonest." In line with google's now-defunct code of conduct
[2018-05-21 11:33:47] <UkoeHB> oh and a one-key LSAG for generating shared key images with zero-trust
[2018-05-21 11:33:49] <rehrar> you'd think so wouldn't you sarang?
[2018-05-21 11:34:02] <sarang> I would
[2018-05-21 11:34:20] <rehrar> if you'd kept up with the debates, you'd see that even good ideas, if proposed by 'the other side', become evil ideas
[2018-05-21 11:34:36] <sarang> MRL: ruining everything since 20xx
[2018-05-21 11:34:39] <rehrar> "a social/technical/something else attack"
[2018-05-21 11:34:49] <rehrar> that's going on the t-shirt
[2018-05-21 11:34:57] <suraeNoether> UkoeHB: what page should i read that on, and are you comfortable with us using a lot of your document for the monero standards? (i've asked before but I want to verify)
[2018-05-21 11:35:55] <rehrar> suraeNoether and/or sarang can these MRL roadmap goals be sent to me ASAP. I'd like to make a little simple graphic to share with the community.
[2018-05-21 11:36:13] <sarang> Sure we'll work them up into something more formal on GitHub
[2018-05-21 11:36:29] <rehrar> as well, anything that has been completed in the past year should go on the roadmap section of the website
[2018-05-21 11:36:35] <sarang> agreed
[2018-05-21 11:36:40] <rehrar> which desperately needs updating :P
[2018-05-21 11:36:58] <rehrar> https://getmonero.org/resources/roadmap/
[2018-05-21 11:36:59] <sarang> I'll need to run in about 5-10 min, btw
[2018-05-21 11:37:01] <rehrar> we still in 2017
[2018-05-21 11:37:26] <sarang> suraeNoether: can we talk formal roadmap in about an hour?
[2018-05-21 11:37:53] <suraeNoether> okay, so now that the roadmap discussion is out of the way: I plan on reading about BIP47 today for endogenic, reading sarang's dual output paper with the purdue guys, and reading zero to monero again... and then after I've done those three finite tasks, I'll start reading the criticisms of the musig proof and continuing with multisig. and then I'm going to write up my FFS for June-July-August because, like an idiot, i'm off the usual fiscal year again :(
[2018-05-21 11:37:56] <suraeNoether> suraeNoether:
[2018-05-21 11:37:57] <suraeNoether> yes
[2018-05-21 11:38:00] <suraeNoether> sarang* yes
[2018-05-21 11:38:12] <suraeNoether> when you get back we'll talk about fees + roadmap
[2018-05-21 11:38:14] <sarang> suraeNoether: sarang
[2018-05-21 11:38:20] <sarang> sarang: suraeNoether
[2018-05-21 11:38:37] <suraeNoether> heh
[2018-05-21 11:38:42] <sarang> anything else before I head out? (parking metre is dumb)
[2018-05-21 11:38:49] <suraeNoether> go fix your meter bruh
[2018-05-21 11:38:51] <rehrar> serious request here
[2018-05-21 11:38:54] <suraeNoether> also move to a place where you don't have meters
[2018-05-21 11:38:58] <rehrar> can I get profile shots of both suraeNoether and sarang
[2018-05-21 11:39:00] <sarang> ikr
[2018-05-21 11:39:06] <rehrar> top of head to upper chest
[2018-05-21 11:39:12] <ArticMine> On fees, I do have preliminary proposal ideas
[2018-05-21 11:39:22] <suraeNoether> rehrar: are you making us those fake passports you promised? :D
[2018-05-21 11:39:22] <rehrar> I'll talk with both of you about it later
[2018-05-21 11:39:44] <ArticMine> When is later?
[2018-05-21 11:39:44] <suraeNoether> ArticMine: do you have them written up, by chance, or is it going to be a platonic dialogue to talk about them?
[2018-05-21 11:40:09] <ArticMine> I have not written it up yet but it is coming
[2018-05-21 11:40:17] <suraeNoether> ArticMine: he meant about the pictures. we can talk about fees as soon as sarang gets back
[2018-05-21 11:40:22] <suraeNoether> i want him to be able to ask questions
[2018-05-21 11:40:25] <suraeNoether> like, live
[2018-05-21 11:40:27] <rehrar> ArticMine: by later I mean the profile shots
[2018-05-21 11:40:37] <ArticMine> but one question that came up is verification times
[2018-05-21 11:40:58] <ArticMine> This was a very valid point raised by smooth
[2018-05-21 11:41:33] <moneromooo> performance_tests show you verification times for various cases. The only thing that I know will change it is Pippenger, if it gets coded.
[2018-05-21 11:41:43] <suraeNoether> ArticMine: yeah, i wanted to do fees proportional to both expected ver time and space, but i feel like someone shot me down when i suggested ms-kB metric
[2018-05-21 11:42:01] <suraeNoether> but i don't recall
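The "ms-kB metric" idea can be sketched as a combined transaction weight: charge fees proportional to both size and expected verification time. All constants and names below are made up for illustration; this is not a concrete fee proposal.

```python
# Hypothetical sketch of a fee weight proportional to both expected
# verification time (ms) and size (kB), as floated above. The reference
# conversion rate and fee rate are invented for illustration only.

def tx_weight(size_kb, verify_ms, ms_per_kb_ref=10.0):
    """Combined weight: size plus verification time in kB-equivalents."""
    return size_kb + verify_ms / ms_per_kb_ref

def fee(size_kb, verify_ms, fee_per_weight=0.0004):
    """Fee proportional to the combined weight."""
    return tx_weight(size_kb, verify_ms) * fee_per_weight

# A small but slow-to-verify tx can outweigh a larger, fast one:
slow = fee(size_kb=2.0, verify_ms=60.0)   # weight 8.0
fast = fee(size_kb=5.0, verify_ms=10.0)   # weight 6.0
assert slow > fast
```

The design question raised next in the discussion is exactly what `verify_ms` should be: measured wall-clock time is hardware-dependent, whereas an operation-count lower bound is not.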
[2018-05-21 11:42:39] <ArticMine> It's more an understanding of what verification times will be with current tech
[2018-05-21 11:43:19] → spaced0ut joined (~spaced0ut@unaffiliated/spaced0ut)
[2018-05-21 11:43:27] <UkoeHB> surae the table of contents should have everything. i don't recall you asking, but sure do whatever you want with it :)
[2018-05-21 11:43:32] <suraeNoether> ah, yeah, we'll have to estimate, and it's hardware dependent but the info-theoretic lower bound on the number of operations isn't, and we can use that instead
[2018-05-21 11:43:48] <ArticMine> and this will require the optimizations
[2018-05-21 11:44:20] <suraeNoether> UkoeHB: if you seek peer-reviewed publication, we'll probably have to make sure that rights are reserved or blah blah so the monero project doesn't get sued by the publication company for copy-pasting a document you helped write while volunteering at MRL. :P
[2018-05-21 11:44:46] <ArticMine> That is where copyleft comes in
[2018-05-21 11:44:49] <suraeNoether> ArticMine: well, the lower bound will be implementation-independent. like, "we know we have to check *at least* this many group elements, and therefore... " sort of argument
[2018-05-21 11:45:23] <UkoeHB> is there any benefit to getting it peer reviewed?
[2018-05-21 11:45:33] <suraeNoether> Last thing I wanted to mention as part of the meeting is MAGIC, the non-profit that sarang, myself, rehrar, sgp_[m], and my wife are starting. we are currently waiting on communications from our lawyer and CPA re: filing our 1023. my wife is on the phone with him this morning taking notes, and we'll probably make a more formal update later today or at least before the end of the week. the main trouble has been finding CPAs and attorneys with sufficient interest to learn about cryptocurrency law, etc
[2018-05-21 11:45:53] <ArticMine> Yes but is that a valid basis for pricing vs size, or can it be handled instead with a clawback / weight idea
[2018-05-21 11:45:53] <suraeNoether> UkoeHB: eh, i merely thought that was your intention for the document.
[2018-05-21 11:45:57] <rehrar> interesting indeed
[2018-05-21 11:45:58] <endogenic> get scooby on your board man
[2018-05-21 11:46:13] <suraeNoether> why scooby? is he a lawyer?
[2018-05-21 11:46:20] <suraeNoether> paging scoobybejesus
[2018-05-21 11:46:29] <endogenic> well you said CPA
[2018-05-21 11:46:37] <rehrar> I miss sarang already
[2018-05-21 11:46:41] <endogenic> not to doxx him..
[2018-05-21 11:46:45] <scoobybejesus> :D
[2018-05-21 11:47:15] <endogenic> lol scooby you dont mind me volunteering your life do you? :P
[2018-05-21 11:47:33] <endogenic> but anyway surae he may be able to point you in some direction
[2018-05-21 11:47:40] <suraeNoether> that would be helpful
[2018-05-21 11:47:56] <suraeNoether> right now it's our attorney calling all his CPA friends and getting shot down it looks like. :P but we will see
[2018-05-21 11:48:08] <UkoeHB> It's to be educational more than anything
[2018-05-21 11:48:51] <scoobybejesus> i snoop around the lounge, so i'll at least be sure to put in my two cents when appropriate
[2018-05-21 11:48:52] <UkoeHB> Learning crypto and monero is haphazard and frustrating with no formal approach
[2018-05-21 11:49:25] <rehrar> UkoeHB: people can only teach you about "hodling" nowadays
[2018-05-21 11:49:38] <suraeNoether> cool thanks scoobybejesus
[2018-05-21 11:49:51] <suraeNoether> UkoeHB: agreed, and you and me and sarang should chat about textbooks.
[2018-05-21 11:50:12] <scoobybejesus> i hesitate to provide too much firm advice in this crypto wild west we're in, but i can sure help with understanding context and the like
[2018-05-21 11:50:24] <suraeNoether> cool
[2018-05-21 11:52:02] <suraeNoether> Alright, anything else anyone wants to bring up for MRL? especially anyone who feels they have helped fund MRL and has something they want to say?
[2018-05-21 11:53:23] <rehrar> Nah.
[2018-05-21 11:53:45] <suraeNoether> okay, well, </meeting>

[2018-03-26 10:58:00] <suraeNoether> good morning everyone. meeting in a few minutes. fluffypony knaccc luigi1111w sarang andytoshi chachasmooth dEBRUYNE endogenic gingeropolous hyc JollyMort[m] jwinterm kenshi84 medusa_ moneromooo MoroccanMalinois nioc pigeons rehrar sgp_[m] smooth stoffu TheCharlatan vtnerd waxwing
[2018-03-26 10:59:44] <rehrar> http://weknowmemes.com/generator/uploads/generated/g1386523484343400473.jpg
[2018-03-26 11:00:09] <hyc> morning
[2018-03-26 11:00:33] <endogenic> o/ hyc
[2018-03-26 11:00:47] → rex4539 joined (~textual@ppp-2-87-183-80.home.otenet.gr)
[2018-03-26 11:01:02] <ArticMine> hi
[2018-03-26 11:01:05] <andytoshi> hi
[2018-03-26 11:01:22] <scoobybejesus> hi
[2018-03-26 11:02:58] <suraeNoether> Sarang is apparently en route from an airport and is not expected to make it for the meeting. So today i'll just babble a bit
[2018-03-26 11:03:02] <suraeNoether> and answer questions
[2018-03-26 11:03:30] <rehrar> Remind me, was he presenting at that Blockchain conference?
[2018-03-26 11:03:38] <suraeNoether> yes, that's why he's en route from the airport
[2018-03-26 11:03:49] <suraeNoether> he took it upon himself to disabuse some folks of some certain notions about hashgraph
[2018-03-26 11:04:14] <suraeNoether> which I think is neat
[2018-03-26 11:04:54] <suraeNoether> specifically, he's been reading a lot about graph-based currencies, and someone gave a rather misleading presentation, but Sarang's presentation (I believe) preceded it and it was an educational moment
[2018-03-26 11:05:33] <suraeNoether> but I shouldn't speak for him, I wasn't there. the conference organizers flew him out to give a presentation on behalf of MRL and I have confidence he did a great job representing us
[2018-03-26 11:05:58] <rehrar> Will it be posted online?
[2018-03-26 11:06:58] <suraeNoether> he can answer that later today. I don't know.
[2018-03-26 11:07:11] <rehrar> ok, thanks
[2018-03-26 11:07:19] <suraeNoether> So, before we proceed, does anyone have any other general questions for MRL?
[2018-03-26 11:09:05] <sgp_[m]> Sorry I'm here but mostly distracted by class. Looking forward to hopefully viewing the presentation online
[2018-03-26 11:10:58] <suraeNoether> ok, neato burrito. So, basically this week I've 1) been putting some copy-editing changes into the multisig paper like spelling and references 2) working on models of the spend-time distributions vs. ring mixin selection distributions, and 3) while driving between albuquerque and denver, I think I came up with a novel ECC signature scheme from one-way functions (staring into the desert sun), but I'm not putting a lot of effort into that until I have more of a handle on spend-time distributions
[2018-03-26 11:11:58] <suraeNoether> I've also 4) been building the MRL Research Roadmap for 2018. I need to discuss with sarang, but I think we'll be putting that out mid-May, because we want to have a complete look at what's going on
[2018-03-26 11:14:10] <suraeNoether> uhm, also I've spent an enormous amount of time this week on a certain project for MRL related to churning and the EAE scenario. details to come later on
[2018-03-26 11:14:20] <hyc> sounds cool
[2018-03-26 11:14:30] <rehrar> if hyc thinks it's cool, then it's cool
[2018-03-26 11:14:38] <suraeNoether> hyc and I have been chatting about an asic-unfriendly POW expansion, also
[2018-03-26 11:14:40] <sgp_[m]> I'm highly looking forward to seeing your work with EAE
[2018-03-26 11:15:12] <hyc> yes and I'm now digging back into the bulletproofs paper to try to get more solid understanding
[2018-03-26 11:16:08] <suraeNoether> namely, if instead of a POW game like: find nonce x such that H(block || x) * difficulty < target.... we can run a POW game like: find a nonce x such that, for a random bit of javascript J(x) that is loop-free, H(block || J(x))*difficulty < target
[2018-03-26 11:17:08] <suraeNoether> this was the idea hyc originally brought to my attention, but verification requires executing the code, so I was thinking instead it could be a random arithmetic circuit instead. then you can present bulletproofs that you know the nonce x such that H(block || AC(x))*difficulty < target efficiently
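The game described above can be sketched in a toy form (everything here is a simplification for illustration: the "circuit" is just a pseudorandom chain of add/multiply ops over a small prime field, derived deterministically from the block, and the difficulty target is made artificially easy):

```python
import hashlib
import random

P = 2**61 - 1  # toy prime field for the arithmetic circuit

def make_circuit(block, n_ops=16):
    """Derive a pseudorandom, loop-free arithmetic circuit from the block.
    Toy sketch: a fixed-length chain of add/mul-by-constant ops mod P."""
    seed = hashlib.sha256(block).digest()
    rng = random.Random(seed)
    ops = [(rng.choice(['add', 'mul']), rng.randrange(1, P)) for _ in range(n_ops)]

    def ac(x):
        acc = x % P
        for op, k in ops:
            acc = (acc + k) % P if op == 'add' else (acc * k) % P
        return acc
    return ac

def mine(block, target, max_tries=200000):
    """Find a nonce x such that H(block || AC(x)) < target."""
    ac = make_circuit(block)
    for x in range(max_tries):
        h = hashlib.sha256(block + ac(x).to_bytes(8, 'big')).digest()
        if int.from_bytes(h, 'big') < target:
            return x
    return None

def verify(block, x, target):
    """Verification here re-executes the circuit; the suggestion above is
    that a bulletproof could replace this re-execution."""
    ac = make_circuit(block)
    h = hashlib.sha256(block + ac(x).to_bytes(8, 'big')).digest()
    return int.from_bytes(h, 'big') < target

block = b'example block header'
target = 2**248  # easy toy difficulty, ~1 in 256 hashes
x = mine(block, target)
assert x is not None and verify(block, x, target)
```

Since the circuit changes with every block, a fixed-function ASIC cannot bake it in, which is the intuition the discussion below unpacks.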
[2018-03-26 11:17:27] <rehrar> oh yeah, I remember you guys discussing something like that. Just to clarify for me cuz it was a bit confusing at the time. The idea is that CPUs and GPUs compile code better than ASICs would, correct?
[2018-03-26 11:17:39] <hyc> compile and execute
[2018-03-26 11:18:13] <suraeNoether> the idea is that if the code is random, then an asic will presumably not even be able to compile the code, let alone execute it, but a cpu is built to deal with arbitrary code
[2018-03-26 11:18:54] <suraeNoether> maybe this is a bad analogy, but I think of an ASIC as a big manufacturing factory, fully automated. it makes lemon cakes. the random code you just spit out asked for a rotisserie chicken
[2018-03-26 11:19:04] <rehrar> making it so that an ASIC would have to be built with a CPU, which defeats the purpose because might as well have a computer at that point, right?
[2018-03-26 11:19:21] <hyc> that's the general idea yes
[2018-03-26 11:19:31] <rehrar> great, I understand now. Thank you for explaining. :)
[2018-03-26 11:19:35] <suraeNoether> yeah, it shifts the bottleneck away from the highly asic'able hash to finding the nonce for the hash, kinda
[2018-03-26 11:19:46] <suraeNoether> which is quite clever
[2018-03-26 11:20:10] <endogenic> hack the planet!
[2018-03-26 11:20:14] <rehrar> if this idea pans out, we can even do some looking into seeing if the random stuff can do something useful as well?
[2018-03-26 11:20:23] <hyc> useful?
[2018-03-26 11:20:41] <rehrar> never mind, this is something I know too little about. Sorry. Plz continue.
[2018-03-26 11:20:45] <hyc> the code must be highly random and unpredictable
[2018-03-26 11:20:55] <hyc> if it does something useful, that can be ASIC'd
[2018-03-26 11:22:07] <endogenic> rehrar: use the heat to warm your chickens
[2018-03-26 11:22:17] <hyc> there ya go
[2018-03-26 11:23:30] <rehrar> can the chickens consume the arbitrary code?
[2018-03-26 11:23:38] <ArticMine> The random code can provide space heating and in many parts of the world that is useful
[2018-03-26 11:24:01] <suraeNoether> does anyone have any other questions? i can sketch out my new signature scheme if folks are curious, but it'd be more of an algebra discussion. :D
[2018-03-26 11:24:06] — suraeNoether waggles eyebrows
[2018-03-26 11:24:14] <ArticMine> Sure
[2018-03-26 11:25:04] <suraeNoether> Cool. So, definition: a cartesian square of groups is a set of four groups and four group homomorphisms arranged in a square satisfying *one weird property*
[2018-03-26 11:25:18] ⇐ KnifeOfPi_ quit (uid257314@gateway/web/irccloud.com/x-jowmyyhckogdqhvs): Quit: Connection closed for inactivity
[2018-03-26 11:26:03] <suraeNoether> https://www.irccloud.com/pastebin/XXZjHHp0/
[2018-03-26 11:26:12] <suraeNoether> So the square looks like this
[2018-03-26 11:26:35] <suraeNoether> and the property is this: if group elements from B and C end up *at the same element* in D, then they must have *come from* the same element in A
[2018-03-26 11:26:55] <endogenic> scientists hate it!
[2018-03-26 11:27:30] <suraeNoether> denoting the top map f, the left map g, the right map h, the bottom map j, this means: if there exist some b in B and c in C such that j(c) = h(b), then there exists some a in A such that b = f(a) and c = g(a)
[2018-03-26 11:28:08] <suraeNoether> so I'm going to set A to be my private key group Zq, and D to be my public key group G
[2018-03-26 11:28:50] <suraeNoether> and i'll just assume the middle groups B and C are also equal to my public key group
[2018-03-26 11:29:11] → thrmo joined (~thrmo@unaffiliated/thrmo)
[2018-03-26 11:29:46] <suraeNoether> then a message M can give me a signature this way: from M, build a one-way map from Zq (private keys) to G (signatures) called SIGN and a one-way map from G (signatures) to G (public keys) called VER
[2018-03-26 11:30:20] <suraeNoether> to sign the message, I evaluate SIGN at my private key x and get a group element, my signature. To validate this came from me, I evaluate VER at my signature and check that the result is my public key, VER(SIGN(x)) = X
[2018-03-26 11:31:05] <suraeNoether> so my signature is SIGN(x) and the function VER
[2018-03-26 11:31:20] <suraeNoether> each message M has a different pair of one-way functions SIGN and VER
[2018-03-26 11:32:02] <suraeNoether> to forge this, I need to find a group element S such that VER(S) = VER(SIGN(x)) for someone's honestly computed SIGN(x), but that requires breaking the one-way-ness of all the arrows in my square
[2018-03-26 11:32:23] <suraeNoether> *this is all great in theory, but i have no implementation yet. :P*
[2018-03-26 11:33:16] <suraeNoether> oh, i missed a word: in the definition of the cartesian square, the diagram has to be commutative. so if I traverse from A to D along one path (through B), I get the same result as if I had traversed the other path (through C)
[2018-03-26 11:33:20] <suraeNoether> and that is *critical*
[2018-03-26 11:34:38] <suraeNoether> so, to construct an implementation, I need a way to map from message space to the space of one-way group homomorphisms to get SIGN and VER, and then I need to mod out by the ideal generated by all the functions that don't satisfy the cartesian property
[2018-03-26 11:35:54] <suraeNoether> more recently, cartesian squares (mid-late 20th century terminology) have been called "pullback diagrams," and I haven't found any descriptions in the literature of EC-based digital signatures based on them
[2018-03-26 11:36:45] <suraeNoether> that doesn't mean that this is a novel signature scheme, only that I haven't found any old references to them. I'm emailing around asking folks, and if anyone comes across anything, please let me know
[2018-03-26 11:37:46] <suraeNoether> to forge this... <--- also, i need to find a message M such that VER is the one-way function derived from M to compute a forgery
[2018-03-26 11:37:59] <suraeNoether> okay, abstract algebra/category theory lecture done. :P hehe
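The square structure from the lecture can be modelled in a few lines of Python. This toy uses Z_q under addition with each homomorphism being "multiply by a constant mod q"; such maps are invertible, hence NOT one-way, so this only demonstrates the algebra (commutativity, the pullback property, and the VER(SIGN(x)) = X check), not a secure scheme:

```python
# Toy model of the cartesian (pullback) square. All four groups are Z_Q
# under addition; each arrow multiplies by a constant mod Q. Invertible,
# so NOT one-way: structural illustration only, not a signature scheme.

Q = 101  # small prime; all four groups are Z_Q

#            f
#        A -----> B
#        |        |
#      g |        | h       commutativity: h.f = j.g  (mod Q)
#        v        v
#        C -----> D
#            j
F, G_, H, J = 3, 5, 5, 3         # h*f = 15 = j*g mod Q: square commutes
assert (H * F) % Q == (J * G_) % Q

def sign(x):           # SIGN: A -> B, the top map f
    return (F * x) % Q

def ver(s):            # VER: B -> D, the right map h
    return (H * s) % Q

def public_key(x):     # the composite map A -> D
    return (H * F * x) % Q

x = 42                          # private key
X = public_key(x)               # public key
assert ver(sign(x)) == X        # verification passes

# Pullback property (holds here because the maps are bijective): if
# j(c) == h(b), then b and c came from a common a in A.
b, c = sign(x), (G_ * x) % Q
assert (J * c) % Q == (H * b) % Q
```

In the real proposal the arrows would be one-way group homomorphisms derived from the message, which is exactly the part with no implementation yet.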
[2018-03-26 11:38:13] <hyc> whew ;)
[2018-03-26 11:38:20] <suraeNoether> ikr what a blowhard
[2018-03-26 11:38:50] <hyc> I think I missed a part, can you explain again the bit after "now listen carefully" ?
[2018-03-26 11:39:45] <suraeNoether> "i think a few pages back, you missed a negative and the error propagates. I would have said something, but you were so excited about proving P=NP"
[2018-03-26 11:40:05] <hyc> lol
[2018-03-26 11:40:55] <suraeNoether> does anyone have any questions for MRL? I believe sarang is going to be posting another FFS to fund the third audit later today or something?
[2018-03-26 11:41:48] <rehrar> how much extra is going to be needed?
[2018-03-26 11:41:58] <rehrar> and did we sign off on anyone getting started already?
[2018-03-26 11:42:58] <suraeNoether> rehrar I don't know, and I don't know. i believe nioc was encouraging us to not worry about getting it funded and to just post it so we can get the process moving, but I don't want to speak for him.
[2018-03-26 11:43:13] <rehrar> got it
[2018-03-26 11:43:15] <suraeNoether> and sarang will be back later today to talk about that
[2018-03-26 11:43:58] <suraeNoether> days like today, i want to hire a suresh noether
[2018-03-26 11:45:01] <suraeNoether> okay, next meeting, I want to talk about planning the first monero conference, and planning travel for sarang and i to other conferences between now and then
[2018-03-26 11:45:36] <suraeNoether> i'm actually attending a bitcoin/blockchain event on april 25 in denver at one of the venues i'm looking at for the monero conference
[2018-03-26 11:45:50] <suraeNoether> and I have a few meetings next week about it too
[2018-03-26 11:46:04] <suraeNoether> other than that, I got nothing left to chat about
[2018-03-26 11:46:36] <nioc> rehrar: I believe Bunz and QuarksLab have already been signed
[2018-03-26 11:47:39] <suraeNoether> i also want to chat next week about how is everyone satisfied with MRL. I want to gauge the community on direction, depth, breadth, leadership, funding models/goals etc.
[2018-03-26 11:47:39] <rehrar> cool, thanks nioc
[2018-03-26 11:47:51] <suraeNoether> so, with that, i want folks to think about what you would say to me if you had me face-to-face. :D
[2018-03-26 11:47:59] <rehrar> oy, I need to talk with the two of you fairly soon. It's already Revuo time again.
[2018-03-26 11:48:30] <suraeNoether> rehrar i believe i'm dragging sarang out to denver for that blockchain event. make it up here around that time and maybe we can make it a MAGIC board member meeting + revuo intervuo.
[2018-03-26 11:48:43] <suraeNoether> we'll drag mike from the moneromonitor by. :P it'll be historic~
[2018-03-26 11:48:59] <suraeNoether> </meeting>

Binary file not shown.

View file

@ -3,6 +3,8 @@
Sublinear-sized ring signatures without trusted
set-ups or bilinear pairings. Summarized for
Monero Research Lab by B Goodell
----------------------------
WARNING: Many errors such wow
We describe sublinear-sized ring signatures for use in cryptocurrency. This
scheme was first sent to MRL by Ruffing and co-authors. These results use

View file

@ -0,0 +1,7 @@
@misc{bp,
author = {Benedikt B\"unz and Jonathan Bootle and Dan Boneh and Andrew Poelstra and Pieter Wuille and Greg Maxwell},
title = {Bulletproofs: Efficient Range Proofs for Confidential Transactions},
howpublished = {Cryptology ePrint Archive, Report 2017/1066},
year = {2017},
note = {\url{https://eprint.iacr.org/2017/1066}},
}

View file

@ -0,0 +1,105 @@
\documentclass{mrl}
\title{Application of Bulletproofs in Monero Transactions}
\authors{Sarang Noether\footnote{\texttt{sarang.noether@protonmail.com}}}
\affiliations{Monero Research Lab}
\date{\today}
\type{TECHNICAL NOTE}
\ident{MRL-XXXX}
\begin{document}
\begin{abstract}
This technical note briefly describes the proposed application of Bulletproofs \cite{bp} in Monero. The proofs are used as a drop-in replacement for the existing Borromean bitwise non-interactive zero-knowledge range proofs used to show that a committed amount is in a specified range. Bulletproofs reduce both proof size and verification time, and provide a straightforward method for batch verification of proofs from multiple transactions. We describe our implementation, noting specific areas of optimization from the original paper.
\end{abstract}
\section{Introduction}
The implementation of confidential transaction amounts in Monero is accomplished using homomorphic commitments. Each input and output amount, including fees, is represented by a commitment of the form $vG + \mu H$, where $G$ and $H$ are elliptic curve generators, $v$ is the amount, and $\mu$ is a mask. Without knowledge of the commitment opening, a third party cannot determine the amount; however, it is trivial for the third party to convince itself that a transaction balances (that is, that the difference between input and output amounts is zero). The homomorphic property of the commitments is such that this difference in commitments must itself be a commitment to zero.
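As a toy illustration of this balance check, the following sketch uses a multiplicative group modulo a small prime in place of Monero's Ed25519 curve group; the "generators," amounts, and masks are arbitrary choices for the demo:

```python
# Toy Pedersen-style commitments in a multiplicative group mod a small prime.
# Illustration only: Monero commits over the Ed25519 curve group, and these
# "generators" g, h are arbitrary demo values.
p = 101
g, h = 2, 3

def commit(v, mu):
    # commitment to amount v with mask mu: g^v * h^mu
    # (written additively in the text as vG + mu*H)
    return (pow(g, v, p) * pow(h, mu, p)) % p

# one input of 7 splits into outputs 3 and 4; masks chosen so they cancel (2 + 3 = 5)
c_in = commit(7, 5)
c_out = (commit(3, 2) * commit(4, 3)) % p
# homomorphic balance check: the difference (here, quotient) commits to zero
assert c_in == c_out
```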
However, this is not sufficient to ensure a correct and safe transaction model. An adversary could easily construct a combination of positive and negative outputs such that the transaction amounts balance. A third party would still verify that the transaction balances, though the adversary has effectively printed free money in an undetected fashion. To combat this, we require that each amount commitment come equipped with a \textit{range proof} that convinces a verifier that the corresponding output is both positive and does not risk an overflow by being too large. The range proof scheme must be non-interactive and zero-knowledge; that is, the verifier does not need to communicate with the prover once the proof is generated, and the proof itself reveals no information about the amount except that it is within the stated range.
The current range proof style used in Monero confidential transactions is a \textit{Borromean bitwise} range proof. To generate a proof that a commitment $C \equiv vG + \mu H$ represents an amount $v \in [0,2^n-1]$ for some bit length $n > 0$ (in Monero $n = 64$), the prover generates separate commitments for each bit. The prover then generates a Borromean ring signature showing that each commitment is to either $0$ or $2^i$ for appropriate $i$. Any third-party verifier can then convince itself that the bit commitments reconstruct the committed amount, that each commitment is to either $0$ or $2^i$, and therefore that the committed amount lies in the correct range.
However, this comes at a cost. Borromean bitwise proofs scale linearly in size with the number of bits in the range. Further, if multiple outputs are used in a transaction, a separate proof is required for each. Each proof is large, taking up $6.2$ kB of space.
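The bit decomposition underlying the Borromean approach can be sketched as follows; the ring signatures themselves are omitted, and this only shows why per-bit values of $0$ or $2^i$ bound the amount:

```python
# Decompose an amount into per-bit values, each either 0 or 2**i.
# A Borromean bitwise range proof commits to each of these and proves each
# commitment is to 0 or 2**i; summing them reconstructs v, which forces
# v into [0, 2**n - 1].
n = 64
v = 123456789

bit_values = [((v >> i) & 1) << i for i in range(n)]
assert all(b in (0, 1 << i) for i, b in enumerate(bit_values))
assert sum(bit_values) == v
```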
\section{Bulletproofs}
Bulletproofs are a recent general non-interactive zero-knowledge proof construction \cite{bp}. Using a novel inner product argument, they can be used in a variety of applications ranging from range proofs (pun intended) to verifiable shuffles and even proofs of general arithmetic circuit evaluation. For our purposes, they can accomplish the same goal as Borromean bitwise range proofs: convincing a verifier that a committed amount is within a claimed range.
The details of Bulletproof construction, both for prover and verifier, are discussed in the paper \cite{bp}, so we will not duplicate them here. However, several definitions are useful when discussing the scaling. A standard Bulletproof that shows an amount is within the $n$-bit range $[0,2^n-1]$ is called a \textit{single-output proof} or a \textit{1-proof}. However, it is possible for a prover to construct a single proof showing that $m$ separate amounts (with separate random masks) each lie within the range $[0,2^n-1]$, where $m$ is a power of two. Such a proof is called an \textit{aggregate proof} or, more precisely, an $m$\textit{-proof}. The scheme is constructed in such a way that a single-output proof is trivially an $m$-proof with $m=1$ (which simplifies the code). It is important to note that the construction of an aggregate proof requires that the prover know each amount and mask; this means that while it is useful for all outputs in a transaction to be contained within a single aggregate proof for space savings, it is not possible for a third party to take existing proofs and construct an aggregate proof, either within a single transaction or between different transactions.
The size scaling benefits of Bulletproofs occur at two levels:
\begin{enumerate}
\item \textbf{Bit length of range}. The size of a Bulletproof increases logarithmically with the number of bits in the range. In bitwise range proofs, the proof size increased linearly with the number of bits.
\item \textbf{Number of amounts in aggregate proof}. The size of a Bulletproof increases logarithmically with the number of amounts included in a single aggregate proof. In bitwise range proofs, the total proof size increased linearly with the number of amounts (since a separate proof was needed for each amount).
\end{enumerate}
We discuss efficiency in more detail below.
There is a separate scaling argument that is useful. A new node that comes online will receive many $m$-proofs, at least one per post-Bulletproof transaction in the blockchain. Instead of verifying each of the proofs separately, the node can perform a \textit{batch verification} of as many proofs at a time as it wishes. As described below, this process requires that certain portions of each proof be verified separately, but allows for the remaining parts of the proofs to be batched and verified together. The resulting verification time is linear in the number of proofs, but with a significantly lower time per proof. An existing node that has already verified the transactions in the blockchain can still use batch verification on new transactions it receives, but the benefits are not as great due to the lower number of transactions that must be verified in a short time.
\section{Optimizations}
For the most part, the proposed implementation of Bulletproofs in Monero follows the Bulletproofs paper in scope and notation wherever possible. However, we include several optimizations that have also been discussed for other projects. These optimizations are algebraically equivalent to those in the paper, but reduce the time required for verification. The author understands that some or all of the optimizations may be included in an update to the Bulletproofs paper sometime in the future. However, we document them here for completeness and ease of code review. The reader is encouraged to refer to the paper for the complete context of our changes.
\subsection{Curve group notation}
The paper is written with a general group structure in mind, so scalar-group operations are written multiplicatively (\textit{e.g.} $x = a^bc^d$). In the case of elliptic curve groups, we use additive notation instead (\textit{e.g.} $X = bA + dC$) and use case to differentiate between curve points and scalars for clarity. This is purely a notational convenience.
\subsection{Basepoint notation}
Throughout the paper, amount commitments are expressed as $V \equiv vG + \mu H$, where $G$ and $H$ are distinct (but arbitrary) fixed elliptic curve group generators. We interchange the roles of $G$ and $H$ throughout our implementation to match the use of existing base points used in commitments elsewhere in the Monero codebase. Note that the indexed $\{G_i\}$ and $\{H_i\}$ curve points are not modified in this way.
\subsection{Fiat-Shamir challenges}
To make the Bulletproof scheme non-interactive, we follow the paper by introducing Fiat-Shamir challenges computed by hashing the proof transcript up to the point that a new challenge is needed. This is done by introducing a rolling hash that uses as input the previous challenge and any new proof elements introduced. The prover and verifier compute these challenges identically.
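The rolling transcript hash can be sketched as follows; the hash function and the byte encodings (`b"A"`, `b"T1"`, ...) are placeholders, not Monero's actual serialization of proof elements:

```python
import hashlib

# Sketch of a rolling Fiat-Shamir transcript hash. Each new challenge
# hashes the previous challenge together with the newly introduced proof
# elements, so prover and verifier derive identical challenges.
def challenge(prev, *elements):
    h = hashlib.sha256()
    h.update(prev)
    for e in elements:
        h.update(e)
    return h.digest()

c0 = challenge(b"", b"A", b"S")    # first challenge from initial commitments
c1 = challenge(c0, b"T1", b"T2")   # next challenge folds in new elements
assert c0 != c1                    # deterministic, but distinct per round
```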
\subsection{Inner product argument}
The inner product argument in Protocol 1 of the Bulletproofs paper uses recursion to shrink the size of its input vectors down to single elements. These inputs include distinct curve group generators $\{G_i\}$ and $\{H_i\}$, which we compute using an indexed hash function. We make several optimizations to this protocol for the verifier.
First, we observe that the curve points in Equation (10) are in fact linear combinations of $\{G_i\}$ and $\{H_i\}$ that use the scalar challenges in Equations (24)-(25). Next, we note that the point $P$ in Equation (62) is passed into Protocol 1 as described in Section 4.2 of the paper. Since this curve point contains a linear combination of the same group generators as Protocol 1, we can take advantage of this and compute a single linear combination, rather than separately compute Equations (62) and (10).
In practice, we replace Equations (62) and (10) with the following check, where $M \equiv |\{L_j\}| = |\{R_j\}|$:
$$A + xS - \mu G + \sum_{j=0}^{M-1}(w_j^2 L_j + w_j^{-2} R_j) + (t - ab)xH - \sum_{i=0}^{mn-1}(g_iG_i + h_iH_i) = 0$$
The symbols are mostly those used in the paper. However, we use $w_j$ to represent the round challenges in Lines (21)-(22), and $x$ to represent the challenge in Lines (32)-(33) to avoid reuse of symbols. The scalars $g_i$ and $h_i$ are computed in the following way. Express the index $i = b_0b_1 \cdots b_{M-1}$ bitwise, where $b_{M-1}$ is the least-significant bit. Then
$$g_i = a\prod_{j=0}^{M-1} w_j^{2b_j-1} + z$$
and
$$h_i = \left(by^{-i}\prod_{j=0}^{M-1} w_j^{-2b_j+1} - zy^i + z^{2+\lfloor i/N \rfloor}2^{i\operatorname{mod}N}\right)y^{-i}$$
This optimization is applied only to the verifier.
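The per-index challenge products appearing in the scalars above can be sketched as follows; arithmetic is done in a toy prime field rather than the Ed25519 scalar field, with inverses via Fermat's little theorem:

```python
# Compute prod_j w_j^(2*b_j - 1) for index i with bits b_0 ... b_{M-1},
# where b_{M-1} is the least-significant bit (as in the text): each factor
# is w_j or its inverse. Toy prime field, not the Ed25519 scalar field.
q = 2**61 - 1  # a Mersenne prime, so pow(x, q-2, q) inverts x

def challenge_product(w, i):
    M = len(w)
    acc = 1
    for j in range(M):
        b = (i >> (M - 1 - j)) & 1               # b_{M-1} is least significant
        factor = w[j] if b else pow(w[j], q - 2, q)
        acc = acc * factor % q
    return acc

w = [3, 5, 7]
assert challenge_product(w, 7) == 3 * 5 * 7            # all bits set: plain product
assert challenge_product(w, 0) * (3 * 5 * 7) % q == 1  # all bits clear: inverse
```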
\subsection{Batch verification}
Our implementation permits the verifier to take many aggregate proofs and verify them together as a batch. We do not assume that the proofs each have the same number of outputs, nor do we restrict the maximum size of a batch. The batch verification we describe will only succeed if each proof is valid, and will fail if one or more proofs are invalid.
Batch verification is split into two checks, performed after iterating over each proof in the batch. During the iteration, the verifier keeps ongoing sums of components from each proof and then performs the first-stage check for Equation (61):
\begin{equation}
\sum_l (\beta_l\tau_{xl}) G + \sum_l \beta_l\left[ t_l - (k_l + z_l \langle \overline{1}^{mn},\overline{y_l}^{mn} \rangle) \right] H - \sum_l \beta_l \left( \sum_j z_l^{j+2} V_{lj} - x_lT_{1l} - x_l^2T_{2l} \right) = 0 \nonumber
\end{equation}
The second-phase check proceeds similarly:
\begin{multline}
\sum_l \beta_l(A_l + x_lS_l) - \sum_l(\beta_l\mu_l) G + \sum_l\left[\beta_l \sum_j(w_{lj}^2 L_{lj} + w_{lj}^{-2} R_{lj})\right] + \sum_l \beta_l x_l(t_l - a_lb_l) H \\
- \sum_i \left[\sum_l(\beta_l g_{li})G_i + \sum_l(\beta_l h_{li})H_i\right] = 0 \nonumber
\end{multline}
Here each $l$-indexed sum is over each proof in the batch, and $\beta_l$ is a weighting factor chosen at random (not deterministically) by the verifier. This ensures that, except with negligible probability, the checks will only succeed if each proof is separately valid; an adversary cannot selectively provide a batch containing invalid proofs in an attempt to fool the verifier. The benefit to this approach is that the sums can be computed as large multi-exponentiation operations after the scalars from all proofs have been assembled.
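A toy model shows why the weights $\beta_l$ must be random. A "proof" below is just a claimed integer equation $aG = C$, standing in for the curve-group checks; the batch test is $\sum_l \beta_l(a_lG - C_l) = 0$:

```python
import random

# Two invalid "proofs" with cancelling errors pass under equal weights but
# fail (except with negligible probability) under random weights.
G = 7
good = [(2, 2 * G), (5, 5 * G)]
bad = [(2, 2 * G + 1), (5, 5 * G - 1)]  # errors +1 and -1 cancel

def batch_check(proofs, weights):
    return sum(b * (a * G - C) for b, (a, C) in zip(weights, proofs)) == 0

assert batch_check(good, [1, 1])   # valid batch passes
assert batch_check(bad, [1, 1])    # predictable equal weights are fooled
beta = [random.randrange(1, 2**32) for _ in bad]
# random weights expose the cancellation unless they happen to collide
assert not batch_check(bad, beta) or beta[0] == beta[1]
```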
If the batch fails either check, at least one proof in the batch is invalid. To identify which proofs are at fault, the verifier can either iterate through each proof and perform the checks separately (in linear time), or perform a binary search by successively performing the checks on half-batches until it identifies all faulty proofs (in logarithmic time).
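The half-batch search can be sketched generically; `check` stands for any all-at-once batch verifier, and a "proof" here is a placeholder value that is valid iff zero:

```python
# Recursively isolate invalid proofs: verify the whole batch, and on
# failure split it in half and recurse, identifying the faulty proofs in
# roughly logarithmic time when few proofs are bad.
def find_invalid(proofs, check):
    if check(proofs):
        return []
    if len(proofs) == 1:
        return list(proofs)
    mid = len(proofs) // 2
    return find_invalid(proofs[:mid], check) + find_invalid(proofs[mid:], check)

# toy stand-in: a "proof" is valid iff it equals 0
check = lambda batch: all(p == 0 for p in batch)
assert find_invalid([0, 0, 1, 0, 1], check) == [1, 1]
```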
\section{Proof size}
Including the amount commitment $V$, a single Borromean bitwise range proof occupies $6.2$ kB of space; a transaction with $m$ outputs therefore requires $6.2m$ kB of space. An $m$-proof (with a $64$-bit range) requires $2\lg m + 17$ group elements and $5$ scalars, each of which takes up $32$ bytes. Table \ref{table:size} shows the space savings from Bulletproofs for several values of $m$.
\begin{table}[h]
\begin{center}
\begin{tabular}{r|rr|c}
$m$ & Bulletproof & Borromean & Relative size \\
\hline
$1$ & $704$ & $6200$ & $0.114$ \\
$2$ & $768$ & $12400$ & $0.062$ \\
$8$ & $896$ & $49600$ & $0.018$ \\
$16$ & $960$ & $99200$ & $0.010$ \\
$128$ & $1152$ & $793600$ & $0.001$
\end{tabular}
\caption{Size (bytes) of $m$ Borromean proofs versus $m$-proof}
\label{table:size}
\end{center}
\end{table}
Using data from the Monero blockchain\footnote{Data was taken from blocks 1400000 through 1500000} on the distribution of the number of outputs in transactions, the use of Bulletproofs would reduce the total size of range proofs by $94\%$.
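The entries in Table \ref{table:size} follow directly from the element counts above and can be recomputed as follows (sizes in bytes; the $6.2$ kB Borromean figure is taken from the text):

```python
import math

# Recompute Table 1: a 64-bit m-proof is 2*lg(m) + 17 group elements plus
# 5 scalars at 32 bytes each, versus 6200 bytes per Borromean proof.
def bp_bytes(m):
    return 32 * (2 * int(math.log2(m)) + 17 + 5)

expected_relative = {1: 0.114, 2: 0.062, 8: 0.018, 16: 0.010, 128: 0.001}
for m, size in [(1, 704), (2, 768), (8, 896), (16, 960), (128, 1152)]:
    assert bp_bytes(m) == size
    assert round(bp_bytes(m) / (6200 * m), 3) == expected_relative[m]
```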
\bibliographystyle{plain}
\bibliography{bulletproofs}
\end{document}

Binary file not shown.


View file

@ -0,0 +1,43 @@
\ProvidesClass{mrl}
\LoadClass{article}
\usepackage{amsmath,amssymb,amsthm}
\usepackage[top=1in,bottom=1in,right=1in,left=1in]{geometry}
\usepackage{color}
\usepackage{graphicx}
\usepackage{hyperref}
\definecolor{bloo}{rgb}{0,0.2,0.4}
\renewcommand*{\thefootnote}{\fnsymbol{footnote}}
\linespread{1.2}
\renewcommand\title[1]{\def\@title{#1}}
\let\@title=\@empty
\newcommand\authors[1]{\def\@authors{#1}}
\let\@authors=\@empty
\newcommand\affiliations[1]{\def\@affiliations{#1}}
\let\@affiliations=\@empty
\renewcommand\date[1]{\def\@date{#1}}
\let\@date=\@empty
\newcommand\ident[1]{\def\@ident{#1}}
\let\@ident=\@empty
\newcommand\type[1]{\def\@type{#1}}
\let\@type=\@empty
\AtBeginDocument{
\hfill\includegraphics[width=100px]{logo.png}
\newline
\noindent\colorbox{bloo}{\parbox{\textwidth}{{\sffamily\color{white}\@type \hfill \@ident}}}
\vskip 10pt
\noindent{\Large\@title}
\vskip 5pt
\noindent{\@authors}
\newline
\noindent{\@affiliations}
\newline
\noindent{\@date}
}

View file

@ -0,0 +1,46 @@
import unittest, random, time
def newIdent(params):
nonce = params
# Generate new random identity.
return hash(str(nonce) + str(random.random()))
#### #### #### #### #### #### #### #### #### #### #### #### #### #### #### ####
class Block(object):
'''
Each block has: an identity, a timestamp of discovery (possibly false),
has a timestamp of arrival at the local node (possibly unnecessary), a
parent block's identity, and a difficulty score.
'''
def __init__(self, params={}):
self.ident = None
self.discoTimestamp = None
self.arrivTimestamp = None
self.parent = None
self.diff = None
        try:
            assert len(params)==5
        except AssertionError:
            print("Error in Block(): Tried to add a malformed block. We received params = " + str(params) + ", but should have had something of the form {\"ident\":ident, \"disco\":disco, \"arriv\":arriv, \"parent\":parent, \"diff\":diff}.")
        else:
            self.ident = params["ident"]
            self.discoTimestamp = params["disco"]
            self.arrivTimestamp = params["arriv"]
            self.parent = params["parent"]
            self.diff = params["diff"]
class Test_Block(unittest.TestCase):
def test_b(self):
#bill = Block()
name = newIdent(0)
t = time.time()
s = t+1
diff = 1.0
params = {"ident":name, "disco":t, "arriv":s, "parent":None, "diff":diff}
bill = Block(params)
self.assertEqual(bill.ident,name)
self.assertEqual(bill.discoTimestamp,t)
self.assertEqual(bill.arrivTimestamp,t+1)
self.assertTrue(bill.parent is None)
self.assertEqual(bill.diff,diff)
suite = unittest.TestLoader().loadTestsFromTestCase(Test_Block)
unittest.TextTestRunner(verbosity=1).run(suite)

File diff suppressed because it is too large Load diff

View file

@ -0,0 +1,41 @@
from Node import *
class Edge(object):
'''
Edge object. Has an identity, some data, and a dict of nodes.
'''
def __init__(self, params):
try:
assert len(params)==3
except AssertionError:
print("Error, tried to create mal-formed edge.")
else:
self.ident = params[0]
self.data = params[1]
self.verbose = params[2]
self.nodes = {}
def getNeighbor(self, nodeIdent):
# Given one node identity, check that the node
# identity is in the edge's node list and
# return the identity of the other adjacent node.
result = (nodeIdent in self.nodes)
if result:
for otherIdent in self.nodes:
if otherIdent != nodeIdent:
result = otherIdent
assert result in self.nodes
return result
class Test_Edge(unittest.TestCase):
    def test_e(self):
        nodeParams = {"ident":newIdent(0), "data":{}, "verbose":True, "mode":"Nakamoto", "targetRate":1.0/600000.0}
        nelly = Node(nodeParams)
        nodeParams = dict(nodeParams, ident=newIdent(1))
        milly = Node(nodeParams)
        ed = Edge([newIdent(2), {"pendingBlocks":{}, "length":0.5}, True])
        ed.nodes.update({nelly.ident:nelly, milly.ident:milly})
        self.assertEqual(len(ed.nodes), 2)
#suite = unittest.TestLoader().loadTestsFromTestCase(Test_Edge)
#unittest.TextTestRunner(verbosity=1).run(suite)

View file

@ -0,0 +1,204 @@
from Blockchain import *
from Node import *
from Edge import *
from copy import *
def newIntensity(params):
x = random.random()
return x
def newOffset(params):
x = 2.0*random.random() - 1.0
return x
class Graph(object):
    '''
    A network of Node objects connected by Edge objects; drives the
    event-driven simulation (node joins/leaves, block discovery, block arrival).
    '''
def __init__(self, params):
self.nodes = {}
self.edges = {}
self.mode = params[0]
self.targetRate = params[1]
self.numInitNodes = params[2]
self.maxNeighbors = params[3]
self.probEdge = params[4]
self.verbosity = params[5]
self.startTime = deepcopy(time.time())
self.runTime = params[6]
self.globalTime = deepcopy(self.startTime)
self.birthRate = params[7]
self.deathRate = params[8]
self.filename = params[9]
self.data = params[10]
self.blankBlockchain = Blockchain()
self.blankBlockchain.targetRate = self.targetRate
self.blankBlockchain.mode = self.mode
self.blankBlockchain.diff = 1.0
self._createInit()
def _createInit(self):
# For simplicity, all nodes will have a genesis block with t=0.0 and no offset
for i in range(self.numInitNodes):
offset = newOffset(None)
intens = newIntensity(None)
name = newIdent(len(self.nodes))
dat = {"offset":offset, "intensity":intens, "blockchain":deepcopy(self.blankBlockchain)}
params = {"ident":name, "data":dat, "verbose":self.verbosity, "mode":self.mode, "targetRate":self.targetRate}
nelly = Node(params)
self.nodes.update({nelly.ident:nelly})
t = self.startTime
self.nodes[nelly.ident].generateBlock(t)
touched = {}
for xNode in self.nodes:
for yNode in self.nodes:
notSameNode = (xNode != yNode)
xNodeHasRoom = (len(self.nodes[xNode].edges) < self.maxNeighbors)
yNodeHasRoom = (len(self.nodes[yNode].edges) < self.maxNeighbors)
xyNotTouched = ((xNode, yNode) not in touched)
yxNotTouched = ((yNode, xNode) not in touched)
if notSameNode and xNodeHasRoom and yNodeHasRoom and xyNotTouched and yxNotTouched:
touched.update({(xNode,yNode):True, (yNode,xNode):True})
if random.random() < self.probEdge:
params = [newIdent(len(self.edges)), {"pendingBlocks":{}, "length":random.random()}, self.verbosity]
ed = Edge(params)
ed.nodes.update({xNode:self.nodes[xNode], yNode:self.nodes[yNode]})
self.edges.update({ed.ident:ed})
self.nodes[xNode].edges.update({ed.ident:ed})
self.nodes[yNode].edges.update({ed.ident:ed})
def eventNodeJoins(self, t):
# timestamp,nodeJoins,numberNeighbors,neighbor1.ident,edge1.ident,neighbor2.ident,edge2.ident,...,
out = ""
neighbors = []
for xNode in self.nodes:
xNodeHasRoom = (len(self.nodes[xNode].edges) < self.maxNeighbors)
iStillHasRoom = (len(neighbors) < self.maxNeighbors)
if xNodeHasRoom and iStillHasRoom and random.random() < self.probEdge:
neighbors.append(xNode)
newNodeName = newIdent(len(self.nodes))
offset = newOffset(None)
intens = newIntensity(None)
dat = {"offset":offset, "intensity":intens, "blockchain":deepcopy(self.blankBlockchain)}
params = {"ident":newNodeName, "data":dat, "verbose":self.verbosity, "mode":self.mode, "targetRate":self.targetRate}
newNode = Node(params)
self.nodes.update({newNode.ident:newNode})
        self.nodes[newNode.ident].generateBlock(self.startTime)
out = str(t) + ",nodeJoins," + str(newNode.ident) + "," + str(len(neighbors)) + ","
for neighbor in neighbors:
out += neighbor + ","
            params = [newIdent(len(self.edges)), {"pendingBlocks":{}, "length":random.random()}, self.verbosity]
ed = Edge(params)
ed.nodes.update({neighbor:self.nodes[neighbor], newNode.ident:self.nodes[newNode.ident]})
out += ed.ident + ","
self.edges.update({ed.ident:ed})
self.nodes[neighbor].edges.update({ed.ident:ed})
self.nodes[newNode.ident].edges.update({ed.ident:ed})
return out
def eventNodeLeaves(self, t):
out = str(t) + ",nodeLeaves,"
leaverIdent = random.choice(list(self.nodes.keys()))
out += str(leaverIdent) + ","
leaver = self.nodes[leaverIdent]
neighbors = []
for ed in leaver.edges:
edge = leaver.edges[ed]
neighbors.append((edge.ident, edge.getNeighbor(leaverIdent)))
for neighbor in neighbors:
edIdent = neighbor[0]
neiIdent = neighbor[1]
del self.nodes[neiIdent].edges[edIdent]
del self.edges[edIdent]
del self.nodes[leaverIdent]
return out
def eventBlockDiscovery(self, discoIdent, t):
out = str(t) + ",blockDisco," + str(discoIdent) + ","
blockIdent = self.nodes[discoIdent].generateBlock(t)
out += str(blockIdent)
self.nodes[discoIdent].propagate(t, blockIdent)
return out
def eventBlockArrival(self, pendingIdent, edgeIdent, t):
out = str(t) + ",blockArriv,"
edge = self.edges[edgeIdent]
pendingData = edge.data["pendingBlocks"][pendingIdent] # pendingDat = {"timeOfArrival":timeOfArrival, "destIdent":otherIdent, "block":blockToProp}
out += str(pendingData["destIdent"]) + "," + str(edgeIdent) + "," + str(pendingData["block"].ident)
destNode = self.nodes[pendingData["destIdent"]]
edge = self.edges[edgeIdent]
block = deepcopy(pendingData["block"])
block.arrivTimestamp = t + destNode.data["offset"]
destNode.updateBlockchain({block.ident:block})
del edge.data["pendingBlocks"][pendingIdent]
return out
def go(self):
with open(self.filename,"w") as writeFile:
writeFile.write("timestamp,eventId,eventData\n")
eventType = None
        while self.globalTime - self.startTime < self.runTime:
u = -1.0*math.log(1.0-random.random())/self.birthRate
eventType = ("nodeJoins", None)
v = -1.0*math.log(1.0-random.random())/self.deathRate
if v < u:
eventType = ("nodeLeaves", None)
u = v
for nodeIdent in self.nodes:
localBlockDiscoRate = self.nodes[nodeIdent].data["intensity"]/self.nodes[nodeIdent].data["blockchain"].diff
v = -1.0*math.log(1.0-random.random())/localBlockDiscoRate
if v < u:
eventType = ("blockDisco", nodeIdent)
u = v
for edgeIdent in self.edges:
edge = self.edges[edgeIdent]
pB = edge.data["pendingBlocks"]
for pendingIdent in pB:
pendingData = pB[pendingIdent]
if pendingData["timeOfArrival"] - self.globalTime < u:
eventType = ("blockArriv", pendingIdent, edgeIdent)
                        u = pendingData["timeOfArrival"] - self.globalTime
assert eventType is not None
self.globalTime += u
out = ""
if eventType[0] == "nodeJoins":
out = self.eventNodeJoins(self.globalTime)
elif eventType[0] == "nodeLeaves":
out = self.eventNodeLeaves(self.globalTime)
elif eventType[0] == "blockDisco":
out = self.eventBlockDiscovery(eventType[1], self.globalTime)
elif eventType[0] == "blockArriv":
out = self.eventBlockArrival(eventType[1], eventType[2], self.globalTime)
else:
print("WHAAAA")
with open(self.filename, "a") as writeFile:
writeFile.write(out + "\n")
mode = "Nakamoto"
targetRate = 1.0/600000.0
numInitNodes = 10
maxNeighbors = 8
probEdge = 0.1
verbosity = True
startTime = time.time()
runTime = 10.0
globalTime = startTime
birthRate = 1.0/10.0
deathRate = 0.99*1.0/10.0
filename = "output.csv"
greg = Graph([mode, targetRate, numInitNodes, maxNeighbors, probEdge, verbosity, runTime, birthRate, deathRate, filename, []])
greg.go()
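The event loop in go() above races competing exponential clocks by inverse-CDF sampling: each event draws a waiting time $-\ln(1-U)/\lambda$ and the smallest draw fires first. A minimal standalone sketch, using the toy rates from the module-level defaults:

```python
import math, random

# Race competing exponential clocks as in Graph.go(): draw a waiting time
# for each event rate and let the smallest draw win.
def exp_sample(rate):
    return -math.log(1.0 - random.random()) / rate

random.seed(1)  # deterministic for the demo
clocks = {"nodeJoins": 0.1, "nodeLeaves": 0.099, "blockDisco": 1.0}
draws = {name: exp_sample(rate) for name, rate in clocks.items()}
winner = min(draws, key=draws.get)
# the highest-rate clock usually (though not always) fires first
assert winner in clocks and all(d > 0 for d in draws.values())
```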

View file

@ -0,0 +1,123 @@
from Blockchain import *
from copy import *
class Node(object):
    '''
    Node object. params = {"ident":..., "data":..., "verbose":...,
    "mode":..., "targetRate":...}
    '''
def __init__(self, params={}):
self.ident = None
self.data = {}
self.verbose = None
self.edges = {}
self.mode = None
self.targetRate = None
try:
assert len(params)==5
except AssertionError:
print("Error, Tried to create malformed node.")
else:
self.ident = params["ident"]
self.data = params["data"]
self.verbose = params["verbose"]
self.edges = {}
self.mode = params["mode"]
self.targetRate = params["targetRate"]
def generateBlock(self, discoTime):
newName = newIdent(len(self.data["blockchain"].blocks))
t = discoTime
s = t+self.data["offset"]
diff = self.data["blockchain"].diff
params = {"ident":newName, "disco":t, "arriv":s, "parent":None, "diff":diff}
newBlock = Block(params)
self.data["blockchain"].addBlock(newBlock)
return newName
def updateBlockchain(self, incBlocks):
# incBlocks shall be a dictionary of block identities (as keys) and their associated blocks (as values)
# to be added to the local data. We assume difficulty scores have been reported honestly for now.
tempData = deepcopy(incBlocks)
for key in incBlocks:
if key in self.data["blockchain"].blocks:
del tempData[key]
elif incBlocks[key].parent in self.data["blockchain"].blocks or incBlocks[key].parent is None:
self.data["blockchain"].addBlock(incBlocks[key])
del tempData[key]
incBlocks = deepcopy(tempData)
        while len(incBlocks) > 0:
            progress = False
            for key in incBlocks:
                if key in self.data["blockchain"].blocks:
                    del tempData[key]
                    progress = True
                elif incBlocks[key].parent in self.data["blockchain"].blocks:
                    self.data["blockchain"].addBlock(incBlocks[key], self.mode, self.targetRate)
                    del tempData[key]
                    progress = True
            incBlocks = deepcopy(tempData)
            if not progress:
                break  # remaining blocks are orphans; their parents have not arrived
def propagate(self, timeOfProp, blockIdent):
for edgeIdent in self.edges:
edge = self.edges[edgeIdent]
length = edge.data["length"]
timeOfArrival = timeOfProp + length
otherIdent = edge.getNeighbor(self.ident)
other = edge.nodes[otherIdent]
bc = other.data["blockchain"]
if blockIdent not in bc.blocks:
pB = edge.data["pendingBlocks"]
pendingIdent = newIdent(len(pB))
mybc = self.data["blockchain"]
blockToProp = mybc.blocks[blockIdent]
pendingDat = {"timeOfArrival":timeOfArrival, "destIdent":otherIdent, "block":blockToProp}
pB.update({pendingIdent:pendingDat})
class Test_Node(unittest.TestCase):
# TODO test each method separately
def test_all(self):
bill = Blockchain([], verbosity=True)
mode="Nakamoto"
tr = 1.0/600000.0
deltaT = 600000.0
bill.targetRate = tr
name = newIdent(0)
t = 0.0
s = t
diff = 1.0
params = {"ident":name, "disco":t, "arriv":s, "parent":None, "diff":diff}
genesis = Block(params)
bill.addBlock(genesis, mode, tr)
parent = genesis.ident
nellyname = newIdent(time.time())
mode = "Nakamoto"
targetRate = 1.0/600000.0
params = {"ident":nellyname, "data":{"offset":0.0, "intensity":1.0, "blockchain":bill}, "verbose":True, "mode":mode, "targetRate":targetRate}
nelly = Node(params)
while len(nelly.data["blockchain"].blocks) < 2015:
name = newIdent(len(nelly.data["blockchain"].blocks))
diff = nelly.data["blockchain"].diff
t += deltaT*diff*(2.0*random.random()-1.0)
s = t
params = {"ident":name, "disco":t, "arriv":s, "parent":parent, "diff":diff}
newBlock = Block(params)
nelly.updateBlockchain({newBlock.ident:newBlock})
parent = name
while len(nelly.data["blockchain"].blocks) < 5000:
name = newIdent(len(nelly.data["blockchain"].blocks))
diff = nelly.data["blockchain"].diff
t += deltaT*diff
s = t
params = {"ident":name, "disco":t, "arriv":s, "parent":parent, "diff":diff}
newBlock = Block(params)
nelly.updateBlockchain({newBlock.ident:newBlock})
parent = name
#suite = unittest.TestLoader().loadTestsFromTestCase(Test_Node)
#unittest.TextTestRunner(verbosity=1).run(suite)

View file

@ -0,0 +1,83 @@
import unittest
import math
import copy
from collections import deque
import time
import hashlib
class Block(object):
"""
Fundamental object. Attributes:
data = payload dict with keys "timestamp" and "txns" and others
ident = string
parents = dict {blockID : parentBlock}
Functions:
addParents : takes dict {blockID : parentBlock} as input
and updates parents to include.
_recomputeIdent : recomputes identity
Usage:
b0 = Block(dataIn = stuff, parentsIn = None)
b1 = Block(dataIn = otherStuff, parentsIn = { b0.ident : b0 })
"""
    def __init__(self, dataIn=None, parentsIn=None):
        # Initialize with payload, a placeholder identity, and a dict of
        # parents (per the docstring, parents maps blockID -> parentBlock).
        self.data = dataIn
        self.parents = parentsIn if parentsIn is not None else {}
        assert type(self.parents) == type({})
        self.ident = hash(str(0))
        self._recomputeIdent()
    def addParents(self, parentsIn={}):  # dict {blockID : parentBlock}
        if self.parents is None:
            self.parents = {}
        self.parents.update(parentsIn)
        self._recomputeIdent()
def _recomputeIdent(self):
m = str(0) + str(self.data) + str(self.parents)
self.ident = hash(m)
class Test_Block(unittest.TestCase):
def test_Block(self):
# b0 -> b1 -> {both b2, b3} -> b4... oh, and say b3 -> b5 also
b0 = Block()
b0.data = {"timestamp" : time.time()}
time.sleep(1)
b1 = Block()
b1.data = {"timestamp" : time.time(), "txns" : [1,2,3]}
b1.addParents([b0.ident]) # addParents recomputes the ident.
time.sleep(1)
b2 = Block()
b2.data = {"timestamp" : time.time(), "txns" : None}
b2.addParents([b1.ident])
time.sleep(1)
b3 = Block()
b3.data = {"timestamp" : time.time(), "txns" : None}
b3.addParents([b1.ident])
time.sleep(1)
b4 = Block()
b4.data = {"timestamp" : time.time()} # see how sloppy we can be wheeee
b4.addParents([b2.ident, b3.ident])
time.sleep(1)
b5 = Block()
b5.data = {"timestamp" : time.time(), "txns" : "stuff" }
b5.addParents([b3.ident])
self.assertTrue(len(b1.parents)==1 and b0.ident in b1.parents)
self.assertTrue(len(b2.parents)==1 and b1.ident in b2.parents)
self.assertTrue(len(b3.parents)==1 and b1.ident in b3.parents)
self.assertTrue(len(b4.parents)==2)
self.assertTrue(b2.ident in b4.parents and b3.ident in b4.parents)
self.assertTrue(len(b5.parents)==1 and b3.ident in b5.parents)
#suite = unittest.TestLoader().loadTestsFromTestCase(Test_Block)
#unittest.TextTestRunner(verbosity=1).run(suite)
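A caveat on `_recomputeIdent` above: it uses Python's built-in `hash`, which is salted per process in Python 3, so idents are not stable across runs. A minimal sketch of a run-stable alternative using the already-imported `hashlib` (the `block_ident` helper and its inputs are hypothetical, not part of the file above):

```python
import hashlib

def block_ident(data, parent_idents):
    # Hypothetical stand-in for Block._recomputeIdent: a deterministic,
    # collision-resistant ident over the payload and sorted parent idents.
    m = hashlib.sha256()
    m.update(repr(data).encode())
    for pid in sorted(str(p) for p in parent_idents):
        m.update(pid.encode())
    return m.hexdigest()

genesis = block_ident({"timestamp": 0}, [])
child = block_ident({"timestamp": 1}, [genesis])
assert genesis != child
assert block_ident({"timestamp": 0}, []) == genesis  # stable across calls
```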


@ -0,0 +1,668 @@
'''
A handler for Block.py that takes a collection of blocks (which
only reference parents) as input data. It uses a doubly-linked
tree to determine precedence relationships efficiently, and it can
use that precedence relationship to produce a reduced/robust
precedence relationship (the SPECTRE precedence relationship
between blocks) as output.
Another handler will extract a coherent/robust list of
non-conflicting transactions from a reduced/robust BlockHandler
object.
'''
from Block import *
import random
class BlockHandler(object):
def __init__(self):
#print("Initializing")
# Initialize a BlockHandler object.
self.data = None
self.blocks = {} # Set of blocks (which track parents)
self.family = {} # Doubly linked list tracks parent-and-child links
self.invDLL = {} # subset of blocks unlikely to be re-orged
self.roots = [] # list of root blockIdents
self.leaves = [] # list of leaf blockIdents
self.antichains = []
self.vids = []
self.antichainCutoff = 600 # stop re-orging after this many layers
self.pendingVotes = {}
self.votes = {}
self.totalVotes = {}
def addBlock(self, b):
#print("Adding block")
# Take a single block b and add to self.blocks, record family
# relations, update leaf monitor, update root monitor if nec-
# essary
diffDict = {copy.deepcopy(b.ident):copy.deepcopy(b)}
try:
assert b.ident not in self.blocks
except AssertionError:
print("Whoops, tried to add a block with an ident already in self.blocks; overwriting the old block.")
self.blocks.update(diffDict)
try:
assert b.ident not in self.leaves
except AssertionError:
print("Whoops, tried to add a block already in the leaf set; skipping duplicate insertion.")
else:
self.leaves.append(b.ident) # New block is always a leaf
try:
assert b.ident not in self.family
except AssertionError:
print("Whoops, tried to add a block that already has a recorded family history; overwriting it.")
self.family.update({b.ident:{"parents":b.parents, "children":[]}})
# Record family history (new blocks have no children yet).
# Now update each parent's family history to reflect the new child
if b.parents is not None:
if len(b.parents)>0:
for parentIdent in b.parents:
if parentIdent not in self.family:
# This should never occur.
print("Hey, what? confusedTravolta.gif... parentIdent not in self.family, parent not correct somehow.")
self.family.update({parentIdent:{}})
if "parents" not in self.family[parentIdent]:
# This should never occur.
print("Hey, what? confusedTravolta.gif... family history of parent lacks sub-dict for parentage, parent not correct somehow")
self.family[parentIdent].update({"parents":[]})
if "children" not in self.family[parentIdent]:
# This should never occur.
print("Hey, what? confusedTravolta.gif... family history of parent lacks sub-dict for children, parent not correct somehow")
self.family[parentIdent].update({"children":[]})
if self.blocks[parentIdent].parents is not None:
for pid in self.blocks[parentIdent].parents:
if pid not in self.family[parentIdent]["parents"]:
self.family[parentIdent]["parents"].append(pid)
#for p in self.blocks[parentIdent].parents: self.family[parentIdent]["parents"].append(p)
# Update "children" sub-dict of family history of parent
self.family[parentIdent]["children"].append(b.ident)
# If the parent was previously a leaf, it is no longer
if parentIdent in self.leaves:
self.leaves.remove(parentIdent)
else:
if b.ident not in self.roots:
self.roots.append(b.ident)
if b.ident not in self.leaves:
self.leaves.append(b.ident)
if b.ident not in self.family:
self.family.update({b.ident:{"parents":{}, "children":{}}})
else:
if b.ident not in self.roots:
self.roots.append(b.ident)
if b.ident not in self.leaves:
self.leaves.append(b.ident)
if b.ident not in self.family:
self.family.update({b.ident:{"parents":{}, "children":{}}})
def hasAncestor(self, xid, yid):
# Return true if y is an ancestor of x
assert xid in self.blocks
assert yid in self.blocks
q = deque()
found = False
if self.blocks[xid].parents is not None:
for pid in self.blocks[xid].parents:
if pid==yid:
found = True
break
q.append(pid)
visited = {}
while(len(q)>0 and not found):
xid = q.popleft()
if xid in visited:
continue # diamond ancestry can enqueue a block more than once
visited[xid] = True
if self.blocks[xid].parents is not None:
for pid in self.blocks[xid].parents:
if pid==yid:
found = True
break
q.append(pid)
return found
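The parent-walk in hasAncestor can be sketched standalone; the toy DAG below is hypothetical (genesis g, a diamond {a, b} between g and c), and the visited set keeps diamond ancestries from re-entering the queue:

```python
from collections import deque

# Hypothetical toy DAG: ident -> list of parent idents.
toy_parents = {"g": [], "a": ["g"], "b": ["g"], "c": ["a", "b"]}

def has_ancestor(dag, xid, yid):
    # True iff yid is reachable from xid by following parent links.
    q = deque(dag[xid])
    seen = set()
    while q:
        wid = q.popleft()
        if wid == yid:
            return True
        if wid in seen:
            continue  # already expanded this ancestor
        seen.add(wid)
        q.extend(dag[wid])
    return False

assert has_ancestor(toy_parents, "c", "g")      # g precedes c
assert not has_ancestor(toy_parents, "g", "c")  # but not conversely
```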
def pruneLeaves(self):
#print("Pruning leaves")
out = BlockHandler()
q = deque()
visited = {}
for rootIdent in self.roots:
q.append(rootIdent)
while(len(q)>0):
thisIdent = q.popleft()
if thisIdent in visited:
continue # diamond ancestry can enqueue a block more than once
visited[thisIdent] = True
if thisIdent not in self.leaves:
out.addBlock(self.blocks[thisIdent])
for chIdent in self.family[thisIdent]["children"]:
q.append(chIdent)
return out
def leafBackAntichain(self):
#print("Computing antichain")
temp = copy.deepcopy(self)
decomposition = []
vulnIdents = []
decomposition.append([])
for lid in temp.leaves:
decomposition[-1].append(lid)
vulnIdents = copy.deepcopy(decomposition[-1])
temp = temp.pruneLeaves()
while(len(temp.blocks)>0 and len(decomposition) < self.antichainCutoff):
decomposition.append([])
for lid in temp.leaves:
decomposition[-1].append(lid)
for xid in decomposition[-1]:
if xid not in vulnIdents:
vulnIdents.append(xid)
temp = temp.pruneLeaves()
return decomposition, vulnIdents
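leafBackAntichain peels the current leaf set off a copy of the DAG, layer by layer, until the cutoff or the DAG is empty. A minimal standalone version of that decomposition (the `antichain_layers` helper and toy DAG are hypothetical illustrations, not the class method above):

```python
def antichain_layers(parents):
    # parents: ident -> list of parent idents. Repeatedly strip the current
    # leaf set (blocks with no surviving children), as pruneLeaves does.
    children = {k: set() for k in parents}
    for k, ps in parents.items():
        for p in ps:
            children[p].add(k)
    remaining = set(parents)
    layers = []
    while remaining:
        leaves = sorted(k for k in remaining if not (children[k] & remaining))
        layers.append(leaves)
        remaining -= set(leaves)
    return layers

toy = {"g": [], "a": ["g"], "b": ["g"], "c": ["a", "b"]}
print(antichain_layers(toy))  # → [['c'], ['a', 'b'], ['g']]
```

Each layer is an antichain: no block in it is an ancestor of another block in the same layer.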
def transmitVote(self, votingIdents):
(vid, xid, yid) = votingIdents
q = deque()
for wid in self.blocks[vid].parents:
if wid in self.vids:
q.append(wid)
while(len(q)>0):
wid = q.popleft()
if (wid,xid,yid) not in self.pendingVotes:
self.pendingVotes.update({(wid,xid,yid):0})
if (wid,yid,xid) not in self.pendingVotes:
self.pendingVotes.update({(wid,yid,xid):0})
self.pendingVotes[(wid,xid,yid)]+=1
self.pendingVotes[(wid,yid,xid)]-=1
#print(self.blocks[wid].parents)
for pid in self.blocks[wid].parents:
if pid in self.vids:
q.append(pid)
def voteFor(self, votingIdents, touched):
(vid, xid, yid) = votingIdents
self.votes.update({(vid,xid,yid):1, (vid,yid,xid):-1})
touched.update({(vid,xid,yid):True, (vid,yid,xid):True})
self.transmitVote((vid,xid,yid))
return touched
def sumPendingVote(self, vid, touched):
#pastR = self.pastOf(vid) # unused: the recursive vote over the past is disabled below
for xid in self.vids:
for yid in self.vids:
if (vid, xid, yid) in self.pendingVotes:
if self.pendingVotes[(vid,xid,yid)] > 0:
touched = self.voteFor((vid,xid,yid), touched)
elif self.pendingVotes[(vid,xid,yid)] <0:
touched = self.voteFor((vid,yid,xid), touched)
else:
self.votes.update({(vid,xid,yid): 0, (vid,yid,xid): 0})
touched.update({(vid,xid,yid): True, (vid,yid,xid): True})
#R = self.pastOf(vid)
#touched = R.vote(touched)
return touched
def vote(self,touchedIn=None):
# Use None as the default: touched is mutated in place, and a mutable
# default argument would leak state between calls.
if touchedIn is None:
touchedIn = {}
U, V = self.leafBackAntichain()
self.antichains = U
self.vids = V
touched = touchedIn
for i in range(len(U)):
for vid in U[i]: # ID of voting block
touched = self.sumPendingVote(vid, touched)
for j in range(i+1):
for xid in U[j]: # Voting block compares self to xid
# Note if j=i, xid and vid are incomparable.
# If j < i, then xid may have vid as an ancestor.
# vid can never have xid as an ancestor.
# In all cases, vid votes that vid precedes xid
if xid==vid:
continue
else:
touched = self.voteFor((vid,vid,xid),touched)
# For each ancestor of xid that is not an ancestor of vid,
# we can apply the same!
q = deque()
for pid in self.blocks[xid].parents:
if pid in self.vids and not self.hasAncestor(vid,pid):
q.append(pid)
while(len(q)>0):
wid = q.popleft()
for pid in self.blocks[wid].parents:
if pid in self.vids and not self.hasAncestor(vid, pid):
q.append(pid)
touched = self.voteFor((vid,vid,wid),touched)
R = self.pastOf(vid)
R.vote()
for xid in R.blocks:
touched = self.voteFor((vid,xid,vid), touched)
for yid in R.blocks:
if (xid, yid) in R.totalVotes:
if R.totalVotes[(xid,yid)]:
touched = self.voteFor((vid,xid,yid), touched)
elif (yid, xid) in R.totalVotes:
if R.totalVotes[(yid,xid)]:
touched = self.voteFor((vid, yid, xid), touched)
self.computeTotalVotes()
return touched
def computeTotalVotes(self):
for xid in self.vids:
for yid in self.vids:
s = 0
found = False
for vid in self.vids:
if (vid, xid, yid) in self.votes or (vid, yid, xid) in self.votes:
found = True
if self.votes[(vid, xid, yid)]==1:
s+= 1
elif self.votes[(vid,yid,xid)]==-1:
s-= 1
if found:
if s > 0:
self.totalVotes.update({(xid, yid):True, (yid,xid):False})
elif s < 0:
self.totalVotes.update({(xid,yid):False, (yid,xid):True})
elif s==0:
self.totalVotes.update({(xid,yid):False, (yid,xid):False})
else:
if (xid,yid) in self.totalVotes:
del self.totalVotes[(xid,yid)]
if (yid,xid) in self.totalVotes:
del self.totalVotes[(yid,xid)]
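computeTotalVotes reduces the per-voter ±1 votes to a pairwise majority relation. The same reduction in miniature (the `pairwise_majority` helper and vote data are hypothetical, simplified from the method above; as there, a tie yields False in both directions):

```python
def pairwise_majority(votes, idents):
    # votes: {(voter, x, y): +1 or -1}, +1 meaning voter says x precedes y.
    # Returns {(x, y): bool} by summing each ordered pair over all voters.
    total = {}
    for x in idents:
        for y in idents:
            if x == y:
                continue
            pair_votes = [v for (vid, a, b), v in votes.items() if (a, b) == (x, y)]
            if pair_votes:  # only pairs somebody voted on get an entry
                total[(x, y)] = sum(pair_votes) > 0
    return total

votes = {("u", "x", "y"): 1, ("u", "y", "x"): -1,
         ("v", "x", "y"): 1, ("v", "y", "x"): -1,
         ("w", "x", "y"): -1, ("w", "y", "x"): 1}
t = pairwise_majority(votes, ["x", "y"])
print(t[("x", "y")], t[("y", "x")])  # → True False
```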
def pastOf(self, xid):
R = BlockHandler()
identsToAdd = {}
q = deque()
for pid in self.blocks[xid].parents:
q.append(pid)
while(len(q)>0):
yid = q.popleft()
if yid not in identsToAdd:
identsToAdd.update({yid:True})
for pid in self.blocks[yid].parents:
q.append(pid)
for rid in self.roots:
if rid in identsToAdd:
q.append(rid)
while(len(q)>0):
yid = q.popleft()
if yid not in R.blocks:
R.addBlock(self.blocks[yid])
for pid in self.family[yid]["children"]:
if pid in identsToAdd:
q.append(pid)
return R
class Test_BlockHandler(unittest.TestCase):
def test_betterTest(self):
R = BlockHandler()
self.assertTrue(R.data is None)
self.assertEqual(len(R.blocks),0)
self.assertEqual(type(R.blocks),type({}))
self.assertEqual(len(R.family),0)
self.assertEqual(type(R.family),type({}))
self.assertEqual(len(R.invDLL),0)
self.assertEqual(type(R.invDLL),type({}))
self.assertEqual(len(R.roots),0)
self.assertEqual(type(R.leaves),type([]))
self.assertEqual(len(R.leaves),0)
self.assertEqual(R.antichainCutoff,600)
self.assertEqual(type(R.roots),type([]))
self.assertEqual(len(R.pendingVotes),0)
self.assertEqual(type(R.pendingVotes),type({}))
self.assertEqual(len(R.votes),0)
self.assertEqual(type(R.votes),type({}))
gen = Block() # genesis block
self.assertTrue(gen.data is None)
self.assertEqual(gen.parents,[])
msg = str(0) + str(None) + str([])
self.assertEqual(gen.ident, hash(msg))
block0 = gen
block1 = Block(parentsIn=[block0.ident], dataIn={"timestamp":time.time(), "txns":"pair of zircon encrusted tweezers"})
block2 = Block(parentsIn=[block1.ident], dataIn={"timestamp":time.time(), "txns":"watch out for that yellow snow"})
block3 = Block(parentsIn=[block1.ident], dataIn={"timestamp":time.time(), "txns":"he had the stank foot"})
block4 = Block(parentsIn=[block2.ident, block3.ident], dataIn={"timestamp":time.time(), "txns":"come here fido"})
block5 = Block(parentsIn=[block3.ident], dataIn={"timestamp":time.time(), "txns":"applied rotation on her sugar plum"})
block6 = Block(parentsIn=[block5.ident], dataIn={"timestamp":time.time(), "txns":"listen to frank zappa for the love of all that is good"})
R.addBlock(block0)
self.assertTrue(block0.ident in R.leaves)
self.assertTrue(block0.ident in R.roots)
R.addBlock(block1)
self.assertTrue(block1.ident in R.leaves and block0.ident not in R.leaves)
R.addBlock(block2)
self.assertTrue(block2.ident in R.leaves and block1.ident not in R.leaves)
R.addBlock(block3)
self.assertTrue(block3.ident in R.leaves and block2.ident in R.leaves and block1.ident not in R.leaves)
R.addBlock(block4)
self.assertTrue(block4.ident in R.leaves and block3.ident not in R.leaves and block2.ident not in R.leaves)
R.addBlock(block5)
self.assertTrue(block4.ident in R.leaves and block5.ident in R.leaves and block3.ident not in R.leaves)
R.addBlock(block6)
self.assertTrue(block4.ident in R.leaves and block6.ident in R.leaves and block5.ident not in R.leaves)
self.assertEqual(len(R.blocks), 7)
self.assertEqual(len(R.family), 7)
self.assertEqual(len(R.invDLL), 0)
self.assertEqual(len(R.roots), 1)
self.assertEqual(len(R.leaves),2)
self.assertEqual(R.antichainCutoff, 600)
self.assertEqual(len(R.pendingVotes),0)
self.assertEqual(len(R.votes),0)
self.assertTrue( R.hasAncestor(block6.ident, block0.ident) and not R.hasAncestor(block0.ident, block6.ident))
self.assertTrue( R.hasAncestor(block5.ident, block0.ident) and not R.hasAncestor(block0.ident, block5.ident))
self.assertTrue( R.hasAncestor(block4.ident, block0.ident) and not R.hasAncestor(block0.ident, block4.ident))
self.assertTrue( R.hasAncestor(block3.ident, block0.ident) and not R.hasAncestor(block0.ident, block3.ident))
self.assertTrue( R.hasAncestor(block2.ident, block0.ident) and not R.hasAncestor(block0.ident, block2.ident))
self.assertTrue( R.hasAncestor(block1.ident, block0.ident) and not R.hasAncestor(block0.ident, block1.ident))
self.assertTrue( R.hasAncestor(block6.ident, block1.ident) and not R.hasAncestor(block1.ident, block6.ident))
self.assertTrue( R.hasAncestor(block5.ident, block1.ident) and not R.hasAncestor(block1.ident, block5.ident))
self.assertTrue( R.hasAncestor(block4.ident, block1.ident) and not R.hasAncestor(block1.ident, block4.ident))
self.assertTrue( R.hasAncestor(block3.ident, block1.ident) and not R.hasAncestor(block1.ident, block3.ident))
self.assertTrue( R.hasAncestor(block2.ident, block1.ident) and not R.hasAncestor(block1.ident, block2.ident))
self.assertTrue(not R.hasAncestor(block0.ident, block1.ident) and R.hasAncestor(block1.ident, block0.ident))
self.assertTrue(not R.hasAncestor(block6.ident, block2.ident) and not R.hasAncestor(block2.ident, block6.ident))
self.assertTrue(not R.hasAncestor(block5.ident, block2.ident) and not R.hasAncestor(block2.ident, block5.ident))
self.assertTrue( R.hasAncestor(block4.ident, block2.ident) and not R.hasAncestor(block2.ident, block4.ident))
self.assertTrue(not R.hasAncestor(block3.ident, block2.ident) and not R.hasAncestor(block2.ident, block3.ident))
self.assertTrue(not R.hasAncestor(block1.ident, block2.ident) and R.hasAncestor(block2.ident, block1.ident))
self.assertTrue(not R.hasAncestor(block0.ident, block2.ident) and R.hasAncestor(block2.ident, block0.ident))
self.assertTrue( R.hasAncestor(block6.ident, block3.ident) and not R.hasAncestor(block3.ident, block6.ident))
self.assertTrue( R.hasAncestor(block5.ident, block3.ident) and not R.hasAncestor(block3.ident, block5.ident))
self.assertTrue( R.hasAncestor(block4.ident, block3.ident) and not R.hasAncestor(block3.ident, block4.ident))
self.assertTrue(not R.hasAncestor(block2.ident, block3.ident) and not R.hasAncestor(block3.ident, block2.ident))
self.assertTrue(not R.hasAncestor(block1.ident, block3.ident) and R.hasAncestor(block3.ident, block1.ident))
self.assertTrue(not R.hasAncestor(block0.ident, block3.ident) and R.hasAncestor(block3.ident, block0.ident))
self.assertTrue(not R.hasAncestor(block6.ident, block4.ident) and not R.hasAncestor(block4.ident, block6.ident))
self.assertTrue(not R.hasAncestor(block5.ident, block4.ident) and not R.hasAncestor(block4.ident, block5.ident))
self.assertTrue(not R.hasAncestor(block3.ident, block4.ident) and R.hasAncestor(block4.ident, block3.ident))
self.assertTrue(not R.hasAncestor(block2.ident, block4.ident) and R.hasAncestor(block4.ident, block2.ident))
self.assertTrue(not R.hasAncestor(block1.ident, block4.ident) and R.hasAncestor(block4.ident, block1.ident))
self.assertTrue(not R.hasAncestor(block0.ident, block4.ident) and R.hasAncestor(block4.ident, block0.ident))
self.assertTrue( R.hasAncestor(block6.ident, block5.ident) and not R.hasAncestor(block5.ident, block6.ident))
self.assertTrue(not R.hasAncestor(block4.ident, block5.ident) and not R.hasAncestor(block5.ident, block4.ident))
self.assertTrue(not R.hasAncestor(block3.ident, block5.ident) and R.hasAncestor(block5.ident, block3.ident))
self.assertTrue(not R.hasAncestor(block2.ident, block5.ident) and not R.hasAncestor(block5.ident, block2.ident))
self.assertTrue(not R.hasAncestor(block1.ident, block5.ident) and R.hasAncestor(block5.ident, block1.ident))
self.assertTrue(not R.hasAncestor(block0.ident, block5.ident) and R.hasAncestor(block5.ident, block0.ident))
self.assertTrue(not R.hasAncestor(block5.ident, block6.ident) and R.hasAncestor(block6.ident, block5.ident))
self.assertTrue(not R.hasAncestor(block4.ident, block6.ident) and not R.hasAncestor(block6.ident, block4.ident))
self.assertTrue(not R.hasAncestor(block3.ident, block6.ident) and R.hasAncestor(block6.ident, block3.ident))
self.assertTrue(not R.hasAncestor(block2.ident, block6.ident) and not R.hasAncestor(block6.ident, block2.ident))
self.assertTrue(not R.hasAncestor(block1.ident, block6.ident) and R.hasAncestor(block6.ident, block1.ident))
self.assertTrue(not R.hasAncestor(block0.ident, block6.ident) and R.hasAncestor(block6.ident, block0.ident))
R = R.pruneLeaves()
self.assertEqual(len(R.blocks), 5)
self.assertEqual(len(R.family), 5)
self.assertEqual(len(R.invDLL), 0)
self.assertEqual(len(R.roots), 1)
self.assertEqual(len(R.leaves),2)
self.assertEqual(R.antichainCutoff, 600)
self.assertEqual(len(R.pendingVotes),0)
self.assertEqual(len(R.votes),0)
self.assertTrue( R.hasAncestor(block5.ident, block0.ident) and not R.hasAncestor(block0.ident, block5.ident))
self.assertTrue( R.hasAncestor(block3.ident, block0.ident) and not R.hasAncestor(block0.ident, block3.ident))
self.assertTrue( R.hasAncestor(block2.ident, block0.ident) and not R.hasAncestor(block0.ident, block2.ident))
self.assertTrue( R.hasAncestor(block1.ident, block0.ident) and not R.hasAncestor(block0.ident, block1.ident))
self.assertTrue( R.hasAncestor(block5.ident, block1.ident) and not R.hasAncestor(block1.ident, block5.ident))
self.assertTrue( R.hasAncestor(block3.ident, block1.ident) and not R.hasAncestor(block1.ident, block3.ident))
self.assertTrue( R.hasAncestor(block2.ident, block1.ident) and not R.hasAncestor(block1.ident, block2.ident))
self.assertTrue(not R.hasAncestor(block0.ident, block1.ident) and R.hasAncestor(block1.ident, block0.ident))
self.assertTrue(not R.hasAncestor(block5.ident, block2.ident) and not R.hasAncestor(block2.ident, block5.ident))
self.assertTrue(not R.hasAncestor(block3.ident, block2.ident) and not R.hasAncestor(block2.ident, block3.ident))
self.assertTrue(not R.hasAncestor(block1.ident, block2.ident) and R.hasAncestor(block2.ident, block1.ident))
self.assertTrue(not R.hasAncestor(block0.ident, block2.ident) and R.hasAncestor(block2.ident, block0.ident))
self.assertTrue( R.hasAncestor(block5.ident, block3.ident) and not R.hasAncestor(block3.ident, block5.ident))
self.assertTrue(not R.hasAncestor(block2.ident, block3.ident) and not R.hasAncestor(block3.ident, block2.ident))
self.assertTrue(not R.hasAncestor(block1.ident, block3.ident) and R.hasAncestor(block3.ident, block1.ident))
self.assertTrue(not R.hasAncestor(block0.ident, block3.ident) and R.hasAncestor(block3.ident, block0.ident))
self.assertTrue(not R.hasAncestor(block3.ident, block5.ident) and R.hasAncestor(block5.ident, block3.ident))
self.assertTrue(not R.hasAncestor(block2.ident, block5.ident) and not R.hasAncestor(block5.ident, block2.ident))
self.assertTrue(not R.hasAncestor(block1.ident, block5.ident) and R.hasAncestor(block5.ident, block1.ident))
self.assertTrue(not R.hasAncestor(block0.ident, block5.ident) and R.hasAncestor(block5.ident, block0.ident))
## Formal unit tests for leafBackAntichain() to follow: visual inspection reveals this does what it says on the tin.
#R.vote()
#print(R.votes)
def test_big_bertha(self):
R = BlockHandler()
gen = Block() # genesis block
msg = str(0) + str(None) + str([])
block0 = gen
block1 = Block(parentsIn=[block0.ident], dataIn={"timestamp":time.time(), "txns":"pair of zircon encrusted tweezers"})
block2 = Block(parentsIn=[block1.ident], dataIn={"timestamp":time.time(), "txns":"watch out for that yellow snow"})
block3 = Block(parentsIn=[block1.ident], dataIn={"timestamp":time.time(), "txns":"he had the stank foot"})
block4 = Block(parentsIn=[block2.ident, block3.ident], dataIn={"timestamp":time.time(), "txns":"come here fido"})
block5 = Block(parentsIn=[block3.ident], dataIn={"timestamp":time.time(), "txns":"applied rotation on her sugar plum"})
block6 = Block(parentsIn=[block5.ident], dataIn={"timestamp":time.time(), "txns":"listen to frank zappa for the love of all that is good"})
R.addBlock(block0)
R.addBlock(block1)
R.addBlock(block2)
R.addBlock(block3)
R.addBlock(block4)
R.addBlock(block5)
R.addBlock(block6)
names = {0:block0.ident, 1:block1.ident, 2:block2.ident, 3:block3.ident, 4:block4.ident, 5:block5.ident, 6:block6.ident}
# Testing voteFor
# Verify all roots have children
for rid in R.roots:
self.assertFalse(len(R.family[rid]["children"])==0)
# Verify that all children of all roots have children and collect grandchildren idents
gc = []
for rid in R.roots:
for cid in R.family[rid]["children"]:
self.assertFalse(len(R.family[cid]["children"]) == 0)
gc = gc + R.family[cid]["children"]
# Pick a random grandchild of the root.
gcid = random.choice(gc)
# Pick a random block with gcid in its past
vid = random.choice(list(R.blocks.keys()))
while(not R.hasAncestor(vid, gcid)):
vid = random.choice(list(R.blocks.keys()))
# Pick a random pair of blocks for gcid and vid to vote on.
xid = random.choice(list(R.blocks.keys()))
yid = random.choice(list(R.blocks.keys()))
# Have vid cast vote that xid < yid
R.voteFor((vid,xid,yid),{})
# Verify that R.votes has correct entries
self.assertEqual(R.votes[(vid,xid,yid)], 1)
self.assertEqual(R.votes[(vid,yid,xid)],-1)
# Check that for each ancestor of vid, that they received an appropriate pending vote
q = deque()
for pid in R.blocks[vid].parents:
if pid in R.vids:
q.append(pid)
while(len(q)>0):
wid = q.popleft()
self.assertEqual(R.pendingVotes[(wid,xid,yid)],1)
for pid in R.blocks[wid].parents:
if pid in R.vids:
q.append(pid)
# Now we are going to mess around with how voting at gcid interacts with the above.
# First, we let gcid cast a vote that xid < yid and check that it propagates appropriately as above.
R.voteFor((gcid,xid,yid),{})
self.assertEqual(R.votes[(gcid,xid,yid)],1)
self.assertEqual(R.votes[(gcid,yid,xid)],-1)
for pid in R.blocks[gcid].parents:
if pid in R.vids:
q.append(pid)
while(len(q)>0):
wid = q.popleft()
self.assertEqual(R.pendingVotes[(wid,xid,yid)],2)
self.assertEqual(R.pendingVotes[(wid,yid,xid)],-2)
for pid in R.blocks[wid].parents:
if pid in R.vids:
q.append(pid)
# Now we are going to have gcid cast the opposite vote. this should change what is stored in R.votes
# but also change pending votes below gcid
R.voteFor((gcid,yid,xid),{})
self.assertEqual(R.votes[(gcid,xid,yid)],-1)
self.assertEqual(R.votes[(gcid,yid,xid)],1)
for pid in R.blocks[gcid].parents:
if pid in R.vids:
q.append(pid)
while(len(q)>0):
wid = q.popleft()
self.assertEqual(R.pendingVotes[(wid,xid,yid)],0)
self.assertEqual(R.pendingVotes[(wid,yid,xid)],0)
for pid in R.blocks[wid].parents:
if pid in R.vids:
q.append(pid)
# Do again, now pending votes should be negative
R.voteFor((gcid,yid,xid),{})
self.assertEqual(R.votes[(gcid,xid,yid)],-1)
self.assertEqual(R.votes[(gcid,yid,xid)],1)
for pid in R.blocks[gcid].parents:
if pid in R.vids:
q.append(pid)
while(len(q)>0):
wid = q.popleft()
self.assertEqual(R.pendingVotes[(wid,xid,yid)],-1)
self.assertEqual(R.pendingVotes[(wid,yid,xid)],1)
for pid in R.blocks[wid].parents:
if pid in R.vids:
q.append(pid)
# Test sumPendingVotes
R.sumPendingVote(gcid, {})
self.assertTrue((gcid,xid,yid) in R.votes)
self.assertTrue((gcid,yid,xid) in R.votes)
self.assertEqual(R.votes[(gcid,xid,yid)],-1)
self.assertEqual(R.votes[(gcid,yid,xid)],1)
touched = R.vote()
print("\n ==== ==== ==== ==== ==== ==== ==== ==== ==== ==== ==== ==== ==== ==== ==== ====")
print("Antichain layers:\n")
for layer in R.antichains:
print(layer)
print("\n ==== ==== ==== ==== ==== ==== ==== ==== ==== ==== ==== ==== ==== ==== ==== ====")
for key in R.votes:
print("key = ", key, ", vote = ", R.votes[key])
print("\n ==== ==== ==== ==== ==== ==== ==== ==== ==== ==== ==== ==== ==== ==== ==== ====")
for key in R.totalVotes:
print("key = ", key, ", vote = ", R.totalVotes[key])
print("\n ==== ==== ==== ==== ==== ==== ==== ==== ==== ==== ==== ==== ==== ==== ==== ====")
self.assertTrue((names[0], names[1]) in R.totalVotes and (names[1], names[0]) in R.totalVotes)
self.assertTrue(R.totalVotes[(names[0], names[1])] and not R.totalVotes[(names[1], names[0])])
self.assertTrue((names[0], names[2]) in R.totalVotes and (names[2], names[0]) in R.totalVotes)
self.assertTrue(R.totalVotes[(names[0], names[2])] and not R.totalVotes[(names[2], names[0])])
self.assertTrue((names[0], names[3]) in R.totalVotes and (names[3], names[0]) in R.totalVotes)
self.assertTrue(R.totalVotes[(names[0], names[3])] and not R.totalVotes[(names[3], names[0])])
self.assertTrue((names[0], names[4]) in R.totalVotes and (names[4], names[0]) in R.totalVotes)
self.assertTrue(R.totalVotes[(names[0], names[4])] and not R.totalVotes[(names[4], names[0])])
self.assertTrue((names[0], names[5]) in R.totalVotes and (names[5], names[0]) in R.totalVotes)
self.assertTrue(R.totalVotes[(names[0], names[5])] and not R.totalVotes[(names[5], names[0])])
self.assertTrue((names[0], names[6]) in R.totalVotes and (names[6], names[0]) in R.totalVotes)
self.assertTrue(R.totalVotes[(names[0], names[6])] and not R.totalVotes[(names[6], names[0])])
#### #### #### ####
self.assertTrue((names[1], names[2]) in R.totalVotes and (names[2], names[1]) in R.totalVotes)
self.assertTrue(R.totalVotes[(names[1], names[2])] and not R.totalVotes[(names[2], names[1])])
self.assertTrue((names[1], names[3]) in R.totalVotes and (names[3], names[1]) in R.totalVotes)
self.assertTrue(R.totalVotes[(names[1], names[3])] and not R.totalVotes[(names[3], names[1])])
self.assertTrue((names[1], names[4]) in R.totalVotes and (names[4], names[1]) in R.totalVotes)
self.assertTrue(R.totalVotes[(names[1], names[4])] and not R.totalVotes[(names[4], names[1])])
self.assertTrue((names[1], names[5]) in R.totalVotes and (names[5], names[1]) in R.totalVotes)
self.assertTrue(R.totalVotes[(names[1], names[5])] and not R.totalVotes[(names[5], names[1])])
self.assertTrue((names[1], names[6]) in R.totalVotes and (names[6], names[1]) in R.totalVotes)
self.assertTrue(R.totalVotes[(names[1], names[6])] and not R.totalVotes[(names[6], names[1])])
#### #### #### ####
self.assertTrue((names[2], names[3]) in R.totalVotes and (names[3], names[2]) in R.totalVotes)
self.assertTrue(not R.totalVotes[(names[2], names[3])] and R.totalVotes[(names[3], names[2])])
self.assertTrue((names[2], names[4]) in R.totalVotes and (names[4], names[2]) in R.totalVotes)
self.assertTrue(R.totalVotes[(names[2], names[4])] and not R.totalVotes[(names[4], names[2])])
self.assertTrue((names[2], names[5]) in R.totalVotes and (names[5], names[2]) in R.totalVotes)
self.assertTrue(not R.totalVotes[(names[2], names[5])] and R.totalVotes[(names[5], names[2])])
self.assertTrue((names[2], names[6]) in R.totalVotes and (names[6], names[2]) in R.totalVotes)
#print("2,6 ", R.totalVotes[(names[2], names[6])])
#print("6,2 ", R.totalVotes[(names[6], names[2])])
self.assertTrue(not R.totalVotes[(names[2], names[6])] and R.totalVotes[(names[6], names[2])])
#### #### #### ####
self.assertTrue((names[3], names[4]) in R.totalVotes and (names[4], names[3]) in R.totalVotes)
self.assertTrue(R.totalVotes[(names[3], names[4])] and not R.totalVotes[(names[4], names[3])])
self.assertTrue((names[3], names[5]) in R.totalVotes and (names[5], names[3]) in R.totalVotes)
self.assertTrue(R.totalVotes[(names[3], names[5])] and not R.totalVotes[(names[5], names[3])])
self.assertTrue((names[3], names[6]) in R.totalVotes and (names[6], names[3]) in R.totalVotes)
self.assertTrue(R.totalVotes[(names[3], names[6])] and not R.totalVotes[(names[6], names[3])])
#### #### #### ####
self.assertTrue((names[4], names[5]) in R.totalVotes and (names[5], names[4]) in R.totalVotes)
self.assertTrue(not R.totalVotes[(names[4], names[5])] and R.totalVotes[(names[5], names[4])])
self.assertTrue((names[4], names[6]) in R.totalVotes and (names[6], names[4]) in R.totalVotes)
self.assertTrue(not R.totalVotes[(names[4], names[6])] and R.totalVotes[(names[6], names[4])])
#### #### #### ####
self.assertTrue((names[5], names[6]) in R.totalVotes and (names[6], names[5]) in R.totalVotes)
self.assertTrue(R.totalVotes[(names[5], names[6])] and not R.totalVotes[(names[6], names[5])])
#print(R.votes)
suite = unittest.TestLoader().loadTestsFromTestCase(Test_BlockHandler)
unittest.TextTestRunner(verbosity=1).run(suite)


@ -0,0 +1,372 @@
// NOTE: this interchanges the roles of G and H to match other code's behavior
package how.monero.hodl.bulletproof;
import how.monero.hodl.crypto.Curve25519Point;
import how.monero.hodl.crypto.Scalar;
import how.monero.hodl.crypto.CryptoUtil;
import how.monero.hodl.util.ByteUtil;
import java.math.BigInteger;
import how.monero.hodl.util.VarInt;
import java.util.Random;
import static how.monero.hodl.crypto.Scalar.randomScalar;
import static how.monero.hodl.crypto.CryptoUtil.*;
import static how.monero.hodl.util.ByteUtil.*;
public class LinearBulletproof
{
private static int N;
private static Curve25519Point G;
private static Curve25519Point H;
private static Curve25519Point[] Gi;
private static Curve25519Point[] Hi;
public static class ProofTuple
{
private Curve25519Point V;
private Curve25519Point A;
private Curve25519Point S;
private Curve25519Point T1;
private Curve25519Point T2;
private Scalar taux;
private Scalar mu;
private Scalar[] l;
private Scalar[] r;
public ProofTuple(Curve25519Point V, Curve25519Point A, Curve25519Point S, Curve25519Point T1, Curve25519Point T2, Scalar taux, Scalar mu, Scalar[] l, Scalar[] r)
{
this.V = V;
this.A = A;
this.S = S;
this.T1 = T1;
this.T2 = T2;
this.taux = taux;
this.mu = mu;
this.l = l;
this.r = r;
}
}
/* Given two scalar arrays, construct a vector commitment */
public static Curve25519Point VectorExponent(Scalar[] a, Scalar[] b)
{
Curve25519Point Result = Curve25519Point.ZERO;
for (int i = 0; i < N; i++)
{
Result = Result.add(Gi[i].scalarMultiply(a[i]));
Result = Result.add(Hi[i].scalarMultiply(b[i]));
}
return Result;
}
/* Given a scalar, construct a vector of powers */
public static Scalar[] VectorPowers(Scalar x)
{
Scalar[] result = new Scalar[N];
for (int i = 0; i < N; i++)
{
result[i] = x.pow(i);
}
return result;
}
/* Given two scalar arrays, construct the inner product */
public static Scalar InnerProduct(Scalar[] a, Scalar[] b)
{
Scalar result = Scalar.ZERO;
for (int i = 0; i < N; i++)
{
result = result.add(a[i].mul(b[i]));
}
return result;
}
/* Given two scalar arrays, construct the Hadamard product */
public static Scalar[] Hadamard(Scalar[] a, Scalar[] b)
{
Scalar[] result = new Scalar[N];
for (int i = 0; i < N; i++)
{
result[i] = a[i].mul(b[i]);
}
return result;
}
/* Add two vectors */
public static Scalar[] VectorAdd(Scalar[] a, Scalar[] b)
{
Scalar[] result = new Scalar[N];
for (int i = 0; i < N; i++)
{
result[i] = a[i].add(b[i]);
}
return result;
}
/* Subtract two vectors */
public static Scalar[] VectorSubtract(Scalar[] a, Scalar[] b)
{
Scalar[] result = new Scalar[N];
for (int i = 0; i < N; i++)
{
result[i] = a[i].sub(b[i]);
}
return result;
}
/* Multiply a scalar and a vector */
public static Scalar[] VectorScalar(Scalar[] a, Scalar x)
{
Scalar[] result = new Scalar[N];
for (int i = 0; i < N; i++)
{
result[i] = a[i].mul(x);
}
return result;
}
/* Compute the inverse of a scalar, the stupid way */
public static Scalar Invert(Scalar x)
{
Scalar inverse = new Scalar(x.toBigInteger().modInverse(CryptoUtil.l));
assert x.mul(inverse).equals(Scalar.ONE);
return inverse;
}
/* Compute the value of k(y,z) */
public static Scalar ComputeK(Scalar y, Scalar z)
{
Scalar result = Scalar.ZERO;
result = result.sub(z.sq().mul(InnerProduct(VectorPowers(Scalar.ONE),VectorPowers(y))));
result = result.sub(z.pow(3).mul(InnerProduct(VectorPowers(Scalar.ONE),VectorPowers(Scalar.TWO))));
return result;
}
/* Given a value v (0..2^N-1) and a mask gamma, construct a range proof */
public static ProofTuple PROVE(Scalar v, Scalar gamma)
{
Curve25519Point V = H.scalarMultiply(v).add(G.scalarMultiply(gamma));
// This hash is updated for Fiat-Shamir throughout the proof
Scalar hashCache = hashToScalar(V.toBytes());
// PAPER LINES 36-37
Scalar[] aL = new Scalar[N];
Scalar[] aR = new Scalar[N];
BigInteger tempV = v.toBigInteger();
for (int i = N-1; i >= 0; i--)
{
BigInteger basePow = BigInteger.valueOf(2).pow(i);
if (tempV.divide(basePow).equals(BigInteger.ZERO))
{
aL[i] = Scalar.ZERO;
}
else
{
aL[i] = Scalar.ONE;
tempV = tempV.subtract(basePow);
}
aR[i] = aL[i].sub(Scalar.ONE);
}
// DEBUG: Test to ensure this recovers the value
BigInteger test_aL = BigInteger.ZERO;
BigInteger test_aR = BigInteger.ZERO;
for (int i = 0; i < N; i++)
{
if (aL[i].equals(Scalar.ONE))
test_aL = test_aL.add(BigInteger.valueOf(2).pow(i));
if (aR[i].equals(Scalar.ZERO))
test_aR = test_aR.add(BigInteger.valueOf(2).pow(i));
}
assert test_aL.equals(v.toBigInteger());
assert test_aR.equals(v.toBigInteger());
// PAPER LINES 38-39
Scalar alpha = randomScalar();
Curve25519Point A = VectorExponent(aL,aR).add(G.scalarMultiply(alpha));
// PAPER LINES 40-42
Scalar[] sL = new Scalar[N];
Scalar[] sR = new Scalar[N];
for (int i = 0; i < N; i++)
{
sL[i] = randomScalar();
sR[i] = randomScalar();
}
Scalar rho = randomScalar();
Curve25519Point S = VectorExponent(sL,sR).add(G.scalarMultiply(rho));
// PAPER LINES 43-45
hashCache = hashToScalar(concat(hashCache.bytes,A.toBytes()));
hashCache = hashToScalar(concat(hashCache.bytes,S.toBytes()));
Scalar y = hashCache;
hashCache = hashToScalar(hashCache.bytes);
Scalar z = hashCache;
// Polynomial construction before PAPER LINE 46
Scalar t0 = Scalar.ZERO;
Scalar t1 = Scalar.ZERO;
Scalar t2 = Scalar.ZERO;
t0 = t0.add(z.mul(InnerProduct(VectorPowers(Scalar.ONE),VectorPowers(y))));
t0 = t0.add(z.sq().mul(v));
Scalar k = ComputeK(y,z);
t0 = t0.add(k);
// DEBUG: Test the value of t0 has the correct form
Scalar test_t0 = Scalar.ZERO;
test_t0 = test_t0.add(InnerProduct(aL,Hadamard(aR,VectorPowers(y))));
test_t0 = test_t0.add(z.mul(InnerProduct(VectorSubtract(aL,aR),VectorPowers(y))));
test_t0 = test_t0.add(z.sq().mul(InnerProduct(VectorPowers(Scalar.TWO),aL)));
test_t0 = test_t0.add(k);
assert test_t0.equals(t0);
t1 = t1.add(InnerProduct(VectorSubtract(aL,VectorScalar(VectorPowers(Scalar.ONE),z)),Hadamard(VectorPowers(y),sR)));
t1 = t1.add(InnerProduct(sL,VectorAdd(Hadamard(VectorPowers(y),VectorAdd(aR,VectorScalar(VectorPowers(Scalar.ONE),z))),VectorScalar(VectorPowers(Scalar.TWO),z.sq()))));
t2 = t2.add(InnerProduct(sL,Hadamard(VectorPowers(y),sR)));
// PAPER LINES 47-48
Scalar tau1 = randomScalar();
Scalar tau2 = randomScalar();
Curve25519Point T1 = H.scalarMultiply(t1).add(G.scalarMultiply(tau1));
Curve25519Point T2 = H.scalarMultiply(t2).add(G.scalarMultiply(tau2));
// PAPER LINES 49-51
hashCache = hashToScalar(concat(hashCache.bytes,z.bytes));
hashCache = hashToScalar(concat(hashCache.bytes,T1.toBytes()));
hashCache = hashToScalar(concat(hashCache.bytes,T2.toBytes()));
Scalar x = hashCache;
// PAPER LINES 52-53
Scalar taux = tau1.mul(x);
taux = taux.add(tau2.mul(x.sq()));
taux = taux.add(gamma.mul(z.sq()));
Scalar mu = x.mul(rho).add(alpha);
// PAPER LINES 54-57
Scalar[] l = new Scalar[N];
Scalar[] r = new Scalar[N];
l = VectorAdd(VectorSubtract(aL,VectorScalar(VectorPowers(Scalar.ONE),z)),VectorScalar(sL,x));
r = VectorAdd(Hadamard(VectorPowers(y),VectorAdd(aR,VectorAdd(VectorScalar(VectorPowers(Scalar.ONE),z),VectorScalar(sR,x)))),VectorScalar(VectorPowers(Scalar.TWO),z.sq()));
// DEBUG: Test if the l and r vectors match the polynomial forms
Scalar test_t = Scalar.ZERO;
test_t = test_t.add(t0).add(t1.mul(x));
test_t = test_t.add(t2.mul(x.sq()));
assert test_t.equals(InnerProduct(l,r));
// PAPER LINE 58
return new ProofTuple(V,A,S,T1,T2,taux,mu,l,r);
}
/* Given a range proof, determine if it is valid */
public static boolean VERIFY(ProofTuple proof)
{
// Reconstruct the challenges
Scalar hashCache = hashToScalar(proof.V.toBytes());
hashCache = hashToScalar(concat(hashCache.bytes,proof.A.toBytes()));
hashCache = hashToScalar(concat(hashCache.bytes,proof.S.toBytes()));
Scalar y = hashCache;
hashCache = hashToScalar(hashCache.bytes);
Scalar z = hashCache;
hashCache = hashToScalar(concat(hashCache.bytes,z.bytes));
hashCache = hashToScalar(concat(hashCache.bytes,proof.T1.toBytes()));
hashCache = hashToScalar(concat(hashCache.bytes,proof.T2.toBytes()));
Scalar x = hashCache;
// PAPER LINE 60
Scalar t = InnerProduct(proof.l,proof.r);
// PAPER LINE 61
Curve25519Point L61Left = G.scalarMultiply(proof.taux).add(H.scalarMultiply(t));
Scalar k = ComputeK(y,z);
Curve25519Point L61Right = H.scalarMultiply(k.add(z.mul(InnerProduct(VectorPowers(Scalar.ONE),VectorPowers(y)))));
L61Right = L61Right.add(proof.V.scalarMultiply(z.sq()));
L61Right = L61Right.add(proof.T1.scalarMultiply(x));
L61Right = L61Right.add(proof.T2.scalarMultiply(x.sq()));
if (!L61Right.equals(L61Left))
{
return false;
}
// PAPER LINE 62
Curve25519Point P = Curve25519Point.ZERO;
P = P.add(proof.A);
P = P.add(proof.S.scalarMultiply(x));
Scalar[] Gexp = new Scalar[N];
for (int i = 0; i < N; i++)
Gexp[i] = Scalar.ZERO.sub(z);
Scalar[] Hexp = new Scalar[N];
for (int i = 0; i < N; i++)
{
Hexp[i] = Scalar.ZERO;
Hexp[i] = Hexp[i].add(z.mul(y.pow(i)));
Hexp[i] = Hexp[i].add(z.sq().mul(Scalar.TWO.pow(i)));
Hexp[i] = Hexp[i].mul(Invert(y).pow(i));
}
P = P.add(VectorExponent(Gexp,Hexp));
// PAPER LINE 63
for (int i = 0; i < N; i++)
{
Hexp[i] = Scalar.ZERO;
Hexp[i] = Hexp[i].add(proof.r[i]);
Hexp[i] = Hexp[i].mul(Invert(y).pow(i));
}
Curve25519Point L63Right = VectorExponent(proof.l,Hexp).add(G.scalarMultiply(proof.mu));
if (!L63Right.equals(P))
{
return false;
}
return true;
}
public static void main(String[] args)
{
// Number of bits in the range
N = 64;
// Set the curve base points
G = Curve25519Point.G;
H = Curve25519Point.hashToPoint(G);
Gi = new Curve25519Point[N];
Hi = new Curve25519Point[N];
for (int i = 0; i < N; i++)
{
Gi[i] = getHpnGLookup(2*i);
Hi[i] = getHpnGLookup(2*i+1);
}
// Run a bunch of randomized trials
Random rando = new Random();
int TRIALS = 250;
int count = 0;
while (count < TRIALS)
{
long amount = rando.nextLong();
// note: the upper-bound comparison runs in double precision; for N = 64
// every nonnegative long is in range anyway, so this only rejects negatives
if (amount > Math.pow(2,N)-1 || amount < 0)
continue;
ProofTuple proof = PROVE(new Scalar(BigInteger.valueOf(amount)),randomScalar());
if (!VERIFY(proof))
System.out.println("Test failed");
count += 1;
}
}
}
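The most-significant-bit-first decomposition in PROVE (and its DEBUG recovery check) can be exercised in isolation with plain BigInteger arithmetic. The sketch below mirrors the same loop; the class and method names are illustrative only and are not part of this codebase, which works with the project's Scalar type instead of raw integers.

```java
import java.math.BigInteger;

public class BitDecompositionDemo
{
    // Decompose v into N bits, scanning from the most significant bit,
    // exactly as the aL construction in PROVE does
    public static int[] decompose(BigInteger v, int N)
    {
        int[] aL = new int[N];
        BigInteger tempV = v;
        for (int i = N-1; i >= 0; i--)
        {
            BigInteger basePow = BigInteger.valueOf(2).pow(i);
            if (tempV.divide(basePow).equals(BigInteger.ZERO))
            {
                aL[i] = 0;
            }
            else
            {
                aL[i] = 1;
                tempV = tempV.subtract(basePow);
            }
        }
        return aL;
    }

    // Recover v = sum_i aL[i] * 2^i, as in the DEBUG check
    public static BigInteger recover(int[] aL)
    {
        BigInteger v = BigInteger.ZERO;
        for (int i = 0; i < aL.length; i++)
        {
            if (aL[i] == 1)
                v = v.add(BigInteger.valueOf(2).pow(i));
        }
        return v;
    }

    public static void main(String[] args)
    {
        BigInteger v = BigInteger.valueOf(123456789L);
        System.out.println(recover(decompose(v, 64)).equals(v)); // prints true
    }
}
```

The repeated divide/subtract could equally be written with `BigInteger.testBit(i)`; the explicit form above is kept only to match the proof code line for line.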


@@ -0,0 +1,532 @@
// NOTE: this interchanges the roles of G and H to match other code's behavior
package how.monero.hodl.bulletproof;
import how.monero.hodl.crypto.Curve25519Point;
import how.monero.hodl.crypto.Scalar;
import how.monero.hodl.crypto.CryptoUtil;
import java.math.BigInteger;
import java.util.Random;
import static how.monero.hodl.crypto.Scalar.randomScalar;
import static how.monero.hodl.crypto.CryptoUtil.*;
import static how.monero.hodl.util.ByteUtil.*;
public class LogBulletproof
{
private static int N;
private static int logN;
private static Curve25519Point G;
private static Curve25519Point H;
private static Curve25519Point[] Gi;
private static Curve25519Point[] Hi;
public static class ProofTuple
{
private Curve25519Point V;
private Curve25519Point A;
private Curve25519Point S;
private Curve25519Point T1;
private Curve25519Point T2;
private Scalar taux;
private Scalar mu;
private Curve25519Point[] L;
private Curve25519Point[] R;
private Scalar a;
private Scalar b;
private Scalar t;
public ProofTuple(Curve25519Point V, Curve25519Point A, Curve25519Point S, Curve25519Point T1, Curve25519Point T2, Scalar taux, Scalar mu, Curve25519Point[] L, Curve25519Point[] R, Scalar a, Scalar b, Scalar t)
{
this.V = V;
this.A = A;
this.S = S;
this.T1 = T1;
this.T2 = T2;
this.taux = taux;
this.mu = mu;
this.L = L;
this.R = R;
this.a = a;
this.b = b;
this.t = t;
}
}
/* Given two scalar arrays, construct a vector commitment */
public static Curve25519Point VectorExponent(Scalar[] a, Scalar[] b)
{
assert a.length == N && b.length == N;
Curve25519Point Result = Curve25519Point.ZERO;
for (int i = 0; i < N; i++)
{
Result = Result.add(Gi[i].scalarMultiply(a[i]));
Result = Result.add(Hi[i].scalarMultiply(b[i]));
}
return Result;
}
/* Compute a custom vector-scalar commitment */
public static Curve25519Point VectorExponentCustom(Curve25519Point[] A, Curve25519Point[] B, Scalar[] a, Scalar[] b)
{
assert a.length == A.length && b.length == B.length && a.length == b.length;
Curve25519Point Result = Curve25519Point.ZERO;
for (int i = 0; i < a.length; i++)
{
Result = Result.add(A[i].scalarMultiply(a[i]));
Result = Result.add(B[i].scalarMultiply(b[i]));
}
return Result;
}
/* Given a scalar, construct a vector of powers */
public static Scalar[] VectorPowers(Scalar x)
{
Scalar[] result = new Scalar[N];
for (int i = 0; i < N; i++)
{
result[i] = x.pow(i);
}
return result;
}
/* Given two scalar arrays, construct the inner product */
public static Scalar InnerProduct(Scalar[] a, Scalar[] b)
{
assert a.length == b.length;
Scalar result = Scalar.ZERO;
for (int i = 0; i < a.length; i++)
{
result = result.add(a[i].mul(b[i]));
}
return result;
}
/* Given two scalar arrays, construct the Hadamard product */
public static Scalar[] Hadamard(Scalar[] a, Scalar[] b)
{
assert a.length == b.length;
Scalar[] result = new Scalar[a.length];
for (int i = 0; i < a.length; i++)
{
result[i] = a[i].mul(b[i]);
}
return result;
}
/* Given two curvepoint arrays, construct the Hadamard product */
public static Curve25519Point[] Hadamard2(Curve25519Point[] A, Curve25519Point[] B)
{
assert A.length == B.length;
Curve25519Point[] Result = new Curve25519Point[A.length];
for (int i = 0; i < A.length; i++)
{
Result[i] = A[i].add(B[i]);
}
return Result;
}
/* Add two vectors */
public static Scalar[] VectorAdd(Scalar[] a, Scalar[] b)
{
assert a.length == b.length;
Scalar[] result = new Scalar[a.length];
for (int i = 0; i < a.length; i++)
{
result[i] = a[i].add(b[i]);
}
return result;
}
/* Subtract two vectors */
public static Scalar[] VectorSubtract(Scalar[] a, Scalar[] b)
{
assert a.length == b.length;
Scalar[] result = new Scalar[a.length];
for (int i = 0; i < a.length; i++)
{
result[i] = a[i].sub(b[i]);
}
return result;
}
/* Multiply a scalar and a vector */
public static Scalar[] VectorScalar(Scalar[] a, Scalar x)
{
Scalar[] result = new Scalar[a.length];
for (int i = 0; i < a.length; i++)
{
result[i] = a[i].mul(x);
}
return result;
}
/* Exponentiate a curve vector by a scalar */
public static Curve25519Point[] VectorScalar2(Curve25519Point[] A, Scalar x)
{
Curve25519Point[] Result = new Curve25519Point[A.length];
for (int i = 0; i < A.length; i++)
{
Result[i] = A[i].scalarMultiply(x);
}
return Result;
}
/* Compute the inverse of a scalar, the stupid way */
public static Scalar Invert(Scalar x)
{
Scalar inverse = new Scalar(x.toBigInteger().modInverse(CryptoUtil.l));
assert x.mul(inverse).equals(Scalar.ONE);
return inverse;
}
/* Compute the slice of a curvepoint vector */
public static Curve25519Point[] CurveSlice(Curve25519Point[] a, int start, int stop)
{
Curve25519Point[] Result = new Curve25519Point[stop-start];
for (int i = start; i < stop; i++)
{
Result[i-start] = a[i];
}
return Result;
}
/* Compute the slice of a scalar vector */
public static Scalar[] ScalarSlice(Scalar[] a, int start, int stop)
{
Scalar[] result = new Scalar[stop-start];
for (int i = start; i < stop; i++)
{
result[i-start] = a[i];
}
return result;
}
/* Compute the value of k(y,z) */
public static Scalar ComputeK(Scalar y, Scalar z)
{
Scalar result = Scalar.ZERO;
result = result.sub(z.sq().mul(InnerProduct(VectorPowers(Scalar.ONE),VectorPowers(y))));
result = result.sub(z.pow(3).mul(InnerProduct(VectorPowers(Scalar.ONE),VectorPowers(Scalar.TWO))));
return result;
}
/* Given a value v (0..2^N-1) and a mask gamma, construct a range proof */
public static ProofTuple PROVE(Scalar v, Scalar gamma)
{
Curve25519Point V = H.scalarMultiply(v).add(G.scalarMultiply(gamma));
// This hash is updated for Fiat-Shamir throughout the proof
Scalar hashCache = hashToScalar(V.toBytes());
// PAPER LINES 36-37
Scalar[] aL = new Scalar[N];
Scalar[] aR = new Scalar[N];
BigInteger tempV = v.toBigInteger();
for (int i = N-1; i >= 0; i--)
{
BigInteger basePow = BigInteger.valueOf(2).pow(i);
if (tempV.divide(basePow).equals(BigInteger.ZERO))
{
aL[i] = Scalar.ZERO;
}
else
{
aL[i] = Scalar.ONE;
tempV = tempV.subtract(basePow);
}
aR[i] = aL[i].sub(Scalar.ONE);
}
// PAPER LINES 38-39
Scalar alpha = randomScalar();
Curve25519Point A = VectorExponent(aL,aR).add(G.scalarMultiply(alpha));
// PAPER LINES 40-42
Scalar[] sL = new Scalar[N];
Scalar[] sR = new Scalar[N];
for (int i = 0; i < N; i++)
{
sL[i] = randomScalar();
sR[i] = randomScalar();
}
Scalar rho = randomScalar();
Curve25519Point S = VectorExponent(sL,sR).add(G.scalarMultiply(rho));
// PAPER LINES 43-45
hashCache = hashToScalar(concat(hashCache.bytes,A.toBytes()));
hashCache = hashToScalar(concat(hashCache.bytes,S.toBytes()));
Scalar y = hashCache;
hashCache = hashToScalar(hashCache.bytes);
Scalar z = hashCache;
// Polynomial construction before PAPER LINE 46
Scalar t0 = Scalar.ZERO;
Scalar t1 = Scalar.ZERO;
Scalar t2 = Scalar.ZERO;
t0 = t0.add(z.mul(InnerProduct(VectorPowers(Scalar.ONE),VectorPowers(y))));
t0 = t0.add(z.sq().mul(v));
Scalar k = ComputeK(y,z);
t0 = t0.add(k);
t1 = t1.add(InnerProduct(VectorSubtract(aL,VectorScalar(VectorPowers(Scalar.ONE),z)),Hadamard(VectorPowers(y),sR)));
t1 = t1.add(InnerProduct(sL,VectorAdd(Hadamard(VectorPowers(y),VectorAdd(aR,VectorScalar(VectorPowers(Scalar.ONE),z))),VectorScalar(VectorPowers(Scalar.TWO),z.sq()))));
t2 = t2.add(InnerProduct(sL,Hadamard(VectorPowers(y),sR)));
// PAPER LINES 47-48
Scalar tau1 = randomScalar();
Scalar tau2 = randomScalar();
Curve25519Point T1 = H.scalarMultiply(t1).add(G.scalarMultiply(tau1));
Curve25519Point T2 = H.scalarMultiply(t2).add(G.scalarMultiply(tau2));
// PAPER LINES 49-51
hashCache = hashToScalar(concat(hashCache.bytes,z.bytes));
hashCache = hashToScalar(concat(hashCache.bytes,T1.toBytes()));
hashCache = hashToScalar(concat(hashCache.bytes,T2.toBytes()));
Scalar x = hashCache;
// PAPER LINES 52-53
Scalar taux = tau1.mul(x);
taux = taux.add(tau2.mul(x.sq()));
taux = taux.add(gamma.mul(z.sq()));
Scalar mu = x.mul(rho).add(alpha);
// PAPER LINES 54-57
Scalar[] l = new Scalar[N];
Scalar[] r = new Scalar[N];
l = VectorAdd(VectorSubtract(aL,VectorScalar(VectorPowers(Scalar.ONE),z)),VectorScalar(sL,x));
r = VectorAdd(Hadamard(VectorPowers(y),VectorAdd(aR,VectorAdd(VectorScalar(VectorPowers(Scalar.ONE),z),VectorScalar(sR,x)))),VectorScalar(VectorPowers(Scalar.TWO),z.sq()));
Scalar t = InnerProduct(l,r);
// PAPER LINES 32-33
hashCache = hashToScalar(concat(hashCache.bytes,x.bytes));
hashCache = hashToScalar(concat(hashCache.bytes,taux.bytes));
hashCache = hashToScalar(concat(hashCache.bytes,mu.bytes));
hashCache = hashToScalar(concat(hashCache.bytes,t.bytes));
Scalar x_ip = hashCache;
// These are used in the inner product rounds
int nprime = N;
Curve25519Point[] Gprime = new Curve25519Point[N];
Curve25519Point[] Hprime = new Curve25519Point[N];
Scalar[] aprime = new Scalar[N];
Scalar[] bprime = new Scalar[N];
for (int i = 0; i < N; i++)
{
Gprime[i] = Gi[i];
Hprime[i] = Hi[i].scalarMultiply(Invert(y).pow(i));
aprime[i] = l[i];
bprime[i] = r[i];
}
Curve25519Point[] L = new Curve25519Point[logN];
Curve25519Point[] R = new Curve25519Point[logN];
int round = 0; // track the index based on number of rounds
Scalar[] w = new Scalar[logN]; // this is the challenge x in the inner product protocol
// PAPER LINE 13
while (nprime > 1)
{
// PAPER LINE 15
nprime /= 2;
// PAPER LINES 16-17
Scalar cL = InnerProduct(ScalarSlice(aprime,0,nprime),ScalarSlice(bprime,nprime,bprime.length));
Scalar cR = InnerProduct(ScalarSlice(aprime,nprime,aprime.length),ScalarSlice(bprime,0,nprime));
// PAPER LINES 18-19
L[round] = VectorExponentCustom(CurveSlice(Gprime,nprime,Gprime.length),CurveSlice(Hprime,0,nprime),ScalarSlice(aprime,0,nprime),ScalarSlice(bprime,nprime,bprime.length)).add(H.scalarMultiply(cL.mul(x_ip)));
R[round] = VectorExponentCustom(CurveSlice(Gprime,0,nprime),CurveSlice(Hprime,nprime,Hprime.length),ScalarSlice(aprime,nprime,aprime.length),ScalarSlice(bprime,0,nprime)).add(H.scalarMultiply(cR.mul(x_ip)));
// PAPER LINES 21-22
hashCache = hashToScalar(concat(hashCache.bytes,L[round].toBytes()));
hashCache = hashToScalar(concat(hashCache.bytes,R[round].toBytes()));
w[round] = hashCache;
// PAPER LINES 24-25
Gprime = Hadamard2(VectorScalar2(CurveSlice(Gprime,0,nprime),Invert(w[round])),VectorScalar2(CurveSlice(Gprime,nprime,Gprime.length),w[round]));
Hprime = Hadamard2(VectorScalar2(CurveSlice(Hprime,0,nprime),w[round]),VectorScalar2(CurveSlice(Hprime,nprime,Hprime.length),Invert(w[round])));
// PAPER LINES 28-29
aprime = VectorAdd(VectorScalar(ScalarSlice(aprime,0,nprime),w[round]),VectorScalar(ScalarSlice(aprime,nprime,aprime.length),Invert(w[round])));
bprime = VectorAdd(VectorScalar(ScalarSlice(bprime,0,nprime),Invert(w[round])),VectorScalar(ScalarSlice(bprime,nprime,bprime.length),w[round]));
round += 1;
}
// PAPER LINE 58 (with inclusions from PAPER LINE 8 and PAPER LINE 20)
return new ProofTuple(V,A,S,T1,T2,taux,mu,L,R,aprime[0],bprime[0],t);
}
/* Given a range proof, determine if it is valid */
public static boolean VERIFY(ProofTuple proof)
{
// Reconstruct the challenges
Scalar hashCache = hashToScalar(proof.V.toBytes());
hashCache = hashToScalar(concat(hashCache.bytes,proof.A.toBytes()));
hashCache = hashToScalar(concat(hashCache.bytes,proof.S.toBytes()));
Scalar y = hashCache;
hashCache = hashToScalar(hashCache.bytes);
Scalar z = hashCache;
hashCache = hashToScalar(concat(hashCache.bytes,z.bytes));
hashCache = hashToScalar(concat(hashCache.bytes,proof.T1.toBytes()));
hashCache = hashToScalar(concat(hashCache.bytes,proof.T2.toBytes()));
Scalar x = hashCache;
hashCache = hashToScalar(concat(hashCache.bytes,x.bytes));
hashCache = hashToScalar(concat(hashCache.bytes,proof.taux.bytes));
hashCache = hashToScalar(concat(hashCache.bytes,proof.mu.bytes));
hashCache = hashToScalar(concat(hashCache.bytes,proof.t.bytes));
Scalar x_ip = hashCache;
// PAPER LINE 61
Curve25519Point L61Left = G.scalarMultiply(proof.taux).add(H.scalarMultiply(proof.t));
Scalar k = ComputeK(y,z);
Curve25519Point L61Right = H.scalarMultiply(k.add(z.mul(InnerProduct(VectorPowers(Scalar.ONE),VectorPowers(y)))));
L61Right = L61Right.add(proof.V.scalarMultiply(z.sq()));
L61Right = L61Right.add(proof.T1.scalarMultiply(x));
L61Right = L61Right.add(proof.T2.scalarMultiply(x.sq()));
if (!L61Right.equals(L61Left))
return false;
// PAPER LINE 62
Curve25519Point P = Curve25519Point.ZERO;
P = P.add(proof.A);
P = P.add(proof.S.scalarMultiply(x));
Scalar[] Gexp = new Scalar[N];
for (int i = 0; i < N; i++)
Gexp[i] = Scalar.ZERO.sub(z);
Scalar[] Hexp = new Scalar[N];
for (int i = 0; i < N; i++)
{
Hexp[i] = Scalar.ZERO;
Hexp[i] = Hexp[i].add(z.mul(y.pow(i)));
Hexp[i] = Hexp[i].add(z.sq().mul(Scalar.TWO.pow(i)));
Hexp[i] = Hexp[i].mul(Invert(y).pow(i));
}
P = P.add(VectorExponent(Gexp,Hexp));
// Compute the number of rounds for the inner product
int rounds = proof.L.length;
// PAPER LINES 21-22
// The inner product challenges are computed per round
Scalar[] w = new Scalar[rounds];
hashCache = hashToScalar(concat(hashCache.bytes,proof.L[0].toBytes()));
hashCache = hashToScalar(concat(hashCache.bytes,proof.R[0].toBytes()));
w[0] = hashCache;
for (int i = 1; i < rounds; i++)
{
hashCache = hashToScalar(concat(hashCache.bytes,proof.L[i].toBytes()));
hashCache = hashToScalar(concat(hashCache.bytes,proof.R[i].toBytes()));
w[i] = hashCache;
}
// Basically PAPER LINES 24-25
// Compute the curvepoints from G[i] and H[i]
Curve25519Point InnerProdG = Curve25519Point.ZERO;
Curve25519Point InnerProdH = Curve25519Point.ZERO;
for (int i = 0; i < N; i++)
{
// Convert the index to binary IN REVERSE and construct the scalar exponent
int index = i;
Scalar gScalar = Scalar.ONE;
Scalar hScalar = Invert(y).pow(i);
for (int j = rounds-1; j >= 0; j--)
{
int J = w.length - j - 1; // because this is done in reverse bit order
int basePow = 1 &lt;&lt; j; // 2^j, used to test bit j of the index; rounds is small so this cannot overflow
if (index / basePow == 0) // bit is zero
{
gScalar = gScalar.mul(Invert(w[J]));
hScalar = hScalar.mul(w[J]);
}
else // bit is one
{
gScalar = gScalar.mul(w[J]);
hScalar = hScalar.mul(Invert(w[J]));
index -= basePow;
}
}
// Now compute the basepoint's scalar multiplication
// Each of these could be written as a multiexp operation instead
InnerProdG = InnerProdG.add(Gi[i].scalarMultiply(gScalar));
InnerProdH = InnerProdH.add(Hi[i].scalarMultiply(hScalar));
}
// PAPER LINE 26
Curve25519Point Pprime = P.add(G.scalarMultiply(Scalar.ZERO.sub(proof.mu)));
for (int i = 0; i < rounds; i++)
{
Pprime = Pprime.add(proof.L[i].scalarMultiply(w[i].sq()));
Pprime = Pprime.add(proof.R[i].scalarMultiply(Invert(w[i]).sq()));
}
Pprime = Pprime.add(H.scalarMultiply(proof.t.mul(x_ip)));
if (!Pprime.equals(InnerProdG.scalarMultiply(proof.a).add(InnerProdH.scalarMultiply(proof.b)).add(H.scalarMultiply(proof.a.mul(proof.b).mul(x_ip)))))
return false;
return true;
}
public static void main(String[] args)
{
// Number of bits in the range
N = 64;
logN = 6; // log2(N), set manually
// Set the curve base points
G = Curve25519Point.G;
H = Curve25519Point.hashToPoint(G);
Gi = new Curve25519Point[N];
Hi = new Curve25519Point[N];
for (int i = 0; i < N; i++)
{
Gi[i] = getHpnGLookup(2*i);
Hi[i] = getHpnGLookup(2*i+1);
}
// Run a bunch of randomized trials
Random rando = new Random();
int TRIALS = 250;
int count = 0;
while (count < TRIALS)
{
long amount = rando.nextLong();
if (amount > Math.pow(2,N)-1 || amount < 0)
continue;
ProofTuple proof = PROVE(new Scalar(BigInteger.valueOf(amount)),randomScalar());
if (!VERIFY(proof))
System.out.println("Test failed");
count += 1;
}
}
}
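The reverse-bit-order challenge logic in VERIFY is the easiest part of the protocol to get wrong. The standalone sketch below (hypothetical class name; a small prime field stands in for the curve's scalar group of order l) checks that folding a scalar vector round by round, as the prover's inner-product loop does with aprime, agrees with the per-index product of challenges that the verifier computes by reading each index's bits in reverse round order.

```java
import java.math.BigInteger;
import java.util.Random;

public class InnerProductFoldDemo
{
    // small prime standing in for the group order l; illustration only
    static final BigInteger q = BigInteger.valueOf(1000003);

    public static boolean check(long seed, int rounds)
    {
        int n = 1 << rounds;
        Random rng = new Random(seed);
        BigInteger[] a = new BigInteger[n];
        for (int i = 0; i < n; i++)
            a[i] = BigInteger.valueOf(rng.nextInt(1000) + 1);
        BigInteger[] w = new BigInteger[rounds];
        for (int r = 0; r < rounds; r++)
            w[r] = BigInteger.valueOf(rng.nextInt(1000) + 2);

        // Prover-style fold: a' = a_lo * w[r] + a_hi * w[r]^{-1}, halving each round
        BigInteger[] cur = a.clone();
        for (int r = 0; r < rounds; r++)
        {
            int half = cur.length / 2;
            BigInteger winv = w[r].modInverse(q);
            BigInteger[] next = new BigInteger[half];
            for (int k = 0; k < half; k++)
                next[k] = cur[k].multiply(w[r]).add(cur[k+half].multiply(winv)).mod(q);
            cur = next;
        }

        // Verifier-style direct sum, reading the bits of each index in reverse
        BigInteger direct = BigInteger.ZERO;
        for (int i = 0; i < n; i++)
        {
            BigInteger s = BigInteger.ONE;
            int index = i;
            for (int j = rounds-1; j >= 0; j--)
            {
                int J = rounds - j - 1;   // round J consumed bit j of the index
                int basePow = 1 << j;
                if (index / basePow == 0) // bit is zero: this index sat in the low half
                    s = s.multiply(w[J]).mod(q);
                else                      // bit is one: high half, inverse challenge
                {
                    s = s.multiply(w[J].modInverse(q)).mod(q);
                    index -= basePow;
                }
            }
            direct = direct.add(a[i].multiply(s)).mod(q);
        }
        return cur[0].equals(direct);
    }

    public static void main(String[] args)
    {
        System.out.println(check(42, 3)); // prints true
    }
}
```

Note the sign flip relative to the generator folds: Gprime multiplies its low half by Invert(w) and its high half by w, so the aprime coefficients shown here are exactly the inverses of the gScalar values VERIFY builds, keeping the commitment invariant across rounds.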


@@ -0,0 +1,644 @@
// NOTE: this interchanges the roles of G and H to match other code's behavior
package how.monero.hodl.bulletproof;
import how.monero.hodl.crypto.Curve25519Point;
import how.monero.hodl.crypto.Scalar;
import how.monero.hodl.crypto.CryptoUtil;
import java.math.BigInteger;
import java.util.Random;
import static how.monero.hodl.crypto.Scalar.randomScalar;
import static how.monero.hodl.crypto.CryptoUtil.*;
import static how.monero.hodl.util.ByteUtil.*;
public class MultiBulletproof
{
private static int NEXP;
private static int N;
private static Curve25519Point G;
private static Curve25519Point H;
private static Curve25519Point[] Gi;
private static Curve25519Point[] Hi;
public static class ProofTuple
{
private Curve25519Point V[];
private Curve25519Point A;
private Curve25519Point S;
private Curve25519Point T1;
private Curve25519Point T2;
private Scalar taux;
private Scalar mu;
private Curve25519Point[] L;
private Curve25519Point[] R;
private Scalar a;
private Scalar b;
private Scalar t;
public ProofTuple(Curve25519Point V[], Curve25519Point A, Curve25519Point S, Curve25519Point T1, Curve25519Point T2, Scalar taux, Scalar mu, Curve25519Point[] L, Curve25519Point[] R, Scalar a, Scalar b, Scalar t)
{
this.V = V;
this.A = A;
this.S = S;
this.T1 = T1;
this.T2 = T2;
this.taux = taux;
this.mu = mu;
this.L = L;
this.R = R;
this.a = a;
this.b = b;
this.t = t;
}
}
/* Given two scalar arrays, construct a vector commitment */
public static Curve25519Point VectorExponent(Scalar[] a, Scalar[] b)
{
assert a.length == b.length;
Curve25519Point Result = Curve25519Point.ZERO;
for (int i = 0; i < a.length; i++)
{
Result = Result.add(Gi[i].scalarMultiply(a[i]));
Result = Result.add(Hi[i].scalarMultiply(b[i]));
}
return Result;
}
/* Compute a custom vector-scalar commitment */
public static Curve25519Point VectorExponentCustom(Curve25519Point[] A, Curve25519Point[] B, Scalar[] a, Scalar[] b)
{
assert a.length == A.length && b.length == B.length && a.length == b.length;
Curve25519Point Result = Curve25519Point.ZERO;
for (int i = 0; i < a.length; i++)
{
Result = Result.add(A[i].scalarMultiply(a[i]));
Result = Result.add(B[i].scalarMultiply(b[i]));
}
return Result;
}
/* Given a scalar, construct a vector of powers */
public static Scalar[] VectorPowers(Scalar x, int size)
{
Scalar[] result = new Scalar[size];
for (int i = 0; i < size; i++)
{
result[i] = x.pow(i);
}
return result;
}
/* Given two scalar arrays, construct the inner product */
public static Scalar InnerProduct(Scalar[] a, Scalar[] b)
{
assert a.length == b.length;
Scalar result = Scalar.ZERO;
for (int i = 0; i < a.length; i++)
{
result = result.add(a[i].mul(b[i]));
}
return result;
}
/* Given two scalar arrays, construct the Hadamard product */
public static Scalar[] Hadamard(Scalar[] a, Scalar[] b)
{
assert a.length == b.length;
Scalar[] result = new Scalar[a.length];
for (int i = 0; i < a.length; i++)
{
result[i] = a[i].mul(b[i]);
}
return result;
}
/* Given two curvepoint arrays, construct the Hadamard product */
public static Curve25519Point[] Hadamard2(Curve25519Point[] A, Curve25519Point[] B)
{
assert A.length == B.length;
Curve25519Point[] Result = new Curve25519Point[A.length];
for (int i = 0; i < A.length; i++)
{
Result[i] = A[i].add(B[i]);
}
return Result;
}
/* Add two vectors */
public static Scalar[] VectorAdd(Scalar[] a, Scalar[] b)
{
assert a.length == b.length;
Scalar[] result = new Scalar[a.length];
for (int i = 0; i < a.length; i++)
{
result[i] = a[i].add(b[i]);
}
return result;
}
/* Subtract two vectors */
public static Scalar[] VectorSubtract(Scalar[] a, Scalar[] b)
{
assert a.length == b.length;
Scalar[] result = new Scalar[a.length];
for (int i = 0; i < a.length; i++)
{
result[i] = a[i].sub(b[i]);
}
return result;
}
/* Multiply a scalar and a vector */
public static Scalar[] VectorScalar(Scalar[] a, Scalar x)
{
Scalar[] result = new Scalar[a.length];
for (int i = 0; i < a.length; i++)
{
result[i] = a[i].mul(x);
}
return result;
}
/* Exponentiate a curve vector by a scalar */
public static Curve25519Point[] VectorScalar2(Curve25519Point[] A, Scalar x)
{
Curve25519Point[] Result = new Curve25519Point[A.length];
for (int i = 0; i < A.length; i++)
{
Result[i] = A[i].scalarMultiply(x);
}
return Result;
}
/* Compute the inverse of a scalar, the stupid way */
public static Scalar Invert(Scalar x)
{
Scalar inverse = new Scalar(x.toBigInteger().modInverse(CryptoUtil.l));
assert x.mul(inverse).equals(Scalar.ONE);
return inverse;
}
/* Compute the slice of a curvepoint vector */
public static Curve25519Point[] CurveSlice(Curve25519Point[] a, int start, int stop)
{
Curve25519Point[] Result = new Curve25519Point[stop-start];
for (int i = start; i < stop; i++)
{
Result[i-start] = a[i];
}
return Result;
}
/* Compute the slice of a scalar vector */
public static Scalar[] ScalarSlice(Scalar[] a, int start, int stop)
{
Scalar[] result = new Scalar[stop-start];
for (int i = start; i < stop; i++)
{
result[i-start] = a[i];
}
return result;
}
/* Construct an aggregate range proof */
public static ProofTuple PROVE(Scalar[] v, Scalar[] gamma, int logM)
{
int M = v.length;
int logMN = logM + NEXP;
Curve25519Point[] V = new Curve25519Point[M];
V[0] = H.scalarMultiply(v[0]).add(G.scalarMultiply(gamma[0]));
// This hash is updated for Fiat-Shamir throughout the proof
Scalar hashCache = hashToScalar(V[0].toBytes());
for (int j = 1; j < M; j++)
{
V[j] = H.scalarMultiply(v[j]).add(G.scalarMultiply(gamma[j]));
hashCache = hashToScalar(concat(hashCache.bytes,V[j].toBytes()));
}
// PAPER LINES 36-37
Scalar[] aL = new Scalar[M*N];
Scalar[] aR = new Scalar[M*N];
for (int j = 0; j < M; j++)
{
BigInteger tempV = v[j].toBigInteger();
for (int i = N-1; i >= 0; i--)
{
BigInteger basePow = BigInteger.valueOf(2).pow(i);
if (tempV.divide(basePow).equals(BigInteger.ZERO))
{
aL[j*N+i] = Scalar.ZERO;
}
else
{
aL[j*N+i] = Scalar.ONE;
tempV = tempV.subtract(basePow);
}
aR[j*N+i] = aL[j*N+i].sub(Scalar.ONE);
}
}
// PAPER LINES 38-39
Scalar alpha = randomScalar();
Curve25519Point A = VectorExponent(aL,aR).add(G.scalarMultiply(alpha));
// PAPER LINES 40-42
Scalar[] sL = new Scalar[M*N];
Scalar[] sR = new Scalar[M*N];
for (int i = 0; i < M*N; i++)
{
sL[i] = randomScalar();
sR[i] = randomScalar();
}
Scalar rho = randomScalar();
Curve25519Point S = VectorExponent(sL,sR).add(G.scalarMultiply(rho));
// PAPER LINES 43-45
hashCache = hashToScalar(concat(hashCache.bytes,A.toBytes()));
hashCache = hashToScalar(concat(hashCache.bytes,S.toBytes()));
Scalar y = hashCache;
hashCache = hashToScalar(hashCache.bytes);
Scalar z = hashCache;
// Polynomial construction by coefficients
Scalar[] l0;
Scalar[] l1;
Scalar[] r0;
Scalar[] r1;
l0 = VectorSubtract(aL,VectorScalar(VectorPowers(Scalar.ONE,M*N),z));
l1 = sL;
// This computes the ugly sum/concatenation from PAPER LINE 65
Scalar[] zerosTwos = new Scalar[M*N];
for (int i = 0; i < M*N; i++)
{
zerosTwos[i] = Scalar.ZERO;
for (int j = 1; j <= M; j++) // note this starts from 1
{
Scalar temp = Scalar.ZERO;
if (i >= (j-1)*N && i < j*N)
temp = Scalar.TWO.pow(i-(j-1)*N); // exponent ranges from 0..N-1
zerosTwos[i] = zerosTwos[i].add(z.pow(1+j).mul(temp));
}
}
r0 = VectorAdd(aR,VectorScalar(VectorPowers(Scalar.ONE,M*N),z));
r0 = Hadamard(r0,VectorPowers(y,M*N));
r0 = VectorAdd(r0,zerosTwos);
r1 = Hadamard(VectorPowers(y,M*N),sR);
// Polynomial construction before PAPER LINE 46
Scalar t0 = InnerProduct(l0,r0);
Scalar t1 = InnerProduct(l0,r1).add(InnerProduct(l1,r0));
Scalar t2 = InnerProduct(l1,r1);
// PAPER LINES 47-48
Scalar tau1 = randomScalar();
Scalar tau2 = randomScalar();
Curve25519Point T1 = H.scalarMultiply(t1).add(G.scalarMultiply(tau1));
Curve25519Point T2 = H.scalarMultiply(t2).add(G.scalarMultiply(tau2));
// PAPER LINES 49-51
hashCache = hashToScalar(concat(hashCache.bytes,z.bytes));
hashCache = hashToScalar(concat(hashCache.bytes,T1.toBytes()));
hashCache = hashToScalar(concat(hashCache.bytes,T2.toBytes()));
Scalar x = hashCache;
// PAPER LINES 52-53
Scalar taux = tau1.mul(x);
taux = taux.add(tau2.mul(x.sq()));
for (int j = 1; j <= M; j++) // note this starts from 1
{
taux = taux.add(z.pow(1+j).mul(gamma[j-1]));
}
Scalar mu = x.mul(rho).add(alpha);
// PAPER LINES 54-57
Scalar[] l = l0;
l = VectorAdd(l,VectorScalar(l1,x));
Scalar[] r = r0;
r = VectorAdd(r,VectorScalar(r1,x));
Scalar t = InnerProduct(l,r);
// PAPER LINES 32-33
hashCache = hashToScalar(concat(hashCache.bytes,x.bytes));
hashCache = hashToScalar(concat(hashCache.bytes,taux.bytes));
hashCache = hashToScalar(concat(hashCache.bytes,mu.bytes));
hashCache = hashToScalar(concat(hashCache.bytes,t.bytes));
Scalar x_ip = hashCache;
// These are used in the inner product rounds
int nprime = M*N;
Curve25519Point[] Gprime = new Curve25519Point[M*N];
Curve25519Point[] Hprime = new Curve25519Point[M*N];
Scalar[] aprime = new Scalar[M*N];
Scalar[] bprime = new Scalar[M*N];
for (int i = 0; i < M*N; i++)
{
Gprime[i] = Gi[i];
Hprime[i] = Hi[i].scalarMultiply(Invert(y).pow(i));
aprime[i] = l[i];
bprime[i] = r[i];
}
Curve25519Point[] L = new Curve25519Point[logMN];
Curve25519Point[] R = new Curve25519Point[logMN];
int round = 0; // track the index based on number of rounds
Scalar[] w = new Scalar[logMN]; // this is the challenge x in the inner product protocol
// PAPER LINE 13
while (nprime > 1)
{
// PAPER LINE 15
nprime /= 2;
// PAPER LINES 16-17
Scalar cL = InnerProduct(ScalarSlice(aprime,0,nprime),ScalarSlice(bprime,nprime,bprime.length));
Scalar cR = InnerProduct(ScalarSlice(aprime,nprime,aprime.length),ScalarSlice(bprime,0,nprime));
// PAPER LINES 18-19
L[round] = VectorExponentCustom(CurveSlice(Gprime,nprime,Gprime.length),CurveSlice(Hprime,0,nprime),ScalarSlice(aprime,0,nprime),ScalarSlice(bprime,nprime,bprime.length)).add(H.scalarMultiply(cL.mul(x_ip)));
R[round] = VectorExponentCustom(CurveSlice(Gprime,0,nprime),CurveSlice(Hprime,nprime,Hprime.length),ScalarSlice(aprime,nprime,aprime.length),ScalarSlice(bprime,0,nprime)).add(H.scalarMultiply(cR.mul(x_ip)));
// PAPER LINES 21-22
hashCache = hashToScalar(concat(hashCache.bytes,L[round].toBytes()));
hashCache = hashToScalar(concat(hashCache.bytes,R[round].toBytes()));
w[round] = hashCache;
// PAPER LINES 24-25
Gprime = Hadamard2(VectorScalar2(CurveSlice(Gprime,0,nprime),Invert(w[round])),VectorScalar2(CurveSlice(Gprime,nprime,Gprime.length),w[round]));
Hprime = Hadamard2(VectorScalar2(CurveSlice(Hprime,0,nprime),w[round]),VectorScalar2(CurveSlice(Hprime,nprime,Hprime.length),Invert(w[round])));
// PAPER LINES 28-29
aprime = VectorAdd(VectorScalar(ScalarSlice(aprime,0,nprime),w[round]),VectorScalar(ScalarSlice(aprime,nprime,aprime.length),Invert(w[round])));
bprime = VectorAdd(VectorScalar(ScalarSlice(bprime,0,nprime),Invert(w[round])),VectorScalar(ScalarSlice(bprime,nprime,bprime.length),w[round]));
round += 1;
}
// PAPER LINE 58 (with inclusions from PAPER LINE 8 and PAPER LINE 20)
return new ProofTuple(V,A,S,T1,T2,taux,mu,L,R,aprime[0],bprime[0],t);
}
/* Given a batch of range proofs, determine if all are valid */
public static boolean VERIFY(ProofTuple[] proofs)
{
// Figure out which proof is longest
int maxLength = 0;
for (int p = 0; p < proofs.length; p++)
{
if (proofs[p].L.length > maxLength)
maxLength = proofs[p].L.length;
}
int maxMN = (int) Math.pow(2,maxLength);
// Set up weighted aggregates for the first check
Scalar y0 = Scalar.ZERO; // tau_x
Scalar y1 = Scalar.ZERO; // t - (k + z*Sum(y^i))
Curve25519Point Y2 = Curve25519Point.ZERO; // z-V sum
Curve25519Point Y3 = Curve25519Point.ZERO; // xT_1
Curve25519Point Y4 = Curve25519Point.ZERO; // x^2T_2
// Set up weighted aggregates for the second check
Curve25519Point Z0 = Curve25519Point.ZERO; // A + xS
Scalar z1 = Scalar.ZERO; // mu
Curve25519Point Z2 = Curve25519Point.ZERO; // Li/Ri sum
Scalar z3 = Scalar.ZERO; // (t-ab)x_ip
Scalar[] z4 = new Scalar[maxMN]; // g scalar sum
Scalar[] z5 = new Scalar[maxMN]; // h scalar sum
for (int i = 0; i < maxMN; i++)
{
z4[i] = Scalar.ZERO;
z5[i] = Scalar.ZERO;
}
for (int p = 0; p < proofs.length; p++)
{
ProofTuple proof = proofs[p];
int logMN = proof.L.length;
int M = (int) Math.pow(2,logMN)/N;
// For the current proof, get a random weighting factor
// NOTE: This must not be deterministic! Only the verifier knows it
Scalar weight = randomScalar();
// Reconstruct the challenges
Scalar hashCache = hashToScalar(proof.V[0].toBytes());
for (int j = 1; j < M; j++)
hashCache = hashToScalar(concat(hashCache.bytes,proof.V[j].toBytes()));
hashCache = hashToScalar(concat(hashCache.bytes,proof.A.toBytes()));
hashCache = hashToScalar(concat(hashCache.bytes,proof.S.toBytes()));
Scalar y = hashCache;
hashCache = hashToScalar(hashCache.bytes);
Scalar z = hashCache;
hashCache = hashToScalar(concat(hashCache.bytes,z.bytes));
hashCache = hashToScalar(concat(hashCache.bytes,proof.T1.toBytes()));
hashCache = hashToScalar(concat(hashCache.bytes,proof.T2.toBytes()));
Scalar x = hashCache;
hashCache = hashToScalar(concat(hashCache.bytes,x.bytes));
hashCache = hashToScalar(concat(hashCache.bytes,proof.taux.bytes));
hashCache = hashToScalar(concat(hashCache.bytes,proof.mu.bytes));
hashCache = hashToScalar(concat(hashCache.bytes,proof.t.bytes));
Scalar x_ip = hashCache;
// PAPER LINE 61
y0 = y0.add(proof.taux.mul(weight));
Scalar k = Scalar.ZERO.sub(z.sq().mul(InnerProduct(VectorPowers(Scalar.ONE,M*N),VectorPowers(y,M*N))));
for (int j = 1; j <= M; j++) // note this starts from 1
{
k = k.sub(z.pow(j+2).mul(InnerProduct(VectorPowers(Scalar.ONE,N),VectorPowers(Scalar.TWO,N))));
}
y1 = y1.add(proof.t.sub(k.add(z.mul(InnerProduct(VectorPowers(Scalar.ONE,M*N),VectorPowers(y,M*N))))).mul(weight));
Curve25519Point temp = Curve25519Point.ZERO;
for (int j = 0; j < M; j++)
{
temp = temp.add(proof.V[j].scalarMultiply(z.pow(j+2)));
}
Y2 = Y2.add(temp.scalarMultiply(weight));
Y3 = Y3.add(proof.T1.scalarMultiply(x.mul(weight)));
Y4 = Y4.add(proof.T2.scalarMultiply(x.sq().mul(weight)));
// PAPER LINE 62
Z0 = Z0.add((proof.A.add(proof.S.scalarMultiply(x))).scalarMultiply(weight));
// PAPER LINES 21-22
// The inner product challenges are computed per round
Scalar[] w = new Scalar[logMN];
hashCache = hashToScalar(concat(hashCache.bytes,proof.L[0].toBytes()));
hashCache = hashToScalar(concat(hashCache.bytes,proof.R[0].toBytes()));
w[0] = hashCache;
if (logMN > 1)
{
for (int i = 1; i < logMN; i++)
{
hashCache = hashToScalar(concat(hashCache.bytes,proof.L[i].toBytes()));
hashCache = hashToScalar(concat(hashCache.bytes,proof.R[i].toBytes()));
w[i] = hashCache;
}
}
// Basically PAPER LINES 24-25
// Compute the curvepoints from G[i] and H[i]
for (int i = 0; i < M*N; i++)
{
// Convert the index to binary IN REVERSE and construct the scalar exponent
int index = i;
Scalar gScalar = proof.a;
Scalar hScalar = proof.b.mul(Invert(y).pow(i));
for (int j = logMN-1; j >= 0; j--)
{
int J = w.length - j - 1; // because this is done in reverse bit order
int basePow = (int) Math.pow(2,j); // assumes we don't get too big
if (index / basePow == 0) // bit is zero
{
gScalar = gScalar.mul(Invert(w[J]));
hScalar = hScalar.mul(w[J]);
}
else // bit is one
{
gScalar = gScalar.mul(w[J]);
hScalar = hScalar.mul(Invert(w[J]));
index -= basePow;
}
}
gScalar = gScalar.add(z);
hScalar = hScalar.sub(z.mul(y.pow(i)).add(z.pow(2+i/N).mul(Scalar.TWO.pow(i%N))).mul(Invert(y).pow(i)));
// Now compute the basepoint's scalar multiplication
z4[i] = z4[i].add(gScalar.mul(weight));
z5[i] = z5[i].add(hScalar.mul(weight));
}
// PAPER LINE 26
z1 = z1.add(proof.mu.mul(weight));
temp = Curve25519Point.ZERO;
for (int i = 0; i < logMN; i++)
{
temp = temp.add(proof.L[i].scalarMultiply(w[i].sq()));
temp = temp.add(proof.R[i].scalarMultiply(Invert(w[i]).sq()));
}
Z2 = Z2.add(temp.scalarMultiply(weight));
z3 = z3.add((proof.t.sub(proof.a.mul(proof.b))).mul(x_ip).mul(weight));
}
// Perform the first- and second-stage check on all proofs at once
// NOTE: These checks could benefit from multiexp operations
Curve25519Point Check1 = Curve25519Point.ZERO;
Check1 = Check1.add(G.scalarMultiply(y0));
Check1 = Check1.add(H.scalarMultiply(y1));
Check1 = Check1.add(Y2.scalarMultiply(Scalar.ZERO.sub(Scalar.ONE)));
Check1 = Check1.add(Y3.scalarMultiply(Scalar.ZERO.sub(Scalar.ONE)));
Check1 = Check1.add(Y4.scalarMultiply(Scalar.ZERO.sub(Scalar.ONE)));
if (! Check1.equals(Curve25519Point.ZERO))
{
System.out.println("Failed first-stage check");
return false;
}
Curve25519Point Check2 = Curve25519Point.ZERO;
Check2 = Check2.add(Z0);
Check2 = Check2.add(G.scalarMultiply(Scalar.ZERO.sub(z1)));
Check2 = Check2.add(Z2);
Check2 = Check2.add(H.scalarMultiply(z3));
for (int i = 0; i < maxMN; i++)
{
Check2 = Check2.add(Gi[i].scalarMultiply(Scalar.ZERO.sub(z4[i])));
Check2 = Check2.add(Hi[i].scalarMultiply(Scalar.ZERO.sub(z5[i])));
}
if (! Check2.equals(Curve25519Point.ZERO))
{
System.out.println("Failed second-stage check");
return false;
}
return true;
}
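The batch check above relies on the NOTE at the weighting step: each proof's verification equation is scaled by a fresh random scalar known only to the verifier, so a single invalid proof cannot hide inside the aggregate sum. A standalone toy sketch of that idea follows; the class name `BatchWeightDemo`, the method `batchAccepts`, and the small prime modulus are illustrative stand-ins (not part of this codebase), with ordinary `BigInteger` arithmetic standing in for the scalar field.

```java
import java.math.BigInteger;
import java.security.SecureRandom;

// Toy model of the batched check in VERIFY: each proof contributes a "defect"
// d_p that is zero iff its verification equation holds, and the verifier
// tests sum_p(weight_p * d_p) == 0 for fresh random nonzero weights.
public class BatchWeightDemo {
    // Small prime standing in for the group order l; purely illustrative.
    static final BigInteger L = BigInteger.valueOf(104729);

    static boolean batchAccepts(BigInteger[] defects) {
        SecureRandom rng = new SecureRandom();
        BigInteger acc = BigInteger.ZERO;
        for (BigInteger d : defects) {
            // The weight must be secret verifier randomness; a weight the
            // prover could predict would let forged proofs cancel each other.
            BigInteger weight = new BigInteger(L.bitLength(), rng)
                    .mod(L.subtract(BigInteger.ONE)).add(BigInteger.ONE); // in [1, L-1]
            acc = acc.add(weight.multiply(d)).mod(L);
        }
        return acc.signum() == 0;
    }

    public static void main(String[] args) {
        BigInteger[] allValid = { BigInteger.ZERO, BigInteger.ZERO };
        BigInteger[] oneBad = { BigInteger.ZERO, BigInteger.valueOf(17) };
        System.out.println(batchAccepts(allValid) ? "valid batch accepts" : "BUG");
        System.out.println(batchAccepts(oneBad) ? "BUG" : "bad batch rejects");
    }
}
```

Because the modulus is prime and each weight is nonzero, a batch containing exactly one nonzero defect always rejects; only multiple forged proofs could cancel, and then only with negligible probability over the verifier's secret weights.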
/* Generate a random proof with 2^mExp outputs; amounts use the global N-bit range */
public static ProofTuple randomProof(int mExp)
{
int M = (int) Math.pow(2,mExp);
Random rando = new Random();
Scalar[] amounts = new Scalar[M];
Scalar[] masks = new Scalar[M];
// Generate the outputs and masks
for (int i = 0; i < M; i++)
{
long amount = -1L;
while (amount > Math.pow(2,N)-1 || amount < 0L) // Java doesn't handle random long ranges very well
amount = rando.nextLong();
amounts[i] = new Scalar(BigInteger.valueOf(amount));
masks[i] = randomScalar();
}
// Run and return the proof
// Have to pass in lg(M) because Java is stupid about logarithms
System.out.println("Generating proof with " + M + " outputs...");
return PROVE(amounts,masks,mExp);
}
public static void main(String[] args)
{
// Test parameters: currently only works when batching proofs of the same aggregation size
NEXP = 6; // N = 2^NEXP
N = (int) Math.pow(2,NEXP); // number of bits in amount range (so amounts are 0..2^N-1)
int MAXEXP = 4; // the maximum number of outputs used is 2^MAXEXP
int PROOFS = 5; // number of proofs in batch
// Set the curve base points
G = Curve25519Point.G;
H = Curve25519Point.hashToPoint(G);
int MAXM = (int) Math.pow(2,MAXEXP);
Gi = new Curve25519Point[MAXM*N];
Hi = new Curve25519Point[MAXM*N];
for (int i = 0; i < MAXM*N; i++)
{
Gi[i] = getHpnGLookup(2*i);
Hi[i] = getHpnGLookup(2*i+1);
}
// Set up all the proofs
ProofTuple[] proofs = new ProofTuple[PROOFS];
Random rando = new Random();
for (int i = 0; i < PROOFS; i++)
{
// Pick a random proof length: 2^0,...,2^MAXEXP
proofs[i] = randomProof(rando.nextInt(MAXEXP+1));
}
// Verify the batch
System.out.println("Verifying proof batch...");
if (VERIFY(proofs))
System.out.println("Success!");
else
System.out.println("ERROR: failed verification");
}
}
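The per-index loop in VERIFY replays the inner-product folding in closed form: walking the bits of the index in reverse round order, each bit selects a challenge w or its inverse, so no folded generator vectors need to be rebuilt. A minimal sketch of that equivalence on the scalar side (a' = w*a_lo + w^{-1}*a_hi per round, matching the `aprime` update in PROVE): the class `FoldDemo` and the small prime modulus are illustrative assumptions, not part of this codebase.

```java
import java.math.BigInteger;
import java.util.Random;

// Toy model (mod a small prime) of the inner-product folding in PROVE and the
// closed-form per-index coefficient reconstruction done in VERIFY.
public class FoldDemo {
    static final BigInteger P = BigInteger.valueOf(1000003); // stand-in prime modulus

    // Fold the vector round by round: a' = w*a_lo + w^{-1}*a_hi
    static BigInteger fold(BigInteger[] a, BigInteger[] w) {
        int n = a.length;
        for (BigInteger wr : w) {
            n /= 2;
            BigInteger wInv = wr.modInverse(P);
            BigInteger[] next = new BigInteger[n];
            for (int i = 0; i < n; i++)
                next[i] = a[i].multiply(wr).add(a[i + n].multiply(wInv)).mod(P);
            a = next;
        }
        return a[0]; // = sum_i coeff(i,w) * a[i]
    }

    // Closed-form coefficient of a[i]: walk the bits of i from the top,
    // exactly as the verifier reconstructs gScalar/hScalar per index.
    static BigInteger coeff(int index, BigInteger[] w) {
        BigInteger c = BigInteger.ONE;
        int rounds = w.length;
        for (int j = rounds - 1; j >= 0; j--) {
            int J = rounds - 1 - j;      // reverse bit order
            int basePow = 1 << j;
            if (index / basePow == 0) {  // bit is zero: low half, factor w
                c = c.multiply(w[J]).mod(P);
            } else {                     // bit is one: high half, factor w^{-1}
                c = c.multiply(w[J].modInverse(P)).mod(P);
                index -= basePow;
            }
        }
        return c;
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        int n = 8;
        BigInteger[] a = new BigInteger[n];
        for (int i = 0; i < n; i++) a[i] = BigInteger.valueOf(rng.nextInt(1000) + 1);
        BigInteger[] w = new BigInteger[3];
        for (int r = 0; r < 3; r++) w[r] = BigInteger.valueOf(rng.nextInt(1000) + 2);
        BigInteger direct = fold(a.clone(), w);
        BigInteger sum = BigInteger.ZERO;
        for (int i = 0; i < n; i++) sum = sum.add(coeff(i, w).multiply(a[i])).mod(P);
        System.out.println(direct.equals(sum) ? "match" : "MISMATCH");
    }
}
```

This is why the verifier runs in O(MN) scalar multiplications per index instead of repeating the prover's O(log MN) rounds of vector arithmetic.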

View file

@ -0,0 +1,522 @@
// NOTE: this interchanges the roles of G and H to match other code's behavior
package how.monero.hodl.bulletproof;
import how.monero.hodl.crypto.Curve25519Point;
import how.monero.hodl.crypto.Scalar;
import how.monero.hodl.crypto.CryptoUtil;
import java.math.BigInteger;
import java.util.Random;
import static how.monero.hodl.crypto.Scalar.randomScalar;
import static how.monero.hodl.crypto.CryptoUtil.*;
import static how.monero.hodl.util.ByteUtil.*;
public class OptimizedLogBulletproof
{
private static int N;
private static int logN;
private static Curve25519Point G;
private static Curve25519Point H;
private static Curve25519Point[] Gi;
private static Curve25519Point[] Hi;
public static class ProofTuple
{
private Curve25519Point V;
private Curve25519Point A;
private Curve25519Point S;
private Curve25519Point T1;
private Curve25519Point T2;
private Scalar taux;
private Scalar mu;
private Curve25519Point[] L;
private Curve25519Point[] R;
private Scalar a;
private Scalar b;
private Scalar t;
public ProofTuple(Curve25519Point V, Curve25519Point A, Curve25519Point S, Curve25519Point T1, Curve25519Point T2, Scalar taux, Scalar mu, Curve25519Point[] L, Curve25519Point[] R, Scalar a, Scalar b, Scalar t)
{
this.V = V;
this.A = A;
this.S = S;
this.T1 = T1;
this.T2 = T2;
this.taux = taux;
this.mu = mu;
this.L = L;
this.R = R;
this.a = a;
this.b = b;
this.t = t;
}
}
/* Given two scalar arrays, construct a vector commitment */
public static Curve25519Point VectorExponent(Scalar[] a, Scalar[] b)
{
assert a.length == N && b.length == N;
Curve25519Point Result = Curve25519Point.ZERO;
for (int i = 0; i < N; i++)
{
Result = Result.add(Gi[i].scalarMultiply(a[i]));
Result = Result.add(Hi[i].scalarMultiply(b[i]));
}
return Result;
}
/* Compute a custom vector-scalar commitment */
public static Curve25519Point VectorExponentCustom(Curve25519Point[] A, Curve25519Point[] B, Scalar[] a, Scalar[] b)
{
assert a.length == A.length && b.length == B.length && a.length == b.length;
Curve25519Point Result = Curve25519Point.ZERO;
for (int i = 0; i < a.length; i++)
{
Result = Result.add(A[i].scalarMultiply(a[i]));
Result = Result.add(B[i].scalarMultiply(b[i]));
}
return Result;
}
/* Given a scalar, construct a vector of powers */
public static Scalar[] VectorPowers(Scalar x)
{
Scalar[] result = new Scalar[N];
for (int i = 0; i < N; i++)
{
result[i] = x.pow(i);
}
return result;
}
/* Given two scalar arrays, construct the inner product */
public static Scalar InnerProduct(Scalar[] a, Scalar[] b)
{
assert a.length == b.length;
Scalar result = Scalar.ZERO;
for (int i = 0; i < a.length; i++)
{
result = result.add(a[i].mul(b[i]));
}
return result;
}
/* Given two scalar arrays, construct the Hadamard product */
public static Scalar[] Hadamard(Scalar[] a, Scalar[] b)
{
assert a.length == b.length;
Scalar[] result = new Scalar[a.length];
for (int i = 0; i < a.length; i++)
{
result[i] = a[i].mul(b[i]);
}
return result;
}
/* Given two curvepoint arrays, construct the Hadamard product */
public static Curve25519Point[] Hadamard2(Curve25519Point[] A, Curve25519Point[] B)
{
assert A.length == B.length;
Curve25519Point[] Result = new Curve25519Point[A.length];
for (int i = 0; i < A.length; i++)
{
Result[i] = A[i].add(B[i]);
}
return Result;
}
/* Add two vectors */
public static Scalar[] VectorAdd(Scalar[] a, Scalar[] b)
{
assert a.length == b.length;
Scalar[] result = new Scalar[a.length];
for (int i = 0; i < a.length; i++)
{
result[i] = a[i].add(b[i]);
}
return result;
}
/* Subtract two vectors */
public static Scalar[] VectorSubtract(Scalar[] a, Scalar[] b)
{
assert a.length == b.length;
Scalar[] result = new Scalar[a.length];
for (int i = 0; i < a.length; i++)
{
result[i] = a[i].sub(b[i]);
}
return result;
}
/* Multiply a scalar and a vector */
public static Scalar[] VectorScalar(Scalar[] a, Scalar x)
{
Scalar[] result = new Scalar[a.length];
for (int i = 0; i < a.length; i++)
{
result[i] = a[i].mul(x);
}
return result;
}
/* Exponentiate a curve vector by a scalar */
public static Curve25519Point[] VectorScalar2(Curve25519Point[] A, Scalar x)
{
Curve25519Point[] Result = new Curve25519Point[A.length];
for (int i = 0; i < A.length; i++)
{
Result[i] = A[i].scalarMultiply(x);
}
return Result;
}
/* Compute the inverse of a scalar via BigInteger modular inversion (simple but slow) */
public static Scalar Invert(Scalar x)
{
Scalar inverse = new Scalar(x.toBigInteger().modInverse(CryptoUtil.l));
assert x.mul(inverse).equals(Scalar.ONE);
return inverse;
}
/* Compute the slice of a curvepoint vector */
public static Curve25519Point[] CurveSlice(Curve25519Point[] a, int start, int stop)
{
Curve25519Point[] Result = new Curve25519Point[stop-start];
for (int i = start; i < stop; i++)
{
Result[i-start] = a[i];
}
return Result;
}
/* Compute the slice of a scalar vector */
public static Scalar[] ScalarSlice(Scalar[] a, int start, int stop)
{
Scalar[] result = new Scalar[stop-start];
for (int i = start; i < stop; i++)
{
result[i-start] = a[i];
}
return result;
}
/* Compute the value of k(y,z) */
public static Scalar ComputeK(Scalar y, Scalar z)
{
Scalar result = Scalar.ZERO;
result = result.sub(z.sq().mul(InnerProduct(VectorPowers(Scalar.ONE),VectorPowers(y))));
result = result.sub(z.pow(3).mul(InnerProduct(VectorPowers(Scalar.ONE),VectorPowers(Scalar.TWO))));
return result;
}
/* Given a value v (0..2^N-1) and a mask gamma, construct a range proof */
public static ProofTuple PROVE(Scalar v, Scalar gamma)
{
Curve25519Point V = H.scalarMultiply(v).add(G.scalarMultiply(gamma));
// This hash is updated for Fiat-Shamir throughout the proof
Scalar hashCache = hashToScalar(V.toBytes());
// PAPER LINES 36-37
Scalar[] aL = new Scalar[N];
Scalar[] aR = new Scalar[N];
BigInteger tempV = v.toBigInteger();
for (int i = N-1; i >= 0; i--)
{
BigInteger basePow = BigInteger.valueOf(2).pow(i);
if (tempV.divide(basePow).equals(BigInteger.ZERO))
{
aL[i] = Scalar.ZERO;
}
else
{
aL[i] = Scalar.ONE;
tempV = tempV.subtract(basePow);
}
aR[i] = aL[i].sub(Scalar.ONE);
}
// PAPER LINES 38-39
Scalar alpha = randomScalar();
Curve25519Point A = VectorExponent(aL,aR).add(G.scalarMultiply(alpha));
// PAPER LINES 40-42
Scalar[] sL = new Scalar[N];
Scalar[] sR = new Scalar[N];
for (int i = 0; i < N; i++)
{
sL[i] = randomScalar();
sR[i] = randomScalar();
}
Scalar rho = randomScalar();
Curve25519Point S = VectorExponent(sL,sR).add(G.scalarMultiply(rho));
// PAPER LINES 43-45
hashCache = hashToScalar(concat(hashCache.bytes,A.toBytes()));
hashCache = hashToScalar(concat(hashCache.bytes,S.toBytes()));
Scalar y = hashCache;
hashCache = hashToScalar(hashCache.bytes);
Scalar z = hashCache;
// Polynomial construction before PAPER LINE 46
Scalar t0 = Scalar.ZERO;
Scalar t1 = Scalar.ZERO;
Scalar t2 = Scalar.ZERO;
t0 = t0.add(z.mul(InnerProduct(VectorPowers(Scalar.ONE),VectorPowers(y))));
t0 = t0.add(z.sq().mul(v));
Scalar k = ComputeK(y,z);
t0 = t0.add(k);
t1 = t1.add(InnerProduct(VectorSubtract(aL,VectorScalar(VectorPowers(Scalar.ONE),z)),Hadamard(VectorPowers(y),sR)));
t1 = t1.add(InnerProduct(sL,VectorAdd(Hadamard(VectorPowers(y),VectorAdd(aR,VectorScalar(VectorPowers(Scalar.ONE),z))),VectorScalar(VectorPowers(Scalar.TWO),z.sq()))));
t2 = t2.add(InnerProduct(sL,Hadamard(VectorPowers(y),sR)));
// PAPER LINES 47-48
Scalar tau1 = randomScalar();
Scalar tau2 = randomScalar();
Curve25519Point T1 = H.scalarMultiply(t1).add(G.scalarMultiply(tau1));
Curve25519Point T2 = H.scalarMultiply(t2).add(G.scalarMultiply(tau2));
// PAPER LINES 49-51
hashCache = hashToScalar(concat(hashCache.bytes,z.bytes));
hashCache = hashToScalar(concat(hashCache.bytes,T1.toBytes()));
hashCache = hashToScalar(concat(hashCache.bytes,T2.toBytes()));
Scalar x = hashCache;
// PAPER LINES 52-53
Scalar taux = tau1.mul(x);
taux = taux.add(tau2.mul(x.sq()));
taux = taux.add(gamma.mul(z.sq()));
Scalar mu = x.mul(rho).add(alpha);
// PAPER LINES 54-57
Scalar[] l = VectorAdd(VectorSubtract(aL,VectorScalar(VectorPowers(Scalar.ONE),z)),VectorScalar(sL,x));
Scalar[] r = VectorAdd(Hadamard(VectorPowers(y),VectorAdd(aR,VectorAdd(VectorScalar(VectorPowers(Scalar.ONE),z),VectorScalar(sR,x)))),VectorScalar(VectorPowers(Scalar.TWO),z.sq()));
Scalar t = InnerProduct(l,r);
// PAPER LINES 32-33
hashCache = hashToScalar(concat(hashCache.bytes,x.bytes));
hashCache = hashToScalar(concat(hashCache.bytes,taux.bytes));
hashCache = hashToScalar(concat(hashCache.bytes,mu.bytes));
hashCache = hashToScalar(concat(hashCache.bytes,t.bytes));
Scalar x_ip = hashCache;
// These are used in the inner product rounds
int nprime = N;
Curve25519Point[] Gprime = new Curve25519Point[N];
Curve25519Point[] Hprime = new Curve25519Point[N];
Scalar[] aprime = new Scalar[N];
Scalar[] bprime = new Scalar[N];
for (int i = 0; i < N; i++)
{
Gprime[i] = Gi[i];
Hprime[i] = Hi[i].scalarMultiply(Invert(y).pow(i));
aprime[i] = l[i];
bprime[i] = r[i];
}
Curve25519Point[] L = new Curve25519Point[logN];
Curve25519Point[] R = new Curve25519Point[logN];
int round = 0; // track the index based on number of rounds
Scalar[] w = new Scalar[logN]; // this is the challenge x in the inner product protocol
// PAPER LINE 13
while (nprime > 1)
{
// PAPER LINE 15
nprime /= 2;
// PAPER LINES 16-17
Scalar cL = InnerProduct(ScalarSlice(aprime,0,nprime),ScalarSlice(bprime,nprime,bprime.length));
Scalar cR = InnerProduct(ScalarSlice(aprime,nprime,aprime.length),ScalarSlice(bprime,0,nprime));
// PAPER LINES 18-19
L[round] = VectorExponentCustom(CurveSlice(Gprime,nprime,Gprime.length),CurveSlice(Hprime,0,nprime),ScalarSlice(aprime,0,nprime),ScalarSlice(bprime,nprime,bprime.length)).add(H.scalarMultiply(cL.mul(x_ip)));
R[round] = VectorExponentCustom(CurveSlice(Gprime,0,nprime),CurveSlice(Hprime,nprime,Hprime.length),ScalarSlice(aprime,nprime,aprime.length),ScalarSlice(bprime,0,nprime)).add(H.scalarMultiply(cR.mul(x_ip)));
// PAPER LINES 21-22
hashCache = hashToScalar(concat(hashCache.bytes,L[round].toBytes()));
hashCache = hashToScalar(concat(hashCache.bytes,R[round].toBytes()));
w[round] = hashCache;
// PAPER LINES 24-25
Gprime = Hadamard2(VectorScalar2(CurveSlice(Gprime,0,nprime),Invert(w[round])),VectorScalar2(CurveSlice(Gprime,nprime,Gprime.length),w[round]));
Hprime = Hadamard2(VectorScalar2(CurveSlice(Hprime,0,nprime),w[round]),VectorScalar2(CurveSlice(Hprime,nprime,Hprime.length),Invert(w[round])));
// PAPER LINES 28-29
aprime = VectorAdd(VectorScalar(ScalarSlice(aprime,0,nprime),w[round]),VectorScalar(ScalarSlice(aprime,nprime,aprime.length),Invert(w[round])));
bprime = VectorAdd(VectorScalar(ScalarSlice(bprime,0,nprime),Invert(w[round])),VectorScalar(ScalarSlice(bprime,nprime,bprime.length),w[round]));
round += 1;
}
// PAPER LINE 58 (with inclusions from PAPER LINE 8 and PAPER LINE 20)
return new ProofTuple(V,A,S,T1,T2,taux,mu,L,R,aprime[0],bprime[0],t);
}
/* Given a range proof, determine if it is valid */
public static boolean VERIFY(ProofTuple proof)
{
// Reconstruct the challenges
Scalar hashCache = hashToScalar(proof.V.toBytes());
hashCache = hashToScalar(concat(hashCache.bytes,proof.A.toBytes()));
hashCache = hashToScalar(concat(hashCache.bytes,proof.S.toBytes()));
Scalar y = hashCache;
hashCache = hashToScalar(hashCache.bytes);
Scalar z = hashCache;
hashCache = hashToScalar(concat(hashCache.bytes,z.bytes));
hashCache = hashToScalar(concat(hashCache.bytes,proof.T1.toBytes()));
hashCache = hashToScalar(concat(hashCache.bytes,proof.T2.toBytes()));
Scalar x = hashCache;
hashCache = hashToScalar(concat(hashCache.bytes,x.bytes));
hashCache = hashToScalar(concat(hashCache.bytes,proof.taux.bytes));
hashCache = hashToScalar(concat(hashCache.bytes,proof.mu.bytes));
hashCache = hashToScalar(concat(hashCache.bytes,proof.t.bytes));
Scalar x_ip = hashCache;
// PAPER LINE 61
Curve25519Point L61Left = G.scalarMultiply(proof.taux).add(H.scalarMultiply(proof.t));
Scalar k = ComputeK(y,z);
Curve25519Point L61Right = H.scalarMultiply(k.add(z.mul(InnerProduct(VectorPowers(Scalar.ONE),VectorPowers(y)))));
L61Right = L61Right.add(proof.V.scalarMultiply(z.sq()));
L61Right = L61Right.add(proof.T1.scalarMultiply(x));
L61Right = L61Right.add(proof.T2.scalarMultiply(x.sq()));
if (!L61Right.equals(L61Left))
return false;
// PAPER LINE 62
Curve25519Point P = Curve25519Point.ZERO;
P = P.add(proof.A);
P = P.add(proof.S.scalarMultiply(x));
// Compute the number of rounds for the inner product
int rounds = proof.L.length;
// PAPER LINES 21-22
// The inner product challenges are computed per round
Scalar[] w = new Scalar[rounds];
hashCache = hashToScalar(concat(hashCache.bytes,proof.L[0].toBytes()));
hashCache = hashToScalar(concat(hashCache.bytes,proof.R[0].toBytes()));
w[0] = hashCache;
if (rounds > 1)
{
for (int i = 1; i < rounds; i++)
{
hashCache = hashToScalar(concat(hashCache.bytes,proof.L[i].toBytes()));
hashCache = hashToScalar(concat(hashCache.bytes,proof.R[i].toBytes()));
w[i] = hashCache;
}
}
// Basically PAPER LINES 24-25
// Compute the curvepoints from G[i] and H[i]
Curve25519Point InnerProdG = Curve25519Point.ZERO;
Curve25519Point InnerProdH = Curve25519Point.ZERO;
for (int i = 0; i < N; i++)
{
// Convert the index to binary IN REVERSE and construct the scalar exponent
int index = i;
Scalar gScalar = proof.a;
Scalar hScalar = proof.b.mul(Invert(y).pow(i));
for (int j = rounds-1; j >= 0; j--)
{
int J = w.length - j - 1; // because this is done in reverse bit order
int basePow = (int) Math.pow(2,j); // assumes we don't get too big
if (index / basePow == 0) // bit is zero
{
gScalar = gScalar.mul(Invert(w[J]));
hScalar = hScalar.mul(w[J]);
}
else // bit is one
{
gScalar = gScalar.mul(w[J]);
hScalar = hScalar.mul(Invert(w[J]));
index -= basePow;
}
}
// Adjust the scalars using the exponents from PAPER LINE 62
gScalar = gScalar.add(z);
hScalar = hScalar.sub(z.mul(y.pow(i)).add(z.sq().mul(Scalar.TWO.pow(i))).mul(Invert(y).pow(i)));
// Now compute the basepoint's scalar multiplication
// Each of these could be written as a multiexp operation instead
InnerProdG = InnerProdG.add(Gi[i].scalarMultiply(gScalar));
InnerProdH = InnerProdH.add(Hi[i].scalarMultiply(hScalar));
}
// PAPER LINE 26
Curve25519Point Pprime = P.add(G.scalarMultiply(Scalar.ZERO.sub(proof.mu)));
for (int i = 0; i < rounds; i++)
{
Pprime = Pprime.add(proof.L[i].scalarMultiply(w[i].sq()));
Pprime = Pprime.add(proof.R[i].scalarMultiply(Invert(w[i]).sq()));
}
Pprime = Pprime.add(H.scalarMultiply(proof.t.mul(x_ip)));
if (!Pprime.equals(InnerProdG.add(InnerProdH).add(H.scalarMultiply(proof.a.mul(proof.b).mul(x_ip)))))
return false;
return true;
}
public static void main(String[] args)
{
// Number of bits in the range
N = 64;
logN = 6; // log2(N), set manually
// Set the curve base points
G = Curve25519Point.G;
H = Curve25519Point.hashToPoint(G);
Gi = new Curve25519Point[N];
Hi = new Curve25519Point[N];
for (int i = 0; i < N; i++)
{
Gi[i] = getHpnGLookup(2*i);
Hi[i] = getHpnGLookup(2*i+1);
}
// Run a bunch of randomized trials
Random rando = new Random();
int TRIALS = 250;
int count = 0;
while (count < TRIALS)
{
long amount = rando.nextLong();
if (amount > Math.pow(2,N)-1 || amount < 0)
continue;
ProofTuple proof = PROVE(new Scalar(BigInteger.valueOf(amount)),randomScalar());
if (!VERIFY(proof))
System.out.println("Test failed");
count += 1;
}
}
}
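PROVE (PAPER LINES 36-37) begins by decomposing v into bit vectors: aL holds the bits of v and aR = aL - 1^N componentwise, giving the two identities the range proof commits to, &lt;aL, 2^N&gt; = v and aL ∘ aR = 0. A standalone sketch of that step using plain BigInteger arithmetic instead of the scalar field (so aR[i] shows up as a literal -1 rather than wrapping mod l); the class `BitDecompDemo` and its method names are illustrative only.

```java
import java.math.BigInteger;

// Toy check of the bit decomposition in PROVE: aL holds the bits of v and
// aR = aL - 1 componentwise, so <aL, 2^n> == v and aL[i]*aR[i] == 0.
// (In the real code aR lives in the scalar field, so aL - 1 wraps mod l.)
public class BitDecompDemo {
    static BigInteger[] decompose(BigInteger v, int n) {
        BigInteger[] aL = new BigInteger[n];
        BigInteger tempV = v;
        for (int i = n - 1; i >= 0; i--) {       // same top-down scan as PROVE
            BigInteger basePow = BigInteger.valueOf(2).pow(i);
            if (tempV.divide(basePow).equals(BigInteger.ZERO)) {
                aL[i] = BigInteger.ZERO;
            } else {
                aL[i] = BigInteger.ONE;
                tempV = tempV.subtract(basePow);
            }
        }
        return aL;
    }

    public static void main(String[] args) {
        int n = 8;
        BigInteger v = BigInteger.valueOf(173);  // any value in [0, 2^n)
        BigInteger[] aL = decompose(v, n);
        BigInteger recovered = BigInteger.ZERO;
        for (int i = 0; i < n; i++) {
            BigInteger aR = aL[i].subtract(BigInteger.ONE);
            if (aL[i].multiply(aR).signum() != 0) throw new AssertionError("aL o aR != 0");
            recovered = recovered.add(aL[i].multiply(BigInteger.valueOf(2).pow(i)));
        }
        System.out.println(recovered.equals(v) ? "decomposition ok" : "BUG");
    }
}
```

The two identities are exactly what the z- and z^2-weighted terms of the t polynomial enforce: a prover who commits to any aL, aR satisfying both has necessarily committed to an in-range v.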