Yo Dawg, I heard you like CoinJoins

I will be releasing the code in a week or so for a service I will be hosting to help further obscure transaction graphs. The code will be available here once it’s ready: https://github.com/DavidBurkett/GrinJoin. In the meantime, I was hoping to get everyone’s thoughts on the idea, and see if anyone has any interest in helping out, or ideas for improvement. Thanks in advance!

GrinJoin

A CoinJoin service for Grin

Background

In 2013, Greg Maxwell proposed[1] the CoinJoin protocol as a way of anonymizing transactions. CoinJoin is a way of combining the inputs and outputs from multiple transactions into a single transaction. Doing so erases the original transaction boundaries, making it infeasible for blockchain analysts to determine which outputs spent which inputs.

Three years later, the original Mimblewimble protocol was outlined[2] by the pseudonymous Tom Elvis Jedusor. This combined CoinJoin with Confidential Transactions to make a non-interactive protocol for performing CoinJoins, which became the basis for the Grin cryptocurrency.

In Grin, every block is simply a CoinJoin of all of the transactions in the block, resulting in one big transaction. This gives Grin an enormous privacy advantage over Bitcoin, breaking transaction linkability! In theory.

Problem

There are a few problems though. Privacy is only gained if there are other transactions to join yours with. Since Grin is still new, there are only a few transactions per minute on average, meaning there aren’t very many other transactions to mix yours with. To make matters worse, anyone running a node that wants to monitor the transaction pool can see nearly all of the individual transactions before they’ve been combined in a block. So much for unlinkability!

Proposed Solution

As usage grows, these issues become less of a problem. In the meantime, there are ways to make the situation significantly better for those who are willing to wait a few blocks before including their transactions. By introducing a trust-minimized central server (or group of servers), transactions can be collected and combined for a period of time before ever being broadcast to the network and included in a block. Now only the central server knows which inputs and outputs belong to you!

But now this central service knows which node a transaction came from. How can we solve this problem? The same way we solve it for Grin transactions: Dandelion (and later, i2p). By introducing a new tx pool similar to Dandelion’s stempool, let’s call it the JoinPool, we can route our transactions through other nodes at random, obscuring their origin. Rather than “fluffing”/broadcasting a transaction after X hops (on average), the transactions in the JoinPool can instead be sent to the CoinJoin server to be joined with other transactions.
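As a rough illustration of that routing rule (all names here are hypothetical, not Grin++ code), each JoinPool hop could flip a weighted coin: keep stemming the transaction to a random peer, or terminate the route by handing it to the CoinJoin server instead of fluffing:

```python
import random

STEM_PROBABILITY = 0.9  # assumed value; Dandelion terminates with ~10% chance per hop

def route_joinpool_tx(tx, peers, send_to_peer, send_to_join_server):
    """Each hop either relays the tx to one random peer (another anonymizing
    hop) or, instead of 'fluffing' to the whole network, hands it to the
    CoinJoin server to be joined with other transactions."""
    if random.random() < STEM_PROBABILITY:
        send_to_peer(random.choice(peers), tx)   # continue the stem phase
    else:
        send_to_join_server(tx)                  # terminal hop: join, don't fluff
```

Every transaction ends the route in exactly one of the two places, so the server never learns more than "some peer forwarded this to me."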

So now only the central server knows which outputs spend which inputs, but has no idea where that transaction came from. If we wanted, we could even set up a hierarchy of servers, so there’s not a single server that knows all of the original transactions. Each node could choose either a specific trusted server or a random one to send their transactions to. These secondary servers can aggregate transactions as they receive them, and after aggregating “enough” transactions, send the final, joined transaction to the primary CoinJoin server to be joined with the aggregate transactions from the other secondary servers.
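For intuition, here is a minimal sketch of what the joining step itself amounts to in Mimblewimble: concatenating (and sorting) the inputs, outputs, and kernels of the collected transactions. The `Transaction` model below is a stand-in for illustration, not the actual Grin data structure:

```python
# Sketch only (not the GrinJoin implementation). In Mimblewimble, joining
# transactions is just concatenation: the result stays valid because each
# original transaction's kernel carries its own signature and fee proof.
from dataclasses import dataclass

@dataclass
class Transaction:       # hypothetical, simplified model
    inputs: list
    outputs: list
    kernels: list

def coinjoin(txs):
    """Combine many transactions into one, erasing the original boundaries."""
    joined = Transaction(inputs=[], outputs=[], kernels=[])
    for tx in txs:
        joined.inputs.extend(tx.inputs)
        joined.outputs.extend(tx.outputs)
        joined.kernels.extend(tx.kernels)
    # Sorting removes any ordering correlation between inputs and outputs.
    joined.inputs.sort()
    joined.outputs.sort()
    joined.kernels.sort()
    return joined
```

Because the join is pure concatenation, the server never needs keys or signatures from participants, only their finished transactions.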

Roadmap

  1. Finish implementing support for a primary “GrinJoin” server, and start hosting it. Interested parties can manually submit transactions to be included in the next GrinJoin transaction to test the service. I will initially log transaction info for debugging purposes, so at this stage, assume I could compromise your privacy if I wanted to.
  2. Implement a JoinPool in Grin++ to obscure the origin of each transaction. Transactions that choose to use this new CoinJoin feature will eventually make their way to the GrinJoin server. I will remove all logging once everything is stable.
  3. Add support to GrinJoin for running a ‘secondary’ server, and add the ability to Grin++ to choose which server to submit your transactions to.

[1] https://bitcointalk.org/index.php?topic=279249
[2] https://download.wpsoftware.net/bitcoin/wizardry/mimblewimble.txt


I would open with how many input/output pairs you believe you could group together, not a history of Grin’s fancy math.

How many input output pairs can be grouped?

Perhaps join Grin and Join in a way that makes you grin => Groin :grin:


What would limit that in any way?


+1. There should be no limit aside from the block size/weight limit.

This looks like a very interesting proposal. Another way you could go is to obfuscate the inputs and outputs prior to users spending their funds. This would require rounds of shuffle transactions. During these rounds, each wallet would shuffle its outputs with other wallets, segregating shuffled outputs to be prioritized for spending.

One of the benefits of this approach, in addition to allowing fast payments, is that it can also be done in an entirely trustless way, where not even the server knows which inputs correspond with which outputs. For an example implementation, see “CashShuffle” on Bitcoin Cash. Note that shuffling in this manner with Grin would be even more effective because there is no need to worry about concealing the amounts.

Isn’t there hostile input that takes more effort to catch than to send?

What’s the end result of someone feeding 2 transactions that share an input?
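One cheap defense (a sketch, not necessarily how GrinJoin will handle it): before aggregating, the server can index the inputs already claimed and drop any later submission that spends one of them, so a transaction sharing an input with an earlier one simply never makes it into the join:

```python
def select_joinable(submissions):
    """Greedily accept submissions, rejecting any that spends an input
    already claimed by an earlier one (a double-spend within the join)."""
    seen_inputs = set()
    accepted, rejected = [], []
    for tx in submissions:
        inputs = set(tx["inputs"])
        if inputs & seen_inputs:
            rejected.append(tx)      # conflicts with an already-accepted tx
        else:
            seen_inputs |= inputs
            accepted.append(tx)
    return accepted, rejected
```

The check is a set intersection per submission, so catching the hostile pair costs the server far less than it costs the attacker to build and send the transactions.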

This is cool!

Definitely check out my Objective-Dandelion proposal, which might provide inspiration for a routing layer that attempts to maximize aggregation:

Grinbox is already meant to be leveraged for joining transactions. Any reason why you think a net-new service is necessary?

It is? That’s news to me. AFAIK, Grinbox is for offline transaction building, but I thought the transactions were still broadcast normally.

It doesn’t seem to be a supported feature currently, but I think I’ve read somewhere that they wanted to support batching broadcasted transactions.

Then this proposal is to implement that feature :slight_smile:


I wonder what happens if central server goes down?

There will be a fallback mechanism to “fluff” the transaction if it isn’t seen in a certain amount of time.

I love this,
question - as grin gets bigger will we be able to further extend the service to something such as :
a) if a user wants to be put at the front of the queue, in the sense that they take priority over the other transactions still waiting in the pool (if you would call them that), then can we request a compulsory donation (haha).
b) that donation could go towards people who offer currency as stake for circulation, or simply to users who wait the longest for their transactions (patient lizard ppl).

–please humour me / all g if u dont xD

@rodarmor When I first read that proposal, I misread it terribly, and so quickly brushed it off as deeply flawed. Upon re-reading, I realized what you were saying, and it’s actually very similar to what I was planning on doing anyway, with a few key improvements. I apologize for brushing it off before. The only changes I would suggest are that we:

  1. Use the IP address instead of creating a new nodeId concept.
  2. Use the receipt of a new block as the expiration of the patience timer.
  3. Instead of just waiting for the tx to reach a fluff peer, which may never happen anyway due to cycles, use probability as before, except instead of fluffing, make one additional hop to one of the fluff peers.
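Suggestion 2 above could look something like this (hypothetical names; `PATIENCE_SECONDS` is an assumed default): a timer that expires either on timeout or immediately when a new block arrives:

```python
import time

PATIENCE_SECONDS = 60.0  # assumed default; not a value from the proposal

class PatienceTimer:
    """Hold a transaction until either a timeout elapses or the arrival
    of the next block expires the timer early."""
    def __init__(self, now=time.monotonic):
        self.now = now                      # injectable clock for testing
        self.deadline = self.now() + PATIENCE_SECONDS

    def on_new_block(self):
        self.deadline = self.now()          # a new block expires the timer

    def expired(self):
        return self.now() >= self.deadline
```

Tying expiration to block arrival keeps the wait bounded by one block interval while still giving the aggregation window as much time as possible.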

No worries! I surely could have done a better job explaining it.

That’s a good idea. Two peers behind NAT or on the same box would get the same ID, but that’s probably not enough of an issue to introduce a separate node ID.

That seems reasonable, as long as a single block time gives adequate opportunities to aggregate transactions.

An alternative might be to immediately fluff a transaction if it is stemmed to you twice. I.e., on first receipt of a stem transaction you stem it, but if you receive it again, you’re probably in a cycle, so you fluff it. I don’t really have any reason to believe that this is better or worse than using probability, though.
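A sketch of that stem-twice rule, assuming each node simply remembers which stem transactions it has already relayed (names hypothetical):

```python
class StemRelay:
    """Fluff a stem transaction the second time we see it, on the theory
    that a repeat sighting means the stem path has looped back to us."""
    def __init__(self):
        self.seen = set()

    def on_stem_tx(self, tx_id):
        if tx_id in self.seen:
            return "fluff"       # probably a cycle: broadcast to everyone
        self.seen.add(tx_id)
        return "stem"            # first sighting: relay to one peer
```

The trade-off versus the probabilistic rule is per-node state (the `seen` set needs pruning eventually) in exchange for deterministically breaking cycles on the second pass.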