Grin Mining FAQ - All of the answers to your mining questions


#1

I’m going to keep this post updated with answers to the most commonly heard questions about mining in Grin. Please feel free to (politely!) link this FAQ to anyone asking basic mining questions on Gitter or anywhere else. If an answer isn’t on here, I’ll modify the FAQ to include it.

These answers are meant to be very high level and aimed at users rather than developers.

What Proof-of-Work does Grin use?

Grin uses a proof-of-work system called Cuckoo Cycle.

What is Cuckoo Cycle?

Cuckoo Cycle is a Proof-of-Work algorithm that searches very large graphs for cycles, i.e. paths that loop back on themselves. For most people’s purposes, it’s an algorithm that we hope turns out to be ASIC-resistant. An introduction to how it works can be found here.
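To make the idea concrete, here is a toy sketch of "derive a graph from a header, then hunt for a fixed-length cycle in it". This is illustrative only: the sizes, the SHA-256 edge derivation, and the naive depth-first search are all stand-ins (real Cuckoo Cycle uses siphash, billions of edges, and searches for 42-cycles with far cleverer edge-trimming).

```python
import hashlib

N = 8          # nodes per side of a bipartite graph (toy size)
EDGES = 16     # number of edges to derive from the "header"
CYCLE_LEN = 4  # toy target cycle length (Grin's Cuckoo Cycle uses 42)

def edge(nonce, header=b"toy-header"):
    # Hypothetical edge derivation; the real algorithm uses siphash-2-4.
    h = hashlib.sha256(header + nonce.to_bytes(4, "big")).digest()
    return h[0] % N, h[1] % N

# Adjacency: left nodes are 0..N-1, right nodes are N..2N-1.
adj = {i: set() for i in range(2 * N)}
for e in range(EDGES):
    u, v = edge(e)
    adj[u].add(N + v)
    adj[N + v].add(u)

def find_cycle(start, node, length, visited):
    """DFS for a cycle of exactly `length` edges back to `start`."""
    for nxt in adj[node]:
        if nxt == start and length == 1:
            return [node, start]
        if nxt not in visited and length > 1:
            path = find_cycle(start, nxt, length - 1, visited | {nxt})
            if path:
                return [node] + path
    return None

cycle = None
for s in range(2 * N):
    cycle = find_cycle(s, s, CYCLE_LEN, {s})
    if cycle:
        break
print("cycle:", cycle)
```

Note that, as in the real thing, a cycle of the target length may or may not exist in the graph a given header produces; a miner's job is to keep changing the nonce/header until one does.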

What is ‘Graphs per Second’ (GPS)?

Cuckoo Cycle doesn’t work via hashing as most people understand it; it works by searching through large graphs. It makes more sense to think of solver speed in terms of graph searches per second, or just Graphs per Second.
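Since a single graph search takes on the order of seconds, GPS is just the reciprocal of the solve time. A trivial sketch (the 1.2-second figure is illustrative):

```python
# Converting solve time to Graphs per Second (GPS).
seconds_per_graph = 1.2          # illustrative solve time
gps = 1.0 / seconds_per_graph
print(f"{gps:.2f} GPS")          # → 0.83 GPS
```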

What card/rig/setup will be best for mining Grin?

The short answer is, ‘nobody knows’. And even if we did, the same answer might not hold true next week.

At the moment, the fastest GPU implementation is John Tromp’s Mean CUDA Miner, which (as of this writing) takes about 1.2 seconds per graph on a 1080ti. A 1080 comes in slightly slower, and a 980ti seems to take about 4 seconds. However, all of these numbers should be taken with a pinch of salt as they represent a very early and unoptimised version of the miner. It’s expected that the community will ultimately step up to create more optimised solvers for various platforms. We have no way of predicting what hardware setup will be the fastest as it will depend on what target platforms the community ends up optimising for.

This being said, I’d like to start collecting real stats on particular GPUs, graph times, and energy efficiency, and make sure they’re all listed in this post. This will likely become more feasible after the launch of Testnet2.

I want to build a mining system. How does [BUS Speed/RAM Timing/CPU Architecture/LED Colour] affect solve times?

See above. If you’d like to perform tests on your particular hardware to find out, I’ll be happy to share your results with the community here.

Why are only NVIDIA cards supported?

At present, the only useful GPU solver we have is John Tromp’s CUDA miner. OpenCL/ATI solvers will likely appear in the community very soon after launch, if not before. (If you’re considering writing one, please see the note at the bottom of this post).

Will I be able to run multiple GPUs?

Parallel GPU support should work for NVIDIA GPUs on Testnet2 and beyond. You should also be able to run multiple GPUs and CPU mining in parallel.

Will CPUs be competitive?

Hopefully, but it’s not guaranteed. At the moment, a mid-range i7 is able to search 1 graph in about 3-4 seconds, which is well within range of the current GPU solver. Presently, it looks as if running a CPU miner will be worthwhile. We’d hope the ratio of CPU GPS to GPU GPS stays within about 1:5, i.e. a CPU searches 1 graph in the time it takes a GPU to search 5. However, we don’t know what solvers will exist in the future.
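As a bit of illustrative arithmetic (the rates below are hypothetical, not benchmarks): at that hoped-for 1:5 ratio, a CPU mining alongside a single GPU would find roughly 1 in 6 of their combined solutions.

```python
# Hypothetical GPS figures at a 1:5 CPU:GPU ratio.
cpu_gps = 0.3
gpu_gps = 5 * cpu_gps
cpu_share = cpu_gps / (cpu_gps + gpu_gps)
print(f"CPU share of solutions: {cpu_share:.1%}")  # → 16.7%
```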

Does GPU mining work on Testnet1?

No. Testnet1 is using Cuckoo 16, a reduced graph size that solves very quickly (as in thousands of graphs per second) and uses next to no memory. Mining wasn’t really a focus for Testnet1. Testnet2 and beyond will be running Cuckoo 30, which will require around 4GB of memory and take multiple seconds per graph.
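The memory gap between the two follows from the graph size growing exponentially with the Cuckoo parameter. A back-of-the-envelope sketch, under the assumption that "Cuckoo N" means a graph of roughly 2^N nodes and that a minimal ("lean") solver needs on the order of one bit per node; real solver memory, such as the ~4GB figure above, depends heavily on implementation details:

```python
# Rough memory floor for Cuckoo N, assuming ~1 bit per node of a 2^N-node
# graph. A lower bound for illustration, not a real solver's footprint.
def lean_lower_bound_bytes(n):
    return 2**n // 8

for n in (16, 30):
    mib = lean_lower_bound_bytes(n) / 2**20
    print(f"Cuckoo {n}: ~{mib:.2f} MiB lower bound")
```

Under these assumptions Cuckoo 16 fits in kilobytes while Cuckoo 30 needs over 100MiB even for a minimal solver, and fast solvers trade far more memory for speed.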

My CPU isn’t running at 100% on Testnet1

Again, Testnet1 is not performing ‘real’ mining. Cuckoo 30 on Testnet2 and beyond is very much capable of maxing out your GPU/CPU.

Can Cuckoo Cycle support pooled mining?

Yes. Pool clients can easily prove they were working on a solution by providing solutions of different cycle lengths valid for the given block. Providing pooling software isn’t a primary goal of Grin, but the intention is to ensure it’s supported in the Cuckoo Miner library.

Will Grin be mineable on mobile devices?

Possibly, but it probably won’t be practical for a while. Fast Cuckoo 30 mining requires at least 2.5GB of RAM available for use (regardless of whether you’re CPU or GPU mining), which well exceeds the usual amount of RAM on most mobile devices found today (including top-end ones). There is also another solver (the lean miner) that can perform a search using under 200MB of RAM, but with graph search time measured in minutes even on a fast CPU.

That being said, there are a few mobile devices starting to appear that have 4GB of RAM or more, so creating an efficient Cuckoo Cycle solver for one of these devices is technically feasible (and I should add that we have no idea how well it would perform).

Will Testnet1 or Testnet2 coins be worth anything?

Each coin mined is a point of karma for your assistance in helping us develop and test Grin. Other than that, do not attempt to buy, sell, or trade Testnet coins… they are worth nothing and should never be worth anything due to the beta status of the network from which they originate.

Why not Proof-of-Stake?

Short answer: because nobody has demonstrated, theoretically or otherwise, that PoS works or even can work as fairly or securely as PoW does.


Just one other note: if you’re considering writing a Cuckoo Cycle solver, please consider writing it as a plugin for Cuckoo Miner. This is the library that interfaces Grin with C/C++ solvers, and creating a plugin that works with it should just be a matter of writing to a specific interface. This way, your solver can be included right out of the box in future versions of Grin in a way that’s (hopefully) easy to use and ‘officially’ supported. Please feel free to contact me for details about how this can work, even if you’re intending to solicit donations via your solver.



#3

Reserving for future use.


#4

Reserving for future use.


#5

Reserving for future use.


#6

Reserving for future use.


#7

Nice man! Can’t wait to see some of the GPU mining statistics.


#8

Cheers for this explanation.


#9

Why does Cuckoo 30 take up so much memory? You have mentioned 2.5GB minimum to 4GB average. How much VRAM does a GPU need to support Cuckoo 30?

I see all testing has been on NVIDIA, based on John Tromp’s CUDA miner. Is anyone working on an OpenCL solver?


#10

GPUs need 4GB for good performance, or 5.4GB for best performance. See https://github.com/tromp/cuckoo/blob/master/GPU.md for details…


#11

Thank you for the explanation. Will read some more about testnet2 to understand things.


#13

Will the tokens mined on Testnet2 have value on mainnet or not?


#14

No, no value at all.


#15

Don’t confuse “value” with “price”.


#16

Can this be updated with the new PoW update (equigrin + cuckoo cycle) so that we can see the impact?


#17

I think we might want to have it at least partially implemented to know all the impacts :)


#18

Video guide to mining on windows by @photon


#19

Hi, where can I download a Grin wallet for Windows 7 x64?


#20

If I’m not mistaken, there is no Windows wallet at the moment.
But somebody is coding one -> Grin++ Status Update - Jan 9


#21

Hey, guys!
One question: I tried running the mining software on my AMD rig (6 x RX580 8GB). From what I’ve read, there shouldn’t be any problem running the mining software, but I’m having trouble. Only 1 or 2 of the cards start mining; the others stay on STARTING, and I see this error message in the logs:

2019-01-13T17:19:33Z ERROR, Failed to start worker process: Only one usage of each socket address (protocol/network address/port) is normally permitted
2019-01-13T17:19:39Z ERROR, Exception in # File: Worker.cs # Line: 233 # Member: SendJob Message: Value cannot be null.

Any ideas?
Thanks!