GPU mean memory reductions


Keeping it secret would mean waiting a few years for the economy to develop, and it would require a big chunk of mining hardware to use meaningfully. Selling it now bypasses the risk that Grin fails or that someone else claims it first; and there is a multiplicative effect on that reward if someone else stumbled on the same trick. Having someone work on it is a different matter, but the security benefit of a balanced PoW is still there.

I don’t mean to sound like a dick. I’m not rich and $10,000 is meaningful money. But Claymore surely makes that much daily if not hourly. Why would someone like him, who has a meaningful improvement on the public miner, give away the idea for $10k?

The big farms, the ones you are afraid of, have millions of dollars in hardware. If someone could boost performance by even a modest linear factor… you do the math.

I have advocated for a long time for using a simple PoW with lots of peer review and research behind it. Complex PoWs like Cuckoo Cycle favor the elite. You can read my views in this article about CryptoNight:


grin-miner.toml

On line 104:
#currently requires 7GB+ GPU memory
#[[mining.miner_plugin_config]]
#plugin_name = "cuckaroo_cuda_29"

I'm confused about that: for Cuckaroo29, what is the minimum memory requirement?

Thanks

I don’t follow closely enough to know all the details, but you should at least mention whether you’re trying to mine “mean” or “lean”, as the two methods use memory very differently.

That comment is out of date. It’s 5.5 GB now. See the up-to-date guide at https://github.com/mimblewimble/docs/wiki/How-to-mine-Grin
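For reference, the relevant stanza of grin-miner.toml with the CUDA plugin enabled looks roughly like this. The plugin name follows the snippet quoted above; the `parameters` table and `device` key are assumptions based on the standard config layout, so check your own file:

```toml
# Enable the CUDA plugin for Cuckaroo29 (needs ~5.5 GB of GPU memory)
[[mining.miner_plugin_config]]
plugin_name = "cuckaroo_cuda_29"
[mining.miner_plugin_config.parameters]
device = 0   # index of the GPU to mine on (assumption; verify in your config)
```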

Hi John, can you please clarify whether mean mining is possible on a GPU with 4 GB of VRAM? https://github.com/tromp/cuckoo/commit/f9de587703ae259a93fae6c34618a9bf52f4042b
https://forum.aeternity.com/t/cuckoo-cycle-gpu-memory-requirements/1608

Yes, it’s possible, but I believe not without some loss in efficiency. I have no plan to implement that myself but expect 3rd parties will offer that.
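To illustrate the trade-off being described, here is a rough back-of-the-envelope sketch of how splitting the edge set into more parts per trimming pass lowers peak memory at the cost of extra passes. The bytes-per-edge figure is an assumption chosen so the single-pass case matches the ~11 GiB reported later in this thread for C31; it is not a measured constant:

```python
# Hedged illustration: mean-miner memory scales with the number of edges
# held per trimming pass. Numbers are rough assumptions, not measurements.
EDGE_BITS = 31
TOTAL_EDGES = 1 << EDGE_BITS      # Cuckatoo31 has 2^31 edges
BYTES_PER_EDGE = 5.5              # assumed working-set cost per edge

def approx_mem_gib(parts):
    """Approximate peak buffer size when the edges are split into `parts`
    groups, each trimmed in its own pass (more parts -> more passes/time)."""
    return TOTAL_EDGES / parts * BYTES_PER_EDGE / 2**30

for parts in (1, 2, 4):
    print(f"{parts} part(s): ~{approx_mem_gib(parts):.2f} GiB")
```

Under these assumptions, going from one pass (~11 GiB) to four (~2.75 GiB) would fit a 4 GB card, which is consistent with the efficiency loss mentioned above.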

OK, thanks. So the assumption here is that over time some miner will appear for 4 GB cards, covering the GTX 1050 Ti and many AMD Polaris GPUs.

Indeed…

obligatory poem titled “ode to a goldfish” to fill 20 chars:

My

wet

pet


Still sounds like having 11 GB or more is preferable for the best results? I’ve just heard that Sapphire is bringing out a 16 GB version of its Radeon RX 570 specifically for mining Grin. Sounds really optimal:

https://medium.com/@philipwynnjones/https-medium-com-sapphiretechnology-why-gpu-mining-is-making-a-comeback-with-grin-7a85ecfef840

Not seen any performance figures yet though.

The 7 nm Vega with 16 GB of HBM2 memory will be the best option, in my opinion :slight_smile:
It should be available soon…


Could be quite pricey, though. I was using a Frontier Edition for a bit for various coins. Brilliant hash rate but not worth it once the coin prices really started going down. Too pricey hardware and too power hungry.

I predict the required memory will drop when new miners are released. Might want to hold off on that pricey card.

I thought the point was to exclude ASICs by making large memory buffers optimal, since ASICs don’t have that much memory?

I think it’s possible to further optimize the memory layout, and make the miner fit better in 11 GB, boosting fidelity back to about 1. I’ll start working on that when I return from my imminent travels…


I wrote some experimental code in branch https://github.com/tromp/cuckoo/tree/memred2

Not sure if it even compiles.
If the target cuda31.0 (for RTX) or cuda31.1 (for GTX) works, then further memory reduction might be possible with -DNRB1=25.

The system I normally do CUDA development on is under reconstruction. If someone could provide me with ssh access to a machine with either a 1080 Ti or a 2080 Ti, I might find time during my travels in the coming days to make it work.


I’ll message you later today with access to a 2080 Ti card.


Hi Tromp, I can provide you with access to a cluster in the next 2 days. It has a range of resources including V100s. Contact me to discuss.

@tromp I compiled your latest source; it seems a few hashes slower.

Usually I get 2.15 - 2.16 on this rig; this time I only got 2.1 at the top.

And the memory usage is still 11673 MiB.

Hi Tromp, I have a GTX 1070 rig. I’ve seen that some people managed to run their 1070s on the C31 algo; is that possible? I’ve tried on Windows 10 and it’s not working, apparently because Windows reserves some of the VRAM. Is there any way I can run C31 on my 1070s? Sorry for the random question.