I'm interested in playing with this, but I'm not interested in paying
$3000 for a high-end NVIDIA GPU. It seems possible in principle to
split the task into smaller subtasks that could be distributed across
a Beowulf-style Pi cluster. I hear several enclosures exist that let
you connect multiple Compute Modules over a high-speed bus.
I have minimal experience with Raspberry Pi (not zero, but minimal). I have none with setting up Beowulf clusters, and none with decomposing machine learning tasks and distributing them among processors. Thus, I wonder if there might be an existing project I could learn from and maybe even eventually contribute to, even if only as a tester.
Thanks.
There were some projects that used multiple Pis. From a cost and
complexity standpoint they were more "because we can" than practical.
A multicore AMD or Intel processor would be a better option: if you
skip the high-power GPU, the system cost is lower, and the MPI
cluster code is off the shelf for those processors.
https://mpitutorial.com/tutorials/mpi-hello-world/
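For reference, that tutorial's example boils down to roughly the
sketch below (untested here; assumes an Open MPI or MPICH install
that provides mpicc and mpirun):

/* Minimal MPI "hello world" sketch.
   Build: mpicc hello.c -o hello
   Run:   mpirun -np 4 ./hello */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);             /* start the MPI runtime */

    int world_size;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);  /* total ranks */

    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);  /* this rank's id */

    char name[MPI_MAX_PROCESSOR_NAME];
    int name_len;
    MPI_Get_processor_name(name, &name_len);     /* host it runs on */

    printf("Hello from rank %d of %d on %s\n",
           world_rank, world_size, name);

    MPI_Finalize();                     /* shut the runtime down */
    return 0;
}

The same binary runs whether the ranks land on one multicore box or
on several machines listed in a hostfile, which is why the single
fast machine is usually the cheaper way to experiment.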