I'm interested in playing with this, but I'm not interested in paying $3000 for a high-end NVIDIA GPU. It seems in principle possible to split the task into smaller subtasks that could be distributed to a Beowulf-style Pi cluster; I hear several enclosures exist that let you connect multiple Compute Modules over a high-speed bus.
I have minimal experience with Raspberry Pi (not zero, but minimal). I have none with setting up Beowulf clusters, and none with decomposing machine learning tasks and distributing them among processors. Thus, I wonder if there might be an existing project I could learn from and maybe even eventually contribute to, even if only as a tester.
Thanks.
There were some projects that used multiple Pis. From a cost and
complexity standpoint they were more "because we can" than
practical. A multicore AMD or Intel processor would be a better option:
if you skip the high-power GPU, the system cost is lower, and the MPI
cluster code is "off the shelf" for those processors.
https://mpitutorial.com/tutorials/mpi-hello-world/
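To give a flavor of how little ceremony that involves, here's a minimal hello-world sketch along the lines of that tutorial. It only assumes an MPI implementation such as OpenMPI or MPICH is installed (so that the standard `mpicc` and `mpirun` wrappers are available); nothing here is specific to this thread's setup.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    // Start the MPI runtime; must precede all other MPI calls.
    MPI_Init(&argc, &argv);

    int world_size, world_rank;
    // Total number of processes launched in this job.
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);
    // This process's ID, from 0 to world_size - 1.
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    char processor_name[MPI_MAX_PROCESSOR_NAME];
    int name_len;
    // Hostname of the node this rank landed on.
    MPI_Get_processor_name(processor_name, &name_len);

    printf("Hello from rank %d of %d on %s\n",
           world_rank, world_size, processor_name);

    // Shut down the MPI runtime.
    MPI_Finalize();
    return 0;
}
```

Compile with `mpicc hello.c -o hello` and launch with `mpirun -np 4 ./hello`; the same binary run with a hostfile will spread the ranks across every machine in the cluster, which is the whole appeal of MPI on commodity multicore boxes.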