I doubt it will be much use for generative AI: while it could likely accelerate the individual operations, it has no significant memory onboard and is connected by only one PCIe lane. That's not to say people much smarter than me won't be able to do something with it. You can run some smaller text-generation models such as Phi directly on the Pi 5 at around reading speed, and 7B models at around 1 token per second - Ollama is a simple no-code way to give it a try.
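If you'd rather script it than use the Ollama CLI, something like this talks to Ollama's local HTTP API from Python. A minimal sketch: it assumes Ollama is installed and running on its default port, and that you've already pulled a model (the "phi3" model name here is just an example - swap in whatever you pulled):

import json
import urllib.request

# Send one prompt to the local Ollama server and print the reply.
# Assumes Ollama is running (default port 11434) and the model has
# already been pulled, e.g. with: ollama pull phi3
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps({
        "model": "phi3",  # example model name - use whichever you pulled
        "prompt": "Explain PCIe lanes in one sentence.",
        "stream": False,  # return one complete response instead of chunks
    }).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])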
A bit of a bust for beginners and learners at the moment: you can probably do a bit more than is shown in the demos if you get into C++, but Python is currently limited to running through GStreamer (although the full Python API is expected sometime in the next month). If you want to convert your own or other vision models you can, but you need a fairly beefy x86 machine.
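By "running through GStreamer" I mean driving a pipeline from Python with PyGObject rather than calling the accelerator directly. A rough sketch of what that looks like is below - note the hailonet element and its hef-path property come from Hailo's TAPPAS plugin set, and the .hef path and surrounding pipeline elements are assumptions for illustration, so check the examples shipped with the kit for the real thing:

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Sketch: push camera frames through the Hailo accelerator via GStreamer.
# "hailonet"/"hef-path" are from Hailo's TAPPAS plugins; the .hef file
# path and the rest of the pipeline are placeholders, not a working demo.
pipeline = Gst.parse_launch(
    "libcamerasrc ! video/x-raw,width=640,height=640 ! videoconvert ! "
    "hailonet hef-path=/path/to/model.hef ! "  # model compiled for the Hailo-8L
    "fakesink"  # the demos replace this with hailofilter/overlay elements
)
pipeline.set_state(Gst.State.PLAYING)

# Block until an error or end-of-stream, then shut the pipeline down.
bus = pipeline.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                       Gst.MessageType.ERROR | Gst.MessageType.EOS)
pipeline.set_state(Gst.State.NULL)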
Andy
Statistics: Posted by voiceoftreason — Mon Aug 05, 2024 5:10 pm