Recently there has been a lot of commotion around text-based AI built on large language models.
They can do impressive things: they give useful answers, and they can even write somewhat usable sample code.
The most famous one right now is ChatGPT, but all of these AIs are basically black boxes that may well have malicious features under the hood.
There are open-source implementations of ChatGPT-style training algorithms (https://www.infoq.com/news/2023/01/open-source-chatgpt/), but training a large language model with 100 to 500 billion parameters requires a sizeable GPU cluster: something like 500 specialized ~$1k cards, not your standard gaming hardware.
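To see why models this size need specialized hardware, here is a back-of-envelope sketch. The 100B and 500B parameter counts come from the text above; the assumption that each weight is stored as a 16-bit (2-byte) float is mine, and this ignores activations, gradients, and optimizer state, which make training several times more expensive than this:

```python
# Back-of-envelope: memory needed just to HOLD the model weights,
# ignoring activations, gradients, and optimizer state.

def weight_memory_gb(num_params, bytes_per_param):
    """Return weight storage size in gigabytes."""
    return num_params * bytes_per_param / 1e9

# 100B and 500B parameters stored as 16-bit (2-byte) floats.
for params in (100e9, 500e9):
    gb = weight_memory_gb(params, bytes_per_param=2)
    print(f"{params / 1e9:.0f}B params @ fp16: {gb:.0f} GB of weights")
    # -> 200 GB and 1000 GB respectively
```

A typical gaming GPU has 8 to 24 GB of memory, so even just loading such a model's weights already forces you to spread them across dozens of cards.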
The biggest computational effort is the initial training run, which chews through a huge training data-set. Once that is done, merely running the model to respond to your queries is far less demanding.
So what's the path to an ethical AI built on FOSS philosophy?
Should people build something like a peer-to-peer network that connects computers together, distributing the computational effort across many participants?
Or should people reduce the model's size and functionality until it can run on a normal computer?
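The "shrink it down" path is plausible because weight memory scales linearly with both parameter count and precision. The 7B model size and the bit widths below are my illustrative assumptions, not figures from the text; the point is that a small model quantized to 4 bits per weight fits comfortably in ordinary desktop RAM:

```python
def quantized_size_gb(num_params, bits_per_param):
    """Weight storage in GB when each parameter uses the given bit width."""
    return num_params * bits_per_param / 8 / 1e9

# A hypothetical 7B-parameter model at different precisions.
for bits in (16, 8, 4):
    print(f"7B params @ {bits}-bit: {quantized_size_gb(7e9, bits):.1f} GB")
    # -> 14.0 GB, 7.0 GB, 3.5 GB
```

That 3.5 GB figure is why aggressive quantization, not just fewer parameters, is a key lever for getting a FOSS model onto consumer hardware.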
I guess the most useful feature is code generation, because that could be used to build better FOSS AI software in the future and to help create libre, open-source programs.
Is there another avenue for a FOSS AI?