>>469654
>This is your brain on modernism.
Modernism is largely correct, and post-modernism is just a regression into pre-modernity. Not that this philosophical gripe has much relevance here.
>You assume AI will work like industrial production. I.e. bigger factory = efficiencies of scale.
>But the opposite is occurring. AI works best with curated data sets and grows more inefficient the more data you feed it.
I think you are conflating several things.
Curating data-sets makes for smaller, more efficient AIs as end products, but the curation process requires a lot of people. So in order to make the AIs small, you have to hire loads of people. Large uncurated data-sets need massive IT infrastructure, but not many people. What changes between those two models of making AIs is the ratio of machine capital to labor inputs. To stick with your factory analogy: both cases have a factory, but one is filled with more people while the other is filled with more server racks.
I'm looking at this from the perspective of the economy as a whole, because I'm interested in understanding what effects this will have on labor. I do share your hunch that small, efficient AIs designed for a dedicated purpose using a curated data-set will be the way to go. But I don't share your optimism that there won't be monopoly formation. After all, those small AI companies that each produce a purpose-specific AI can potentially be bought up. Regardless, it's likely going to increase labor demand, which suggests that the bargaining power of labor will increase again.
>Facebook's AI has already leaked and it's spawned a Cambrian explosion of other AI models.
I'm not sure I'd agree with that evolutionary-biology analogy (tech isn't self-replicating like biological organisms), but aside from that, sure, Facebook's LLaMA has been used to create many new variations. But Facebook could have done that on purpose: they could be planning to buy out all those AI start-ups once they build an AI. And since it's based on tech that Facebook built originally, they'll have less technical difficulty re-integrating it.
>The most advanced advancements are being made on the open source side.
Well, obviously open-source AI will be better than proprietary AI; the open model allows for more dynamic development.
>Any models capitalist manage to control will likely be leaked at some point as well.
I'm guessing you're correct about that as well. While that probably favors the open-source model, since tech leaks aren't a problem that (F)OSS can have, it might just mean that most of the capitalists will go open source. Don't get me wrong, that's a positive development, but it's likely not a tech MacGuffin that conveniently overcomes capitalism for us.
As for who'll end up having control over this, I'm not sure. It seems unlikely that a tiny minority of capitalists could control it. That stuff will become extremely complex technology, and even if they had ultimate power, they'd likely be unable to give enough commands to control the thing. A small number of capitalists would represent a tiny information system attempting to control an enormous information system. Machine-learning algorithms also don't create first-order hierarchies where you can find a few "keys to power", which makes this harder still. If we end up with a mega-corporation that does most of the AI, the porkies who technically own it will be the tail attempting to wag the dog.
I'm hoping that this goes in the direction where every person individually gains 100% control over their own personal-assistant AI. At that scale, control is more plausible, because on a systemic level, everybody giving instructions to the AIs represents a much bigger information system on the control side. When everybody has an AI in their tool-belt, we can sort of continue using our "habitual" organizational structures for people.