
/leftypol/ - Leftist Politically Incorrect

"The anons of the past have only shitposted on the Internets about the world, in various ways. The point, however, is to change it."


File: 1684339099194-0.png ( 12.88 KB , 500x377 , corporate ai.png )

File: 1684339099194-1.png ( 33.43 KB , 568x612 , foss-ai.png )

 No.469467

This is less about the technical aspects and more about the civilizational consequences.

You can comment on what I wrote, but feel free to ignore my takes and make your own predictions.




—–
I think AI development will have three phases:
—–

1 <The first phase
is already complete. It ran from the 80s until recently, and it was basically a niche application of brute-force statistical analysis of large-ish data-sets in very technical industries. Only a few people ever used it.

2 <The second phase
began recently with more broadly accessible text and image generators that put out original new text and images based on statistical patterns derived from analyzing very large data-sets (a toy sketch of this follows below). This type of AI is characterized by being "a brain in a vat" that only interprets signals that humans give it. This is the reason it has a tendency to hallucinate facts or generate distorted images.

3 <The third phase
begins when AIs get their own sense perception that allows them to learn from the material world directly, without human brains as intermediaries. This is also a hard requirement for making them reliable machines. It is more than just plugging sensors like cameras and microphones into a computer; there will be huge technical challenges in computer hardware and software. If you are clamoring for society to move on from the stale postmodern philosophical malaise, this might be your jam.
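
To make the phase-two mechanism concrete, here is the toy sketch mentioned above (Python, word-bigram Markov chain). Real generators use neural networks rather than lookup tables, but the principle is the same: derive statistical patterns from a corpus, then sample new text from them. The corpus string is a made-up placeholder.

import random
from collections import defaultdict

corpus = "the cat sat on the mat and the dog sat on the cat".split()

# "training": record which word follows which in the data
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

# "generation": walk the learned statistics to emit new text
word = random.choice(corpus)
out = [word]
for _ in range(10):
    if word not in follows:
        break
    word = random.choice(follows[word])
    out.append(word)
print(" ".join(out))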

—-
Predictions:
—-

1 <Phase one
tech enabled a lot of industrial optimization and it generated online search results. It also powered high-frequency stock trading. However, it also made advertising really creepy, and it made social media a super combative place, because conflict brought more user engagement.

2 <Phase two
tech will have a greater impact on society. Artists will get proletarianized in the most literal sense: they'll stop making art by hand and start operating machines, replicating what happened to manual labor 100 years ago. It will likely also impact information workers; corporate middle-management roles will probably shrink a lot. AI will also begin taking on some of the tasks of teachers. The upside is that every student could get their own tutor, but letting machines raise children will have downsides as well.
AI will be ruthlessly attacked in the legal arena, because legacy capital still holds much sway over governments and wants to use the legislature and judiciary to defend its privileged status. That's why AI lawyers will become such a big business, and human lawyers will probably be downgraded to the position of a legal aid that interfaces the tech with the courtrooms. This will also be used for higher-order decision making in automated high-frequency stock trading. What's left of the archetypal 90s and 2000s cocaine-snorting stock-exchange-bros will disappear.
There are upsides too: it will be easy to make content for open source video games, and there will be other opportunities for community-oriented tech, like an AI documentation helper that walks you through configuring your gnu plus linux. The corporate tech, however, will likely have a very negative impact on human psychology. If you stick with open-source AI that can be fully controlled by the user, you'll probably be fine tho. This is a tiny dark age, but it'll have an opt-out if you are able and willing to put in the effort to set up your own tech.

3 <Phase three
will be driven by making AI super cheap to train and super reliable, because it will learn from data generated by measurement instruments, which is less labor intensive and more objective. This will enable the creation of true AI scientists and become THE "killer app". It will change society in a very fundamental way, the same way the scientific revolution changed society in the 18th, 19th and 20th centuries. The influence of realism-oriented machine minds will sweep away a lot of superstitions and deceptions we currently believe. If this happens in the context of a class society, a new ruling class will use it to sweep away the incumbent ruling class. It will be technology disrupting the myths that justify surplus extraction, just as the industrial bourgeoisie used machine capital to sweep away the feudal aristocracy and their religious surplus-extraction justification myths. So turbulent times ahead, but this might be the ticket for biological modification of human minds and bodies without risking it degenerating into a type of eugenics that seeks to erase certain people. At least there is the possibility for that, because the machine minds tasked to shape biology could, at least in theory, be made transparent enough to rule out ill intentions.

—-
Geopolitical trends:
—-

The US won the phase 1 contest in all disciplines in a veritable tour de force, with the EU and China sharing a distant second place.

China will win the AI phase 2 contest for the big "multitool AI" that aims to do a little bit of everything, simply because they are the largest systemically coherent, technologically advanced "civilizational unit". They can support more data centers and they'll have more people inputting data, all within a consistent civilizational framework.

Phase 2 will also have a contest for smaller purpose-specific AIs, and that will be an open competition without any foregone conclusions. It will definitely favor an open source model, because that enables more dynamism in development than any proprietary stuff. So whatever geopolitical bloc manages to fund the most open source projects will win this.

Phase 3 tech will have the most difficult birth in the context of capitalism, especially if Michael Hudson is correct about the techno-phobic tendencies in "neoliberal finance feudalism". You can already hear the rumblings, similar to horse-carriage producers trying to ban the motor car, or music distributors trying to ban the internet. Some countries with entrenched interest groups could manage to suffocate this stage of the tech to hold on to power, which could lead to regional tech stagnation and dramatic geographic technological differentials. The last time that was the case, in the 18th and 19th centuries, the subsequent technology equalization was "extremely unpleasant", to put it mildly.

—-
How to be safe:
—-
As the social consequences of these developments become more apparent, the discourse about ethical technology will grow louder. But don't be fooled by that: the technology you can control will benefit you, while the technology you can't control will make you suffer. There is no substitute for maximized user control. By the time well-intentioned institutions can protect you, the damage will already be done. And many predators will try to fool you into thinking they want to protect you; they're the ones who will try to take away user control or harm free-software efforts to create AI. Keep in mind that the free-software sphere is so far the only place where actually existing ethical software has been reliably produced.

—-
Class dynamics:
—-
Phase 1 in the capitalist context led to precarious gig-work.

The capitalist class wants to use this to raise unemployment and force wages down, but they're out of luck: Phase 2 AI is not reliable enough, it needs humans to shepherd it. Though many labor-aristocracy positions will become proletarianized.

Phase 3 tech will be easier to implement for socialists, because there is less ideological friction with scientific realism and less political friction from ruling-class infighting. Within a capitalist context, Phase 3 will likely benefit medium-sized capitalists more than large ones, because data from sensory input has diminishing returns with scale: beyond a certain point it's just going to be duplicated data.

Sorry for the wall of text, I was in a verbose mood.

 No.469488

I'm a computer science major and frankly all the hype over AI is overblown bullshit. AIs today aren't functioning the way people think they are. They aren't "thinking" or deducing anything. They are simply taking a pre-existing matrix of inputs and repeating what they know based on a pre-existing matrix of outputs.

AI is a bunch of hype from people who don't understand anything about what they are talking about. Like Y2K.

We will never have real, actual artificial intelligence that can think like a human, and even if we do, it is at least another century away.

 No.469492

>>469488
>They aren't "thinking" or deducing anything. They are simply taking a pre-existing matrix of inputs and repeating what they know based on a pre-existing matrix of outputs.
So… basically what people do

 No.469500

>>469488
>frankly all the hype over AI is overblown bullshit
You are correct: the current AI wave will probably follow the same ebb and flow of the capitalist crisis cycle as other changes to the productive forces have. Maybe it'll be like the dotcom boom.

>AIs today aren't functioning the way people think they are. They aren't "thinking" or deducing anything.

No, of course none of this tech "thinks like a human". Calculators don't "think" like a human either, but they are still supremely useful and allowed us to externalize a cognitive task from brains to machines. The result was that we could do orders of magnitude more calculations. The goal isn't to replicate thinking; the goal is to make a machine complete cognitive tasks. Maybe we should call it artificial information cognition and acronym it to AIC.

>We will never have real, actual artificial intelligence that can think like a human, and even if we do, it is at least another century away.

100 years for an artificial brain is very optimistic. We have been perfecting artificial muscles with various types of motors and actuators for over 200 years, and recreating the properties of human muscles with high fidelity is still somewhat elusive; there are laboratory-grade prototypes, but nothing has reached mass production yet. Keep in mind that if you want to make a machine think like a human, it needs a body similar to a human one as well, and it needs to live among human society.


 No.469641

>>469488
You're a LARPing faggot if you can't see how huge AI will be. No, it won't literally replace humans, but it will most likely tremendously increase productivity. It is already being used successfully in a lot of applications.

 No.469642

>>469641
>you can't see how huge AI will be.
I think those new generative AIs have huge potential. But how much of that potential the capitalist mode of production can realize is a different question.

Consider that this is to some degree post-scarcity technology. In the old Star Trek shows like TNG (the one with Picard), they introduce the idea of a holodeck: people go into a holo-projection room and start describing to the computer what objects and parts of the environment they want holo-projected, and the computer just generates it without anybody having to specify every single detail. This is the beginning of the technology behind that functionality. But capitalists have a terrible record with other tech that leans towards post-scarcity; so far they have failed at harnessing even a small fraction of its potential. So far they have had no ideas beyond trying to re-introduce artificial scarcity, where their business model fights against the most fundamental operating principles that make the tech work.

>No, it won't literally replace humans, but it will most likely tremendously increase productivity.

Yes, that is true, but capitalists literally want the opposite of that. They don't appear to have a desire to produce more and/or better; they just want to hire fewer people.

If the AI requires that you hire a huge number of people to produce training data and babysit it, but in return you get vast rewards out of it, the capitalists might not go for that.

It sounds reasonable to employ large numbers of people to produce an even larger reward (high productivity), but the capitalist class appears to be motivated to invest in technology only to reduce labor costs. They also used to invest in technology in order to better compete, but AI companies are already lobbying for regulations that block competition. Usually there's at least a phase of intense market competition before a monopoly emerges and begins closing down all the potential avenues for competitors. They are trying to establish a monopoly through political means before they have even cornered the market.

If the capitalists behave the way I just described, and only realize the aspects of the technology that allow them to reduce labor costs, this might not be the epochal change that many are predicting, regardless of how big the technological potential is. If the shortcomings of capitalist economics hobble AI, and the promising aspects of the tech get relegated to the hobby corner of computer enthusiasts, it might take decades to complete.

 No.469654

>>469642
This is your brain on modernism. You assume AI will work like industrial production, i.e. bigger factory = economies of scale.
But the opposite is occurring. AI works best with curated data sets and grows more inefficient the more data you feed it.
And capitalists won't be able to maintain a monopoly on it with barriers to entry. Facebook's AI has already leaked, and it's spawned a Cambrian explosion of other AI models. The biggest advancements are being made on the open source side. Any models capitalists manage to control will likely be leaked at some point as well.

 No.469655

Looking at this as if historical progress works in global stages directed by a thought leader tragically misunderstands how this works. I can tell you there is work being done far in advance of what is released today, some of which will be a dead end. It would be trivial to do better than ChatGPT if you really wanted to, but ChatGPT is for public use and designed to be basically a giant search engine.

The big leap in AI is not a technological throughput problem; the problem is that our own theories of knowledge and cognition are dogshit. The actual work on this is conducted in secret. They've chopped up enough brains to figure out how this shit works.
The basic problem is that AIs do not comprehend meanings in the way we do casually, and algorithmic computation in information systems, and systems thought generally, is borked. You'd probably do better training humans to be slaves than making AI resemble humans in expression and comprehension of meaning. The computer is ultimately a machine designed to perform a rote task, rather than interface with the material world in the way we do. That's been the fundamental problem with naive AI approaches. Noobs think like this. The people who really know what AI and compsci do have always known that the most important part of the cybernetically controlled system that must be governed is the human user, and so that is the direction everything in IT went. We build machines to control people and we don't do a particularly good job of it, and mostly we build machines to kill off the masses, because that's the only idea the people in charge actually care about or want to pay for. They do some work on building torture machines, and they pretty much know that humans can be broken trivially.

If you did want to make an AI that was more adaptable, you probably wouldn't emulate general intelligence at all. Instead, you would build an AI around the principle of being basically a walking encyclopedia intended to interface with a human user, and the human user who is adept at searching and utilizing this tool would be very powerful. That would be the smarter approach, and for a time it seemed like that was where the smart money was. After 2000 that idea was deprecated in favor of this AI-driven tyranny myth that politicians love, but any competent engineer knows it won't work and has no interest in building it effectively. What will result is a machine designed to immiserate people and little else. We could build better machines and a better society very easily, but they went for killing us instead and shouting "die, die, die". Communist Satanic Gangster Computer God, indeed…

Also, statistical analysis fuckery is just eugenic horseshit. There are things you use statistical analysis for, but a substitute for intelligence is not one of them. OTOH, the longer they maintain shitty paradigms, the better it is for us. They aren't going to do anything good with actually useful AI.

 No.469656

You're also operating on the delusion that we are not living through a depopulation event right now; this really is the end of any humanity we would recognize. The ruling class has already gone openly Satanic and let the creepy shit out. They know they have weapons to kill large cities at will and to round up and gas anyone they don't want to keep. It's been over for a long time. Now we just see the cards coming out. I think a lot of people knew this was coming, and that's why so many stopped caring and there's little will to fight any of this.

 No.469657

The endgame doesn't work the way they dream it will, but it will make sure we continue to suffer for a long time, and there's no real way to get rid of this. The only people who end it are the people in a hidden world, when they feel they've killed enough and cowed everyone into submission. Then they can do their scientific despotism thing or whatever the fuck they dreamed about.

At that point the economic arrangement will be something different than what we've known up to now. They've always known capitalism is dogshit and they laughed at people who thought they could go on like this. The PBs had to be placated, so the oligarchs kept market economics going and gamed them to ensure that anyone not in the club would cannibalize each other. The ideas given to the masses were designed to liquidate them as quickly as possible and induce them to embrace pointless struggle and avarice of the worst type, culminating in what they did with Reagan and Thatcher. You can't get much worse than the "die die die die die" that they've been shouting ever since. This is the end of the world, at least for most of us. If we live, it won't be much of a life, and it's not like hard times will make anything but a bunch of screaming faggots who revel in torture, which is what these fucking Kraut bastards always were.

 No.469658

>>469654
>This is your brain on modernism.
Modernism is largely correct, and post-modernism is just regression into pre-modernity. Not that this philosophical gripe has much relevance here.

>You assume AI will work like industrial production, i.e. bigger factory = economies of scale.

>But the opposite is occurring. AI works best with curated data sets and grows more inefficient the more data you feed it.
I think you are confusing several things.

Curating data-sets makes for smaller, more efficient AIs as end products, but you need a lot of people for that curation process. So in order to make the AIs small, you have to hire loads of people. The large uncurated data-sets need massive IT infrastructure, but not many people. So what changes between those two models of making AIs is the ratio of machine capital to labor inputs. To stick with your factory analogy: both cases have a factory, but one is filled with more people while the other is filled with more server racks.

I'm looking at this from the perspective of the economy as a whole, because I'm interested in understanding what effects this will have on labor. I do share your hunch that small, efficient AIs designed for a dedicated purpose using a curated data-set will be the way to go. But I don't share your optimism that there won't be monopoly formation. After all, those small AI companies that each produce a purpose-specific AI can potentially be bought up. Regardless, this is likely going to increase labor demand, which suggests the bargaining power of labor will increase again.

>Facebook's AI has already leaked, and it's spawned a Cambrian explosion of other AI models.

I'm not sure I would agree with that evolutionary-biology analogy (tech isn't self-replicating like biological organisms), but aside from that, sure, Facebook's LLaMA has been used to create many new variations. But Facebook could have done that on purpose, because they could be planning to buy out all those AI start-ups once they've built an AI. And since it's based on the tech that Facebook built originally, they'll have less technical difficulty re-integrating it.

>The biggest advancements are being made on the open source side.

Well, obviously open source AI will be better than proprietary AI; the open model allows for more dynamic development.
>Any models capitalists manage to control will likely be leaked at some point as well.
I'm guessing you are correct about that as well, and while that probably favors the open source model, because tech leaks are not a problem that (F)OSS can have, this might just mean that most of the capitalists will go open source. Don't get me wrong, that's a positive development, but it likely is not a tech-McGuffin that conveniently overcomes capitalism for us.

As for who'll end up having control over this, I'm not sure. It seems unlikely that a tiny minority of capitalists could control it. That stuff will become extremely complex technology, and even if they had ultimate power, they'd likely be unable to give enough commands to control the thing. A small number of capitalists would represent a tiny information system attempting to control an enormous information system. Machine-learning algorithms also don't create first-order hierarchies where you can find a few "keys to power", which makes this harder still. If we end up with a mega-corporation that does most of the AI, the porkies who technically own it will be the tail attempting to wag the dog.

I'm hoping that this goes in the direction where every person individually gains 100% control over their own personal assistant AI. At that scale, control is more plausible, because on a systemic level everybody giving instructions to the AIs represents a much bigger information system on the control side. When everybody has an AI in their tool-belt, we can sort of continue using our "habitual" organizational structures for people.

 No.469659

If you wanted to plan the economy based on productive metrics and cut out the managerial bullshit, the solution is simple in any era. No political leadership ever wanted to do that though, because that would give away their plan to do away with the workers and most of humanity. Otherwise, none of this shit about financial institutions would have been tolerated. It would have been pure planned economy, no commodification of anything, you get what is doled out to you and you don't get to complain. If they were going to dole out anything to the plebs, that wouldn't have been objectionable and it would have been far more efficient than anything we got. They're not going to give a fucking thing to the people though. Anything they give will only be there to trap them and herd them to the slaughter. You're in an odd position where the communists would actually want to retain commodification and money, because in the new plan, they get completely fucked and frozen out, and selected classes will get everything. The models they have are Ingsoc and Brave New World, and that's what won the Cold War. Congratufuckinglations.

 No.469660

>>469488
Y2K at least was an actual thing that employed a lot of computer programmers who worked on legacy systems. It would have caused issues and did cause a few, but the Y2K thing was not as pressing as it seemed. Basically, it was presumed people wouldn't keep using legacy stuff or proliferate computers that much, and would upgrade their databases. It was not just about "fixing Y2K", which is so trivial a monkey could do it. It was about rolling out new business machines.
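
For what it's worth, the trivial part looked roughly like this "windowing" trick, sketched in Python (the pivot value is an assumption; real systems picked their own). The hard part was finding every legacy record and code path that stored two-digit years, not writing the function.

PIVOT = 50  # assumed cutoff: 00-49 -> 2000s, 50-99 -> 1900s

def expand_year(yy):
    # map a two-digit year onto a four-digit year around the pivot
    return 2000 + yy if yy < PIVOT else 1900 + yy

assert expand_year(99) == 1999
assert expand_year(5) == 2005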

AI at present is nothing much, except perhaps a work tool that could be used, if anyone believed in working.

The jobs are destroyed not by AI but because they were never needed. No one needs service work. You could fire everyone working in superfluous food-service jobs and literally nothing would be affected except those employees. We can live without Subway, FFS.

 No.469662

>>469655
>The actual work on this is conducted in secret. They've chopped up enough brains to figure out how this shit works.
>The basic problem is that AIs do not comprehend meanings in the way we do casually

If we wanted to model AI on human brains, we'd have to build computers based on memristor logic circuits instead of transistors; memristors are the closest analog to the ionic processes in neuronal activity. You can theoretically simulate that in software, but practically it takes orders of magnitude too many traditional logic components, so you have to start with a type of logic component that gets you close enough. Memristors are at present at the stage of development transistors were at in the 1960s: very expensive, very unreliable, very energy-inefficient, and only available in very rudimentary circuits. If there were a big push for investment like there was for computers half a century ago, we could have working brain-simulation accelerator chips in 30-50 years. And there is no big secrecy; the human brain connectome project is public science. I think you can even download it, you just need a university-grade supercomputer to run it.
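
For the curious, a minimal sketch (Python) of the linear ion-drift memristor model from the 2008 HP/Strukov paper: the device's resistance depends on the history of the current through it, which is the property that loosely resembles ionic processes in neurons. The parameter values are illustrative, not from a real device datasheet.

import math

R_ON, R_OFF = 100.0, 16000.0  # resistance bounds, ohms
D = 10e-9                     # device thickness, meters
MU = 1e-14                    # ion mobility, m^2/(V*s)

w = 0.1 * D   # state variable: width of the doped region
dt = 1e-6     # time step, seconds

for step in range(20000):
    t = step * dt
    v = math.sin(2 * math.pi * 50 * t)        # 50 Hz driving voltage
    m = R_ON * (w / D) + R_OFF * (1 - w / D)  # current memristance
    i = v / m
    w += MU * (R_ON / D) * i * dt             # linear drift of the dopant boundary
    w = min(max(w, 0.0), D)                   # clamp to physical limits
    if step % 4000 == 0:
        print(f"t={t:.4f}s  memristance={m:.0f} ohm")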

All that said, there is little economic value in replicating human minds; we already have tons of those. All that R&D cost to replicate an existing ability is not really worth it; we want AI for the mental abilities that we lack. But we will eventually develop this tech, because it would allow us to copy Grandpa's and Grandma's minds before they die, and then spin them up in a VM if we want to commune with the dead. Preserving geniuses might be nice too: you could ask a copy of Einstein's mind to explain general relativity to you, cyber-Leibniz or cyber-Newton could teach you calculus, and cyber-Marx could tell you about economics.

 No.469663

>>469656
>You're also operating on the delusion that we are not living through a depopulation event right now
This is a somewhat chauvinistic outlook. Yeah, the neo-liberals are destroying the west and its population, the mortality rate is going up across all age groups, and it's turning into fascism by extreme wealth inequality, where only people with enough money can buy the right to live. Many people outside the west are getting destroyed even harder, but it's not the fate of humanity; it's not going to be the fate of the entire world. It just looks that way to us because we're helplessly strapped into a roller-coaster that's racing into the abyss.
Maybe we get lucky, some degree of realism prevails, and the neocons won't be able to steer the west into a suicide run against that Asian hyper-power, and we'll be able to recover, eventually.

>>469660
>if anyone believed in working.
People never believed in working as an end in itself (including protestant Christians); people only ever believed in working for a reward. Sisyphus absolutely despised rolling the stone up the hill over and over for no apparent purpose or reward.

What you said could almost be interpreted as capitalists complaining about people not wanting to work themselves to death with nothing to show for it. You need at minimum a solid social democracy with a bearable wealth distribution, and at least somewhat effective political institutions that are responsive to the interests of the masses, in order to create an environment where a work ethic can survive. Neo-liberalism brutally punishes people who have the instinct to build things; almost nobody makes it into adulthood without having that beaten out of them. The system rewards gangsters, not builders: https://invidious.snopyta.org/watch?v=DPt-zXn05ac

 No.469664

>>469658
>Modernism is largely correct
It's not, but Modernists can fool themselves into believing it because very little of what they say is proven empirically. And anything that goes against what they say is handwaved away, since they are the ultimate arbiters of what's true, not something objective like empiricism.

 No.469665

>>469658
>Curating data-sets makes for smaller, more efficient AIs as end products, but you need a lot of people for that curation process,
No you don't. And you can easily share curated datasets. Some people are doing things as simple as scraping the highest-rated comments in a Reddit forum, which takes all of a few minutes.
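
A rough sketch (Python) of that kind of quick-and-dirty curation, using Reddit's public JSON endpoints; the subreddit, score threshold, and limits are placeholder assumptions, and rate limits / API policy apply in practice.

import requests

HEADERS = {"User-Agent": "dataset-curation-sketch/0.1"}
SUB = "AskHistorians"  # hypothetical choice of forum

# top posts of the subreddit
top = requests.get(
    f"https://www.reddit.com/r/{SUB}/top.json",
    params={"limit": 5, "t": "all"},
    headers=HEADERS, timeout=30,
).json()

samples = []
for post in top["data"]["children"]:
    # each thread's JSON is a 2-element list: [post, comment tree]
    url = "https://www.reddit.com" + post["data"]["permalink"].rstrip("/") + ".json"
    thread = requests.get(url, headers=HEADERS, timeout=30).json()
    for c in thread[1]["data"]["children"]:
        if c["kind"] == "t1" and c["data"].get("score", 0) > 100:
            samples.append(c["data"]["body"])  # keep only highly-rated comments

print(len(samples), "comments collected")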

 No.469667

>>469662
That's not the point. The point is that what the brain does to compare meanings and assemble its knowledge is very different from the modus operandi of a rational computer. This is a problem no matter how much computational force you have for algorithmic computing - searching for references with meaning to build understanding is very expensive, but humans specialize in pattern recognition. It's not just about brain structure, but about what we do all of our lives. Humans are lousy rational computers because not one part of the brain was designed by nature to do that. We only acquired this symbolic language after some time, and it was never acquired perfectly or evenly. Computers are things we designed with symbolic language and logic in mind, which they execute very fast for the tasks we use them for. We don't have "algorithmic" resolution of meanings as facts, because what we actually do to sort out meaning is not purely symbolic language. We have a meaning, and then we can refine it by choosing words that are best fits or some expression that gets the idea across. We vary in our ability to communicate symbolically the thought in our head. Language, however we utilize it, is limited to symbolic representations.

We may be able to convey some idea with a grunt and gesture and with context, but without an effective means to sense that context and piece it together, we don't necessarily know the intent. Even with our ability, it is easy to confuse gestures or to be unclear about what we said when speaking or writing. This is an inherent limitation of symbolic language.

It is not a hard barrier to building an AI by any means, but it is something you would consider if you wanted to process language with something other than a search engine, or with some machine learning that acquires rules of thumb and applies them in stilted ways. For example, the format of ChatGPT is remarkably regular, and it does a decent job of conveying things that are intelligible when you are requesting symbolic representations, like something written in encyclopedia form. If you ask ChatGPT questions about love in philosophy, it could easily be confused about what you actually wanted to talk about, and not really get your arguments. Of course, ChatGPT is a glorified search engine, but if you asked a chatbot to store information and possess a "personality", anything digital and algorithmic will appear stilted and miss subtle cues, even if it could read your face. GPT is being trained to act like an inquisitor rather than a friend or someone who will be able to get your situation and offer advice. This is in part deliberate, because tech does not want the pleb to feel too comfortable, and has already adapted humans to accept an inquisition in their lives.

The real advance as I said is for humans to reconcile with the machine and learn how to utilize it, what it is good at, and what programs do. We were actually working towards that, and we did that simply because young people were familiar with computerization. It's almost natural for a lot of people to figure out computers and programming intuitively.

The "mind uploading" idea is pure ideology. If you were going to replicate a human brain and consciousness, you wouldn't build a computer, but an artificial brain. That would be fairly easy. When you look at the human brain, as much as it does, it's not that complicated a beast. It has a lot of storage, but a lot of its memory is not hard-stored; it is reconstructed by associations it can make. Efficient memory in humans is not filed like a computer's, but is the result of humans being adept at filing, compartmentalizing, and figuring out how to turn on knowledge of their specialty and relate it to other specialties. There is of course a limited capacity for any human brain, and that is one of the great problems with humans: an architecture that was never well-built for even the written word has to take in symbolic writing and information in volumes it was never able to cope with. This is why so many are mystified by AI memes, not counting the ideologues who just think like managers. Most people, though, know the hoopla is bullshit from said managers. In gaming lingo we call the AI the Artificial Idiot, and even a good programmer can't get around that.

 No.469668

You might be able to replicate this function in algos without any fancy hardware, but it would be a work of art and won't ever lead to general AI comparable to us. It would be more expensive than existing AI models, which do what they're intended to do. Usually AI schemes don't try to think like us, but work with things that are very interesting for information. The hard part is getting humans to conform to it, because humans do not like being cattle or led to the concentration camp, like many of these people insinuate will happen.

 No.469669

My point is, the way brains produce knowledge and information is not reducible to symbolic expressions, as if we "think in symbols" that conform to a model or a conceit of systems thought. I'm writing about this exact issue, maybe I'll share when it is complete.

 No.469670

This is a difficult thing to grok with our philosophies of science and the models we can build, but we intuitively have some sense of how we think and how we recall information, and we do not do it in laborious ways where we sort our inner mind like a technocratic polity. That's not what humans do well and we have to be unlike that just to adapt to technocratic society.

Basically, humans are massively parallel in their thinking, because we evolved to react to a large "bus" of information from the senses. Language started as a neat trick we sometimes used, before it was selected for due to its utility, one of the first examples of artificial selection. (Most ancient rule of humanity: retards die, and this persists all the way to now; that's who was ritually sacrificed. That's how "natural selection" happened in humans, rather than some passive environmental competition.)

 No.469671

>>469664
>It's not, but Modernists can fool themselves into believing it because very little of what they say is proven empirically. And anything that goes against what they say is handwaved away, since they are the ultimate arbiters of what's true, not something objective like empiricism.

This is a new one, I have to admit.

Empiricism grows out of the scientific revolution during the Enlightenment, which marks the beginning of modernity. But I have never heard of empiricists completely rejecting modernity before. There is a strange empiriocentrist idea that rejects all other philosophical ideas, and usually that's very hip with 14-year-old radical skeptics, which I still support because it's better than teenagers larping as Nietzschean Übermenschen.

I agree that modernity has to have a flaw of some kind, because somehow it didn't complete its task and it was possible for society to partially slide backwards into premodern thinking. You're welcome to suggest improvements to fix that, but don't bother pretending that "post-modernism" is something new; it's just old shit pretending to be new.

 No.469672

Off-topic, but a thought about evolution popped into my head. As you probably know, I'm not a Darwinian, and I increasingly don't believe there can be a thing called "natural selection", as Darwin described it, that functions very well. It is a factor that can be conceived, and it is relevant to ecological views and the mechanics of populations in competition or cooperation. At the basic level, though, my belief is that the only shifts in life that lead to speciation are mutations or "freak accidents" which confer some mechanical advantage on the life-form, which allow it advantages not just in life but in mating rituals. I assume that animals with sexual habits do not engage in sex randomly but possess some ritual to suggest partners are desirable, and the mates who are successful draw more partners by this appeal rather than by their survival. Survival would be a factor in being able to reproduce in the first place. Maladaptive mutations, which is many of them, lead to death or failure. Many mutations, though, are simply not relevant or only lead to gradual variation within a species. The reason you don't see transitional forms is that most variations are not consequential for speciation, and simply become something average within the species or eventually melt into something less relevant. My view is mechanistic to the core and does not concern the genetic question, which really isn't relevant to natural selection or success in mating. So much ignorance in this is dictated by a need to defend eugenics at all costs.

 No.469673

In the case of humans, the development of language kickstarted mechanisms which favored the growth of that faculty, which led to a lot of desirability and success in life, which led to a rapid explosion of the faculty. As humans adopted language more in their society, the section of the brain where language develops would strengthen due to intensive use of it - and this is literally Lamarck, but for such a function Lamarck would be correct. We know that human brains grow and adapt due to their environment and conditions, and if language were adopted, it would incentivize that growth, and then the selection of those who demonstrate more developed language, which starts a chain reaction until there is no more significant advantage to growing this function. At some point, anyone who "didn't get it" was out of the mating game, and then as the humans figured out how to talk to each other, they started ritual sacrifices and the killings began.

 No.469674

This shows that making general rules for specific situations is a bad idea, and denying the details of something like language development - a unique development in the history of life so far as we know - is really bad natural history.

 No.469675

>>469665
<Curating data-sets makes for smaller, more efficient AIs as end products, but you need a lot of people for that curation process,
>No you don't. And you can easily share curated datasets. Some people are doing things as simple as scraping the highest-rated comments in a Reddit forum, which takes all of a few minutes.

Sharing the datasets and machine-learning weights is good praxis; it will prevent the economy from doing the same work more than once.

You are also correct that there is some already-curated data openly available (although the curation quality of updoots from reddit is very questionable). Capitalists will still have to hire people to generate or curate data, because most of the use-cases for commercial applications aren't represented online. Milking the general intellect is only going to get you so far.

 No.469676

File: 1684953400491.png ( 42.51 KB , 600x600 , brain-melt.png )

>>469667
For once I agree with you.

However, I don't understand why you think it's such a big problem that machine intelligence can't mimic human thinking without sounding like an unhinged loon. We already have humans that can do all that.

I don't care if the AI doesn't understand; I just want AI to do stuff like function as an interface for technical documentation, to reduce the tedious work of chasing down some technical detail. I have software projects where I already solved all the complicated high-level logic problems on paper, but I'm not able to implement them because I would die of frustration grinding through mountains of technical documentation to find the correct software framework that has the libraries with the necessary optimizations, and so on. Even simple things like telling me what command to paste into a config file to turn on a functionality would be a great help. None of that stuff requires understanding meaning. Before you object: I get that this is no more than a slightly more sophisticated search function, but who cares, if it's supremely useful.
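
Half of that use-case doesn't even need a language model. A minimal sketch (Python) of the "slightly more sophisticated search function" over documentation snippets, using TF-IDF from scikit-learn; the snippets and config keys are made-up placeholders.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# stand-in documentation snippets
docs = [
    "To enable the cache, set cache_enabled = true in app.conf",
    "The logging level is controlled by the log_level key",
    "Network timeouts are configured via the timeout_seconds option",
]

vec = TfidfVectorizer()
doc_matrix = vec.fit_transform(docs)

query = "how do I turn on caching"
scores = cosine_similarity(vec.transform([query]), doc_matrix)[0]
print(docs[scores.argmax()])  # best-matching snippet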

I get that ChatGPT can do those things, but I think corporate AI will come with some kind of novel psychological terror, and I don't want to risk subjecting myself to that; I kinda got burned by interacting with corporate social media already. So I'm looking for ethical FOSS AI that's simple, useful for task-specific queries, and runs on consumer hardware I can afford. Simple tools without the risk of brain-melting side effects.

 No.469677

File: 1684957104032.jpg ( 622.1 KB , 947x1131 , 20230508_084455.jpg )

AI will not be captured by porky, because the number of labor hours required to build it outstrips capitalists' ability to provide them.
The Giza Pyramids are estimated to have taken 100 million labor hours to build. It takes the same amount of time to build a single automobile. It takes an order of magnitude above that to create an operating system. And it probably takes an order of magnitude over that to build an AI.
My guess is that capitalists' ability to control labor hours in a single project tops out around 100 million, which is why a capitalist has never built something equivalent to the Apollo space program. The state's ability, I think, tops out around 1 billion. But AI is going to require tens of billions of labor hours and more, and there's no organization on earth that can control that much labor time and still own it outright.
Capitalists realize this; that's the real reason Elon Musk called for a six-month moratorium on AI development. He's hoping they can do something now to keep AI privatised before its development is completely out of their control.
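
Spelling out the post's order-of-magnitude arithmetic (Python; the numbers are just the rough estimates above, not independent figures):

# all numbers are the post's rough estimates, in labor hours
pyramids = 100e6                # Giza pyramids
car_model = 100e6               # one automobile model, all copies
os_project = 10 * car_model     # "an order of magnitude above that"
ai_project = 10 * os_project    # "an order of magnitude over that"

capitalist_ceiling = 100e6      # largest single capitalist project
state_ceiling = 1e9             # largest single state project

print(ai_project / 1e9, "billion hours")  # 10.0
print(ai_project > state_ceiling)         # True: beyond both ceilings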

As an aside, I think there are some exceptions to the 100-million-labor-hour rule. I think a company like Apple actually has a state-level-and-above ability to control labor time, which is why they're so highly valued.
They have indeed created their own operating system on their phones. It's telling, though, that their desktop OS uses BSD as its base. So while the mighty Apple could create one OS from scratch, they couldn't create two.

 No.469678

>>469677
What you say about capitalists possibly lacking the ability to organize enough labor to build AI sounds plausible. What I don't get, however, is why you think it takes 100 million labor hours to build a single automobile.

 No.469681

>>469678
I saw that estimate somewhere. I should have said a single automobile model. The number includes distribution, marketing, and the total time to build all copies of a single model. Seeing as how a popular model can sell into the millions, it seems plausible.

 No.469682

>>469681
Oh, that makes more sense, thx for clearing that up.

 No.469694

>>469676
The point I'm making is not about the essence of human thought or knowledge, but about how we mechanically approach a problem compared to how an algorithm is written to solve a problem. Becoming a good computer programmer and approaching problems with such a machine is not trivial, nor is it universal to solving problems. Humans do not think algorithmically when solving all of their problems - far from it. Humans are constantly active and react to many events around them, and possess an ability to integrate all of that information quickly into models which make sense to us. It makes us very good at recognizing patterns and figuring out machines or situations, through a process that is highly alien to an algorithm which must break down any sense-data into logical propositions. The algorithm can be programmed to do this, but there are long-understood barriers to doing it with a naive approach. Algorithms, for example, do not handle infinite regress with naive approaches - they will enter an infinite loop if carelessly programmed. Humans avert that as part of their functioning, unless they are made to go mad - and this madness is usually the result of humans' own symbolic knowledge faculties being turned against them, rather than something that happened because their brains were just defective. Humans with severe brain damage manage to continue operating in some way without being "mad" in their own approach to the world. They will likely not have the ability or function they would like or we would expect of others, but they will in their own way proceed through life, and to them what they do is as reasonable as it could be and makes some sense. Very often, the "mad" are not disconnected from the world at all, and see the same things a normal human sees, doing many of the things a normal human would. But this gets into a long discourse about political insanity and what it would mean for sanity to exist in a genuine sense, because there is indeed a genuine disconnect with the world and material events which is different from how we are told to consider insanity. Clearly insane people are allowed to rise to prominent positions and their insanity is celebrated, while innocent people who did nothing wrong or little wrong are declared madmen because they hold the wrong political conceits.
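
A small toy illustration (Python, made-up example) of the infinite-regress point: a program that follows circular definitions blindly recurses forever unless the programmer adds an explicit guard; nothing in the machine supplies that cutoff by itself.

def lookup_naive(term, defs):
    # follows definitions blindly; loops forever on circular ones
    nxt = defs[term]
    return lookup_naive(nxt, defs) if nxt in defs else nxt

def lookup_guarded(term, defs, seen=None):
    seen = seen or set()
    if term in seen:  # the guard a careless program omits
        return term
    seen.add(term)
    nxt = defs.get(term)
    return lookup_guarded(nxt, defs, seen) if nxt else term

circular = {"meaning": "sense", "sense": "meaning"}
print(lookup_guarded("meaning", circular))      # terminates
# lookup_naive("meaning", circular) would raise RecursionError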

The point here wasn't about the machines being "crazy", but to make clear what a computer programmer does when he or she writes a program to solve a problem. There is a whole approach and philosophy to writing computer programs and their best function and utility for us, which you would know if you got into the field and talked to programmers. It's a big thing in the open source community.

It should also be clear that the machine and the algorithm don't have this power to corrupt people in and of themselves. A computer is a tool. The question is who commands the tool and what is commanded. The computer is deployed primarily to command us, rather than the natural world.

 No.469695

>>469694
To put it another way, the problem isn't that computers exist, but that the NSA exists, and the computer makes the NSA possible. You will never compete with the NSA's command of information individually or in any organization you would create, let alone the wider network the NSA is connected to with the state.

 No.469696

>>469695
And of course, you don't get to wish the NSA out of existence because you don't like it. Something like the NSA, CIA, etc. will exist for reasons that make a lot of sense. The world where this is forbidden is the fantasy. Intelligence and command of information would always be carried out in some way, and this is the way it is done and can be done in the time we live in.
