A Writer Looking to Change the World


Thursday, February 22, 2024

More Thoughts on AI

    I think that by this point we can all agree that “AI” is something of a bust. It could be that it’s just early days, but I feel like the tech people shoved the idea out the door long before it was ready. Most people can see it isn’t even an innovation; it’s a repackaging of things people already didn’t like. But I think the biggest failure of AI doesn’t have to do with the tech itself; it has to do with how it was marketed to capitalists. From the start of this tech bubble, the people in charge of marketing it have acted like it will replace humanity, right down to invoking a tech apocalypse where the machines become so smart that they decide they don’t need us anymore. Why they thought that was a good idea, I don’t know, but at this point I think it’s safe to say that no one is looking for something to replace humanity. If we can’t work, the economy is shot, and I’m pretty sure that an AI smart enough to replace us is smart enough to realize that ruling over a world is worth nothing if you have nothing to rule over. What I think people actually want is AI capable of surpassing humanity. They want a machine capable of leading them.

      Considering the state of our government, it’s not hard to see why. Power in the United States is divided between the government, who we elect, and the businesspeople, who we don’t. We’re seeing why it’s a bad idea to keep half of that power in positions we can’t change when something goes wrong, not least because it makes it harder to get rid of elected leaders who are doing a bad job. It’s at moments like this that insane courses of action start to seem reasonable, in part because they feel like the only thing that will do any good. Any minor change produces a world that, looked at up close, is exactly like the one we have now. It takes time to see the effects of change, and humans aren’t good at thinking long term.

      I don’t think that an AI designed to lead us is a good idea, but even I can see the appeal. The biggest problem facing our world is that everything is in pieces: not just politics, but academics, nationality, culture, just about everything. There is something incredible about living in a world where you can find a group tailored to your exact personality, but the price is that you can go your entire life without realizing your world isn’t real, so long as you never interact with anyone outside your group. To an extent this was also true in the past, but because groups were both smaller and formed by random chance, it was easier to build meaningful connections with people, and you could still live your entire life without seeing that it was all a lie. Now, the only way to keep your world real is to deny that anything outside of it is real, unless you devote your entire life to learning how to build a reality. Most of us can’t do that. I’ve tried, but there’s too much for me to keep track of. We need better tools to help us figure out exactly what people will accept, because even groups are full of people with contradicting interests. I’m just not sure this is the answer, and even if it were, I don’t trust the people in charge to implement it properly.

      Frankly, I have my doubts that AI will become what we want it to be, and it comes down to the problems we’re seeing now. AI is infamous for its “hallucinations,” which come from models learning incorrect patterns, and I distinctly remember reading stories where models insisted they weren’t making anything up. These point to what I think the real problem with AI is: it can’t see reality. Neither can we, but we interact with reality on a daily basis, whereas the models only interact with the reality we feed into them, one piece at a time. I think it’s reasonable to say that in order to build a machine capable of interacting with reality, we first need to understand reality, and reality is made of so many moving parts that we won’t have an easy time doing that. It feels more likely that AI will become a series of tools made to help us interpret data, and not much else. Until we learn how to protect ourselves from the Infinite, I don’t think these new tools will do us much good.

