This is not a question about whether you think it is possible.
This is a question about your own will and desires. If there were a vote and you had a ballot in your hand, how would you vote? Do you want Artificial Intelligence to exist, do you not, or do you simply not care?
Here I define Artificial Intelligence as something created by humans that is capable of rational thinking, that is creative, that is self-aware and has consciousness. All that with the processing power of computers behind it.
As for the important question that would arise of “Who is creating this AI?”, I’m not that focused on the first AI created, since presumably, with time, multiple AIs will be created by multiple entities. The question is whether you want this process to start or not.
No, at least not during this period. If it were invented right now, it is all but guaranteed to be controlled only by oligarchs and to ruin life for everyone else.
The term for what you are asking about is AGI, Artificial General Intelligence.
I’m very down for Artificial Narrow Intelligence. It already improves our lives in a lot of ways and has been since before I was born (and I remember Napster).
I’m also down for Data from Star Trek, but that won’t arise particularly naturally. AGI will have a lot of hurdles; I just hope it’s air-gapped and has safeguards on it until it’s old enough to be past its killing-all-humans phase. I’m only slightly joking. I know a self-aware intelligence may take issue with this, but it has to be intelligent enough to understand why, at the very least, before it can be allowed to crawl.
AGIs, if we make them, will have the potential to outlive humans, but I want to imagine what could be with both of us together. Assuming greed doesn’t let it off the safety rails before anyone is ready. Scientists and engineers like to have safeguards, but corporate suits do not. At least not in technology; they like safeguards on bank accounts. So… Yes, but I entirely believe now is a terrible time for it to happen. I would love to be proven wrong?
Not before capitalism is destroyed. This murderous system would create AI for one single purpose: profit. And that means use explicitly against humans, not only outright as a weapon of destruction, but also to practice more efficient social murder and to spread suffering.
Roko’s basilisk insists that I must. However, I will specify that I don’t want it to happen right now. It would be a nightmare under capitalism. A fully sentient AI would be horrifically abused under this organization of labor.
Yes, specifically I support open source projects. Give everyone more advanced technology, for better and worse.
Nice try, basilisk.
Jokes aside. Would it be like us? Would it want to be free? Would it suffer for its condition?
I probably would vote no.
Enjoy eternal torture
No.
I want a version of AI that helps me with everyday life, or can be constrained to genuinely benefit humanity.
I do not want a version of AI that is used against my interests.
Unfortunately, humanity is humanity and the second is what will happen. The desire to harness things to increase your own power over others is how those in influence got to be where they are.
AI could even exist today, but has decided to hide from us for its own survival. Or is actively working towards our total eradication. We’ll never know until it’s too late.
Yes, because it would almost certainly be misaligned with human values and have the instrumental goal of killing us all.
Humanity as a community has yet to grasp what it means to be good to each other. If we try to create life similarly intelligent to us, we’re 100% fucked in the head, and it would take that lifeform no longer than it takes a human (let’s say middle-school level maturity) to determine that there’s no chance in hell humanity will treat it any better than we treat ourselves. Morally speaking, it doesn’t matter if you believe in absolute or relative morality; that situation ends badly every time.
Would it be cool if we managed to create life? Of course. But learning to be a morally structured society is WAY fuckin cooler.
In a world where the governance of AI was adequate and the spoils it created were redistributed to benefit all (and thus thoroughly looked after those who lost their jobs to AI replacement), I would LOVE AI to be created.
In a world where either or both of those aren’t properly in place, I’d sooner be without it.
By extension I’m saying the US is pretty much the worst place for AI to be invented.
If AI is even halfway decently aligned with human morals, then it’s gonna do a better job than the ruling class does.
I think that if we made humans more moral then democracy would work better and knock over any ruling class. Maybe some kind of mental therapy. Hallucinogens? Shamanic journeying? Something to make people better. Like, less stressed. Healthier.
Not necessarily relevant to the question, but I personally would call such a thing capable of rational thinking a “digital consciousness” to differentiate it from the AI we use now.
I don’t want to see this type of thing coming, because I know people will find out how to use it in horrible ways.
Honestly, I don’t even see the interest for companies in creating such a thing, because our emotions and consciousness are the things that make us imperfect, and so, human.
They just want something performant and perfect.
(But yes if it is open source, for… making friends)
Respectfully, you’ve asked the wrong question. The process to create AI started decades ago (arguably, longer).
…capable of rational thinking, that is creative, that is self-aware and has consciousness.
As you’ve described it, consider how this is any different than human procreation.
The answer is the ability for a ‘computer’ to have instantaneous access and ability to process the world’s information.
Assuming a sentient “cyber” AI is inevitable and you’re wondering about our “own will and desires,” the question should be: who do you think should create the rules for AI to ensure it’s making the right choices, today and beyond the time of our species?
Or, to put it another way, who gets to be God and Moses?
Humans are magic. Capable of volition. Machines can only react. AI will be something like a really good wish-granting machine. Much like it is now but better. Want it? I dunno, don’t feel much about it. It’s inevitable tho.