I sometimes wonder if this is a psy-op. Like, Google wants to make people feel less worried about AI, so they just make the AI results totally incompetent.
Shown: An AI saying "there are no known cheat codes" next to the link with all the (working) cheat codes.
sj_zero
in reply to Nanook
For the purposes of what I'm discussing, there doesn't need to be a disambiguation between the two.
Saying "Gemini isn't a threat to me" ends up essentially meaning "Google wielding Gemini isn't a threat to me" in the public's eye.
Contrast with ChatGPT, which displays a lot more basic competence and has people a lot more worried about what OpenAI will do with its models.
sj_zero
in reply to Nanook
The detail here is that I'm talking about the psy-op of Google potentially neutering its AI for PR purposes. In that case, it doesn't matter to the public at large whether it's the AI or the company controlling the AI that is scary, because if the AI isn't scary, then the company with the AI isn't scary. There's often talk about "the wisdom of crowds", but the crowd is a panicky lot whose attention is only skin deep, so you only need to make sure it isn't looking at the thing you don't want it looking at.
I'd probably agree with you that, separate from the public perception of things, AI as a whole could become something dangerous because of the blind self-interest of companies. It's already bad enough having human beings with a conscience making decisions. If you have even a low-intelligence AI making mass decisions with the sole intent of making the company more powerful, and it doesn't really care much about the morality or ethics or humanity of those decisions, you can have a lot of evil committed, and the people who caused it really do become more powerful thereby.