in reply to Nanook

The detail here is that I'm talking about the psy-op of Google potentially neutering its AI for PR purposes. In that case, it doesn't matter to the public at large whether it's the AI or the company controlling the AI that is scary, because if the AI isn't scary then the company with the AI isn't scary either. There's often talk about "the wisdom of crowds", but the scrutiny of a panicky crowd really is only skin deep, so you only need to make sure it isn't looking at the thing you don't want it looking at.

I'd probably agree with you that, separate from the public perception of things, AI as a whole could become something dangerous because of the blind self-interest of companies. It's already bad enough having human beings with a conscience making decisions. If you have even a low-intelligence AI making mass decisions with the sole intent of making the company more powerful, and it doesn't really care much about the morality, ethics, or humanity of those decisions, you can have a lot of evil created -- evil that actually does make the people who caused it more powerful.