Kevin Kelly points out that we can't solve problems just by thinking about them, so a super-smart AI can't simply think its way to taking over the world. I would also argue that such a thing doesn't make sense from an energetic standpoint.
Singularity research reminds me of neural network research. Neural networks are singularly unfashionable, and pundits love to point to their lack of practical results; but what I've observed over the years is that a lot of fundamentally important research starts out in neural networks as a hypothesis, gets developed and refined, and is finally renamed so that it is no longer a neural network; perhaps we would call a neural technique a Bayesian decision network, simulated annealing, a Lyapunov functional, or a probabilistic Markov machine. At that point the technique gains mainstream usage and respectability, and we can safely continue to pooh-pooh the neural network. I predict the same for the Singularity: as Kelly suggests, it will never amount to much, whilst simultaneously changing everything.