I like how someone commented on the main article that we're getting close to the point where AI can step up to the plate creatively, and how widespread and easy this will make our lives. But Watson is a giant server farm, not a single PC; this stuff won't make a huge impact until IBM can shrink it, or until computers get much, much faster and smaller. Not that it won't happen, it's just not "around the corner" in any way.
I think "around the corner" type predictions generally fall into two camps:
1. Problems that we don't know how to solve yet, but we think we are close to based on "similar" problems we have solved.
2. Problems that we have a solution for, but it currently takes an unreasonable amount of time to use these solutions in practice.
Problems in class 1 are like AI in the 1960s and '70s: everybody thought we were super close to amazing AI based on the discoveries we'd made, but those estimates were very wrong.
Problems in class 2 are like NLP and ML work in the '90s and '00s. A rather large chunk of the "wow" ML/NLP we have in applications today was pretty much solved 20 years ago, but there was no sane way to run it, certainly not on your cell phone.
Problems in class 2 are safer bets; there do seem to be consistent increases in processing power, memory, etc. Problems in class 1 are harder to guess because, as history has shown, just because a solution seems similar doesn't mean that it actually is (shortest path is solved in polynomial time; longest path is NP-hard; the shortest tour visiting all points once, i.e. TSP, is NP-hard).
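To make the contrast concrete: the "solved" side of that parenthetical is shortest path, which Dijkstra's algorithm handles in polynomial time, while the superficially similar longest-path and TSP variants have no known efficient algorithm. A minimal sketch of the easy case (the graph and node names are just illustrative):

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths; graph maps node -> [(neighbor, weight), ...]."""
    dist = {source: 0}
    pq = [(0, source)]  # (distance-so-far, node)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

g = {"A": [("B", 1), ("C", 4)], "B": [("C", 2)], "C": []}
print(dijkstra(g, "A"))  # {'A': 0, 'B': 1, 'C': 3}
```

This runs in roughly O(E log V); the point is that flipping the objective to "longest simple path" or "shortest tour through every node" destroys the greedy structure this algorithm relies on, even though the problem statements look almost identical.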
I think it's safe to say having Watson on our smartphones is right "around the corner" (20-30 years?); saying that we'll create "creative" AI, not so much.
Wireless communication is widespread enough that I don't think it matters too much where Watson "lives". The inputs and outputs required from "him" (for questioning, anyway, not for training) are tiny, so bandwidth isn't much of a concern. Assuming the architecture is parallel enough that it can respond to lots of people at once, how much it's distributed versus hosted on one system isn't particularly relevant to its usefulness, IMO.
I question how well Watson would handle the millions of requests a day it would get if it were set up like, say, Siri, compared to being asked a single question at a time. Not that you're wrong in any way; they could certainly scale into an even bigger server farm and use the internet to deliver the questions and answers, I just wonder how many more servers they'd need.
I would say that Google Search is only useful to the majority of people when it's searching across an index of the entire web, and an index at that scale is likely not achievable on a desktop. On the other hand, the algorithm itself (though evolving) has been around for years.
I view Google Search as a less complex algorithm over a larger set of data, while Watson is a more complex algorithm over a smaller set of data. (I've been known to be wrong ;-)
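The "less complex algorithm over a larger set of data" point is well illustrated by PageRank, the published core of early Google Search (the real ranking pipeline involves far more, but the published algorithm fits in a few lines). A hedged sketch using simple power iteration, with a made-up three-page link graph:

```python
def pagerank(links, damping=0.85, iters=50):
    """links maps page -> list of pages it links to. Returns rank per page."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:
                share = damping * rank[p] / len(outs)
                for q in outs:  # each page shares its rank among its out-links
                    new[q] += share
            else:
                for q in pages:  # dangling page: spread its rank evenly
                    new[q] += damping * rank[p] / n
        rank = new
    return rank

links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(links)
# "c" ends up with the highest rank: it is linked from both "a" and "b"
```

The algorithm itself is short and stable; the engineering challenge, as the comment says, is running it (and the crawl behind it) over billions of pages, which is why it lives in a data center rather than on a desktop.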
Mostly just that IBM's Watson is so very different from Google Search. That said, you're right: there isn't much of my point that isn't invalidated once you include the ability to exchange data over the internet. Not everyone needs a Watson at home for it to be personal, either; they just need IBM to save their personal settings, etc.
I suppose I was just imagining a world where everyone has their own Watson at home and not served through the web.
I don't think robots will ever be completely autonomous. There will always be a Skynet, a central data center that feeds information to and controls each machine. Otherwise, things could potentially get out of hand if machines are intelligent enough, and all indications are that they very well will be within the next 100 years, if not less.
Correct. I think as intelligence comes into play, SkyNet is going to be inevitable as a preventative measure. Regardless of intelligence, you'd probably want a data center or a control panel of some sort for software updates, analytics, etc.