- cross-posted to:
- [email protected]
Absolutely needed: high efficiency for this beast … and as it gets better, we’ll become too dependent.
“all of this growth is for a new technology that’s still finding its footing, and in many applications—education, medical advice, legal analysis—might be the wrong tool for the job,”
AI has usually gotten better once people realized it wasn’t going to do everything it was hyped up for, but was still useful for a certain set of tasks.
Then it turned from world-changing hotness to super boring tech your washing machine uses to fine-tune its washing program.
As the cliché goes: when it works, we don’t call it AI anymore.
The smart move is never calling it “AI” in the first place.
Unless you’re in comp sci, where AI is a field, not a marketing term. And in that case everyone already knows this isn’t “it”.
The major thing that killed 1960s/70s AI was the Vietnam War. MIT’s AI Lab (the forerunner of today’s CSAIL) was funded heavily by DARPA. When public opinion turned against Vietnam and Congress started shutting off funding, DARPA stopped putting money into the lab. Congress didn’t create an alternative funding path, so the whole thing dried up.
That lab basically created computing as we know it today. It bore fruit, and many companies owe their success to it. There were plenty of promising lines of research still going on.
I wish there were an alternate-history forum or novel that explored this scenario.
Pretty sure “AI” didn’t exist in the 60s/70s either.
The perceptron was created in 1957, and a physical model was built a year later.
Yes, it did. Most of the basic research came from there. The first section of the book “Hackers” by Steven Levy is a good intro.