RE: We Need to Stop Putting Blind Faith into Big Data
To a certain extent, our “machine learning algorithm” looks much like a little scientist furiously twisting dials on a huge machine.
This is huge and greatly needed. Humans are unpredictable, and an algorithm (no matter how complex) can never get the whole picture, as far as I'm aware. There is something about human interaction and reasoning that gives us something a machine cannot.
Call it intuition or whatever, but the connection that is made can show us whether someone is trying to trick us or is genuinely honest. A machine can be tricked or gamed. Then you have the flip side, where someone could be falsely labeled by a machine when a human could have found they were telling the truth.
I work at a big tech company doing the human side of the work, because the machine can't do everything perfectly, and in many situations we, the humans, need to pick up the slack.
Great post, keep up the hard work!
<3 J. R.
Thanks for the kind comments, JR. I agree with your point about human intuition, though at times I see something interesting in the neuroscience and bio-inspired ML literature: as we struggle to build better neural nets, we learn more about human intuition. Sometimes the parallels are spooky.
But what machine intelligence today really lacks is cognition. None of the AI out there right now, even the state-of-the-art stuff, gives the machine an awareness of what it is, why it is making decisions, and what the consequences of those decisions might be. A self-driving car would drive you straight into a truck just as readily as it would stop at a red light, if its training never covered its current sensor inputs.
And yes indeed, machine learning algorithms can be tricked. There is a whole area of research devoted to this, known as adversarial machine learning, especially in computer vision: coming up with ways to fool ML algorithms (e.g., adding a "white noise" perturbation to an image that is imperceptible to humans, but causes bedlam for the algorithm).
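To make that concrete, here is a minimal sketch of the idea behind one well-known attack, the fast gradient sign method (Goodfellow et al., 2015), using a toy logistic-regression "classifier" in plain NumPy. The weights and the input are made up for illustration; the point is just that a per-pixel nudge of 1% of the pixel range can flip a confident prediction, because tiny changes add up across hundreds of features.

```python
# Sketch of the fast gradient sign method (FGSM) on a toy model.
# Everything here is hypothetical -- random weights standing in for a
# "trained" binary classifier over a 28x28 grayscale "image".
import numpy as np

rng = np.random.default_rng(0)
d = 784                                # 28 * 28 pixels

# Hypothetical trained weights for a binary logistic-regression classifier.
w = rng.normal(size=d)

# A clean input with pixel values in [0, 1], plus a bias chosen so the
# model classifies it as class 1 with high confidence (logit = +3).
x = rng.uniform(0.0, 1.0, size=d)
b = 3.0 - w @ x

def predict_proba(inp):
    """P(class 1) under logistic regression."""
    return 1.0 / (1.0 + np.exp(-(w @ inp + b)))

print(f"clean input:     P(class 1) = {predict_proba(x):.3f}")

# FGSM step: for logistic regression, the gradient of the loss with
# respect to the input is proportional to w, so stepping every pixel by
# epsilon against the predicted class is the worst-case small change.
epsilon = 0.01                         # 1% of the pixel range
x_adv = x - epsilon * np.sign(w)

print(f"perturbed input: P(class 1) = {predict_proba(x_adv):.3f}")
print(f"largest per-pixel change:    {np.abs(x_adv - x).max():.3f}")
```

No single pixel moves by more than 0.01, which no human would notice, yet the logit drops by epsilon times the sum of the weight magnitudes, roughly 6 here, swinging the prediction from confident class 1 to confident class 0. That linear pile-up in high dimensions is exactly why these attacks are so effective against vision models.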