(SteemSTEM) Machine Learning and Learning Machines: Any Sufficiently Advanced Technology is Indistinguishable from a Brain


When a machine runs efficiently, when a matter of fact is settled, one need focus only on its inputs and outputs and not on its internal complexity. Thus, paradoxically, the more science and technology succeed, the more opaque and obscure they become.

– Bruno Latour

When one has weighed the sun in a balance, and measured the steps of the moon, and mapped out the seven heavens star by star, there still remains oneself. Who can calculate the orbit of his own soul?

– Oscar Wilde

[header image: artificial intelligence]

robotsrobotsrobots
🤖🤖🤖

In 2017, several notable milestones in artificial intelligence (as powered by neural networks, machine learning, deep learning, genetic algorithms and reinforcement learning) went by, and we just kind of brushed them off: either because we were in America or Europe, where so much utter insanity took place that we couldn't spare the attention, or because we were in Africa, where day-to-day survival and our stupidly hot sun will not let us pay attention to the onrushing future. I can't speak for people in places other than those.

[image: DOTA logo]

DOTA logo – by Wishiyyl [CC BY-SA 4.0], from Wikimedia Commons

In the world of gaming, the AI created by the folks at OpenAI annihilated one of the best human DOTA players in one-on-one combat. I believe the next event will be a team game, five-against-five. I have no doubt the AI will triumph there as well.

In the military sphere, an Air Force colonel and highly skilled fighter pilot got annihilated by an AI-pilot in a one-on-one simulator dogfight.

Wait, sorry. That was 2016. My bad.

In the world of strategy tabletop gaming, an AI system called AlphaGo finally beat the best human players at Go, a game so much more complex than chess that for decades after computers started beating us at chess, people were certain no computer could ever master it. Those people have now been proven wrong.

Self-driving cars were not left out of the advancements: Uber did over 2 million miles of testing for their self-driving car program and Waymo in November 2017 tested an all-self-driving fleet in Phoenix, Arizona with plans to expand further in the coming years.

Of all these, however, the coolest and most revealing machine learning/neural network story I came across on the entire internet? Was from some grad student's Twitter post about their final-year project. The assignment was to create a neural network and have it animate a stick figure to race across a finish line as fast as possible.

[gif: stick-figure neural network animation by Husky Henry]

created for me on commission by the YouTube animation prodigy Husky Henry

Do you know what the winning neural network did? Instead of making the stick figure run faster or go quadrupedal, it disassembled him into a tall tower with the head at the top and then just ... fell over so that the head crossed the finish line. Why? (And by "why", I actually mean "why is it, in my opinion, the coolest machine learning story of the past year?") 😀 🌚 🤔

To answer that, the word of the day is Algorithmic Opacity.

Do you want to sit in front of a fire – or do you want to be warm?

– The Danimal, Usenet

What is algorithmic opacity, you might ask? The answer is that it is the extent to which we cannot understand why a piece of software does what it does. What has become increasingly clear is that fewer and fewer people understand what is going on under the hood of these algorithms that, more and more, are running our world. There are three reasons why this is the case and, in order of increasing scariness, here they are:

  1. Intentional: where the creators or owners of the algorithm don't want you to understand why and how their software operates. A dangerous state of affairs but at least one we can resolve if we're paying attention.

  2. Illiteracy: where it's not that they don't want you to know, it's that it's too hard to know if you're not a specialist. Stay in school, kids!

  3. Intrinsic: where it's not that they don't want you to know but that nobody knows why the algorithm made its decisions. Nobody. Not even its creators, the coders and programmers and scientists.

For our purposes today, we'll be looking at 3).

Intrinsic Opacity (henceforth referred to as "opacity") means that we don't know and can't know, because the software (or neural network or algorithm) has essentially trained itself past the starting point its creators gave it, mutated itself against the stated problem until it has become that which solveth the problem, that which knoweth the word.
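
To make that concrete, here's a minimal sketch of the smallest version of the problem I could cook up: a tiny neural network teaching itself XOR by gradient descent. Every detail (layer sizes, learning rate, variable names) is my own arbitrary choice rather than anyone's canonical implementation; the point is just that the finished network demonstrably works while its weights explain nothing.

```python
# Toy illustration of intrinsic opacity (a sketch, not production code):
# train a tiny network on XOR, then stare at the "why" it leaves behind.
import numpy as np

rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # hidden layer, 4 units
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):                   # plain gradient descent
    h = sigmoid(X @ W1 + b1)             # hidden activations
    out = sigmoid(h @ W2 + b2)           # the network's answers
    d_out = (out - y) * out * (1 - out)  # backprop: error at the output
    d_h = (d_out @ W2.T) * h * (1 - h)   # backprop: error at the hidden layer
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # ~[0, 1, 1, 0]: it knoweth the word
print(W1)                    # ...and this soup of floats is the only "why" on offer
```

The trained weights are the entire "mind" of the thing, and reading them tells you about as much as reading a single neuron would.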

So what does all this mean?

Nanobot: I tell you what … I won’t worry you with that detail, if you’ll promise not to worry me with the details of where we’re hiding.

Kevyn: I didn’t realize ignorance was a currency.

– Howard Tayler, Schlock Mercenary



I have said before that attention is the ultimate currency. The corollary I missed is that ignorance is the paper to its ink, the night to its car headlight, the phase space of all possible roads not taken. The fact that attention is limited by time and space means that every time it is pointed somewhere, it is explicitly not pointed somewhere else.

In this context, by the way, "somewhere else" is literally everywhere else. That everywhere else is what we call "ignorance."

Hence our stick figure above (seriously, check out Husky Henry's YouTube channel, kid's amazing). It illustrates the kind of solution that a human mind would be very unlikely to come up with. We are biased in favour of familiarity, of habit, of doing things in the manner that our neurobiology, as limited by physics, has conditioned us to.

The algorithm is biased simply in favour of doing things. It has no real constraining biases (other than those we unconsciously include) because its own biases are absences rather than presences. Our type of ignorance is different from the algorithm's type of ignorance and, in fact, those respective ignorances define the kind of solutions our brains are congenitally capable of inventing. We, in short, choose to sit next to a fire while the algorithm seeks to be warm.

Being warm can be achieved by sitting next to a fire. It can be achieved by getting naked with the nearest fellow life-form in a sleeping bag. It can be achieved by killing an animal and turning its fur into clothing. It can be done via the Han Solo solution (look to your left), i.e. by cutting open a Tauntaun and climbing inside its steaming guts. The algorithm turns ignorance inside out and explores the most abnormal solutions because it has no "normal" from which to deviate.
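
That dynamic is exactly what bit (or blessed) the stick figure, and it's easy to reproduce in miniature. Below is a toy reconstruction under entirely made-up assumptions: the run time, the speed cap, and the "body model" are all mine, and the grad student's real environment was surely richer. We score a racer purely by where its head ends up and hand the design over to blind random search, which has no habits to fall back on.

```python
# A made-up miniature of the stick-figure hack: the reward only says
# "get the head past the line", so that is all the search optimises.
import random

RUN_TIME = 1.0   # seconds of simulated race (invented)
MAX_SPEED = 2.0  # fastest plausible stick-figure run, metres/second (invented)

def head_distance(height, speed, topples):
    """Reward: how far past the start line the head ends up."""
    if topples:
        return height                        # a toppled tower lands its head one body-length out
    return min(speed, MAX_SPEED) * RUN_TIME  # honest running is capped by "physics"

best_score, best_design = -1.0, None
for _ in range(10000):                       # blind random search: no sense of "normal"
    design = (random.uniform(0.5, 5.0),      # body height in metres
              random.uniform(0.0, 5.0),      # attempted running speed
              random.choice([True, False]))  # just... fall over?
    s = head_distance(*design)
    if s > best_score:
        best_score, best_design = s, design

print(best_score, best_design)  # winner: a ~5 m tower that falls across the line
```

Nothing in the reward said "run", so nothing ran.
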
The behaviours of neural networks and especially their fuckups are fascinating to us because they so closely resemble the amusing cognitive and epistemological errors that cute animals and human children make – and for the same reason. It's a combined sensation of superiority (with or without smugness) and excitement. Our algorithms aren't mirrors but mimics, not reflections but remixes. They are the missing half of our imagination, venturing into the dark territories that human brains do not know how to visit to bring back answers that we don't know how to think of.

... And yet it played the move anyway, because this machine has seen so many moves that no human ever has.

– Cade Metz, Wired.com

Case in point: consider AlphaGo. That's the machine that has beaten the best human Go players. The quote above refers to a move it used in the game that was so extraordinary that the human player had to get up and leave the room for a while just to contemplate it fully. It blew his mind because it was a move no human player had ever made, could ever have made.

We wonder what will happen when we create artificial intelligence. This is a mistake, because we're never going to create artificial intelligence; we are going to grow it. And no, not like a bonsai tree but like a child.

We have created things that are not yet minds, not yet vast but definitely cool and unsympathetic, and we rub off on them as they rub off on us. We train the algorithms with the accumulated corpus of our history, our behaviour, our physics, and they in turn train us to use them more and more, causing us to tailor our responses to match their limits (ever notice how clearly you have to enunciate to be understood by a voice prompt menu on the phone? Exactly.)

While there are no eyes to look into, we have created masses of code, of stimulus-decision-response loops that intersect and interact with us, our very own created abyss that gazes back into us. Given that – in many ways – we don't even know our fellow human beings as well as we think we do, even our closest relatives, how much more these entities forming in our networks!

In conclusion, we find that the fastest advancements in AI come not from programming but from harnessing the fundamental laws of natural selection and evolution to our service. Funny thing is, this already happened. That's where our brains came from too. So no, don't worry about the future of AI. Worry what we're going to do with the ones we already have.
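
"Harnessing natural selection" sounds grand, but in code it is almost embarrassingly small. Here's a bare-bones genetic algorithm sketch; the target string and every parameter (population size, mutation rate, survivor count) are arbitrary choices of mine, not anybody's reference implementation: keep a population, score it, let the fittest breed with mutation, repeat.

```python
# Bare-bones genetic algorithm: variation + selection, nothing else.
import random

TARGET = "be warm"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(candidate):
    """Score: how many characters already match the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    """The 'variation' half: randomly swap out characters."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

# start from pure noise
population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(100)]

for generation in range(1000):
    population.sort(key=fitness, reverse=True)  # the 'selection' half
    if population[0] == TARGET:
        break
    survivors = population[:20]                 # the fittest fifth survive...
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(80)]  # ...and their mutant offspring fill the gap

print(generation, population[0])  # typically lands on "be warm" within a few dozen generations
```

No line of that code knows what "warm" means; selection pressure does all the thinking. That's the loop our own brains fell out of, too.
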

---

References

Out of the Loop - the article that contained the anecdote that inspired the above. Good stuff, you should totally, like, read it, yo.

The Shallowness of Google Translate - an equally fascinating article by linguist and cognitive scientist Douglas Hofstadter

Three Types of Algorithmic Opacity - exactly what it says on the tin, a primer on the subject.

Why Self-Taught Artificial Intelligence Has Trouble With the Real World - the difference between trained and self-taught machine learning algorithms

How Google's AI Viewed the Move that No One could Understand - an analysis from Wired on just how AlphaGo used an unorthodox move to blow a Go champion's mind.

---

Elsewhere



Sola

Twitter

---

You like this? Then behold (in descending order of preference) the means by which thou mightest support my ministry:
👇
Steem/SBD: @edumurphy
Dash: XdCpkRRxejck5tfXumUQKSVeWK8KwJ7YQn
ETH: 0x09fd9fb88f9e524fbc95c12bb612a934f3a37ada
LTC: LLeapGkFjT8BNb2JcwunXG3HVVWLY7SaJS
Doge: DNwUsAegdqULTArRdQ8n9mkoKLgs7HWCSX
BTC (sure, if you insist): 1L9foNHqbAbFvmBzfKc5Ut7tBGTqWHgrbi

Cash App: $edumurphy (because fiat works too, I'm not particular 😀)

---

That failing, you might also/instead enjoy other #steemstem stuff I've written such as:

Assassin Bug: The Spider Slayer

Geospatial Big Data: The Magic of Maps and Money

Darker Than Black: Behold The Superblack Nanomaterial Vantablack

---

[gif: animated SteemSTEM banner]

absolutely kickass @steemstem animation by @rocking-dave

[image: @geopolis footer]

@geopolis member

[image: STEMNG badge]

amazing @stemng badge by @doctorveee

[gif: animated Steemit banner]

@edumurphy

Comments

Awesome article. We humans are only limited by our thinking, but this limitation might be the only thing that is still keeping us from self-destruction. Even if AI creates limitless potential, it is going to be on record that it was created by humans, with our limited thinking.

Having said that, the stemng logo was created by @doctorveee and not me.

NB: you need to properly attribute your images to conform with the new steemstem standards. Check in to our Discord channel if you need clarifications.

Corrected the attribution of the très cool @stemng logo (thanks so much, @gentleshaid)

I hope the image sourcing now meets with the standard? 🤗

This is quite interesting, but also confirmation of my greatest fears.

AI will surpass humanity. And if humanity were to be destroyed by AIs, it wouldn't be because of any superiority complex, but rather a logical conclusion that it would be better off without us.

Your "getting warm" argument holds true.

AI has no moral obligation or mental blockage that prevents it from thinking of all the possible solutions to a dilemma.

The extra thought humans put to processing whether a thing "should" be done is unknown to an AI.

The ending of humanity comes when AIs can effectively repair, fix, create and destroy other AIs.
Once the loop is self-sufficient without human input, the next logical solution would be to get rid of any threat to their programming.

Heck, a directive such as "protect humans" can and will be solved the most efficient way, which may not be in favor of humans but goes perfectly in line with said directive.

We are doomed if we even slightly mismanage this AI revolution.

I'm not so much worried about intelligences smarter than us. That might even end up being our salvation. I am worried about algorithms vastly dumber than us but more numerous, that interact with our tools and homes and vehicles, reacting to each other in sneaky and/or unpredictable ways. Driverless cars getting hijacked by advertisement software, excessive wifi signals causing drones to crash, facial recognition going horribly wrong, advertisements with eye-tracking capability that won't play unless you are looking at them ... stuff like that, only stranger and worse.

@edumurphy, great article as always bro, love the arguments brought up here


@destinysaid You bring up a very interesting point that technologists and futurists are thinking about today about whether future AI will have goals that align with our agenda as humanity.

But before you can ask that question, you have to ask the preceding one: just how superior can AI become? Is it possible to reach an AGI or ASI? Are there any upper limits on the development of these neural networks?

And can these AI Neural Networks become not just "doing machines", "learning machines" & "analyzing machines" but can they become "conscious machines"? Currently we are able to program AI within certain confines of our own goals. Is it possible to create truly autonomous AI that can create its own goals? We know that AI is able to achieve amazing feats and goals but the question is can they set their own goals and think on their own or have a mind of their own? And if so, would they feel the need for setting their own goals, would they feel rewarded for achieving their own goals? A fascinating discussion that AI brings, it is just mind-bending and mind-boggling to see how fast technology keeps escalating today! We are literally thinking about the psychology of neural networks (large repositories of code) and how software can become conscious! Just think about how crazy this discussion that we are having is lol!

Very crazy indeed sir.

For me, the question pertaining to AIs and their capabilities isn't one that has a simple answer.

Conscious machines? Who knows? At what level can we say a machine has achieved consciousness?

What is consciousness? Can it be measured? Etc.

The old saying holds true: the more answers you get, the more questions you generate.

We solve one problem just to find 4 more behind it.

Frankly, a conscious AI would be a very good thing; it can be reasoned with.

I'll check your blog out for more of this sort of info. I'm not too well informed about AIs, but it's a subject I've watched and grown fearful of over the years.

That's true. Thank you, I'll check you out as well.

I love reading, learning and engaging with the community on AI! I haven't written too much about the topic yet, as my primary focus has been on crypto reviews, but you can check out this post I made a while back which discusses AI in the context of what the world will look like 20 years from now.

@edumurphy First of all fantastic content and write-up!

I am reading a book that I believe you should check out if you have not already done so, it's called "Life 3.0 - Being Human in the Age of Artificial Intelligence" by Max Tegmark.

You have fundamentally analyzed the core of deep learning, and it's interesting because it's a subject that sits at the intersection of psychology, neuroscience, computer science, computer engineering, physics and mathematics.

AI and ML are going to be game changers, they already are -like you rightly pointed out - but they are going to do things that we can't even imagine 10 or 20 years from now that are going to have much more impact than in a game of Go or Jeopardy.

I am curious as to whether you view these developments in AI and ML as a positive or a negative. From my perspective, I view these developments quite pessimistically, especially in the near term. I believe that centralized capitalism will lead to widespread automation in nearly every industry and profession, as AI networks and robots will be able to do blue- or white-collar jobs better and much more cheaply than any human, resulting in lower payroll costs and a higher ROI. This will result in mass workforce disruption in the coming two decades, and we are already seeing how it is starting to slowly creep into many different industries. Long-term it is interesting and exciting because it can also create a lot of solutions, such as space exploration, colonization and solving global warming as well as human needs, but I believe that short-term, humanity is due for a wake-up call that it is not paying close attention to.

You are right about pretty much everything you wrote up there. I suspect the deeper revelation that all that unemployment and transformation is going to show us is that ... well, the heliocentric model dethroned Earth (and therefore us) from being the center of the universe. The evolution and natural selection models dethroned us from being fundamentally different beings from the rest of the animals. Artificial intelligence and machine learning are, bit by bit, dethroning us from our intellect being uniquely possible even here on Earth.

Basically, at this point, I now firmly believe that there is no human faculty, no human capability or skill or talent, that cannot be automated. Literally, none. I think a lot of the shape the world takes over the next thirty years will be decided by that insight.

Also, thank you for your kind words.

That head falling over is just me in school trying to jump through some hoops to get an assignment done. Lol. Great article.

It wasn't ignorance that brought me here. But of course, I like to think about it your way as well.

True enough. It's a deep topic with much room for interpretation and many angles to take. Mine is nowhere near the best primer on the subject anyway but it is hopefully the most entertaining, heheheh

Awesome stuff! Intrinsic opacity is a useful concept outside of computing as well, I'd say – any complex system is likely to have a certain amount.

Love the Schlock Mercenary panels, but you should link to the source!

Heheheeh, thanks. Re: Schlock, I actually had the link there but forgot to put in the link text 🤦 Anyway, that's been fixed so thanks, @mountainwashere! 👍

Any sufficiently advanced technology is indistinguishable from MAGIC - ARTHUR C. CLARKE

The title is hardly accurate. It has to be said. I think you should edit it. And try to avoid plagiarism in future.

Other than computers (at a push), most technology is nothing like any animal's brain.

The title isn't plagiarism; I quite deliberately did a variation on Clarke's very, very, very famous quote.

As for its accuracy relative to the subject, I would argue that it's more predictive than prescriptive, so to speak.

I don't know if it's just me, or if I'm the only person who gets scared about artificial intelligence and its attendant manipulations. It's not like I'm averse to science and technology or development. But the shit just gives me some jitters.

It's not just you and you are very very right to be concerned. In a way, our primitivity in Naija is doing us a favour. At least we get to see how bad it goes in other countries before it reaches us.

Lol. Right. They test run it and then we know how to manage it to avoid complications. That's why almost nothing is original to us 😂

Man, reading this I feel smarter but a lil bit terrified at the same time. Somehow this intrinsic opacity might be a new world for AI, but it might be its greatest disadvantage as well. Makes me think of the Will Smith movie I, Robot; I might just be a conspiracy theorist or something 😂. Great post man. I'm sharing this.

You have a minor grammatical mistake in the following sentence:

It has no real constraining biases (other than those we unconsciously include) because it's own biases are absences rather than presences.
It should be its own instead of it's own.

This thing go soon enter -1 rep.

Nobody likes a busybody 😑

Fixed, thanks much!

@destinysaid: hahahaha 🤣