
We’re Just Getting Started with AI…

Remember, we’re just dipping our toes in the proverbial pool with AI.  There are incredible things going on, and the lessons we’re learning now can be applied and built on again and again.

What strikes me is that we expect all of this to be perfect at such a “young” stage in these technologies.  Yesterday’s editorial was about AI and chain-of-ownership challenges.  How do you know that data is “right?”

But as I listened a bit to the Facebook hearings, with other social media companies in the hot seat over all things political, and at the same time looked at people’s expectations of AI… it’s clear that we have already started to rely on these systems – very early in this learning process.

I think about the problems that have to be solved that are going to be real sticklers, and the things we’re likely to get wrong, at least partially, in the early going.  Self-driving cars – I’m all for them.  But clearly there are some logic and automation challenges ahead.  Just wait until vehicle automation is regularly making life-and-death decisions.

“You’re coming up to a light, and a pedestrian has darted in front of the car.  If you go left, you hit another car; if you go right, there is a family on the sidewalk.  You can’t stop in time.”

The car is going to have to choose, and there will be no winners.  Of course, human drivers make these decisions every day around the world.  But when computers start making them, it will be very challenging to figure out “right” and “wrong.”
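To make that concrete, here is a deliberately oversimplified sketch of how such a choice might reduce to minimizing an expected-harm score.  Every name, weight, and probability below is hypothetical, invented purely for illustration; no real autonomous-vehicle system is this simple.  The uncomfortable part is that someone has to pick those weights ahead of time, and that is exactly where the “data morals” question lives.

```python
# Hypothetical sketch: picking the least-bad maneuver by minimizing
# an expected-harm score. All weights and probabilities are invented.

# Someone must decide, in advance, how much each kind of harm "costs."
HARM_WEIGHTS = {
    "pedestrian": 10.0,  # the person who darted into the road
    "occupants": 8.0,    # people inside our car or the other car
    "bystanders": 10.0,  # the family on the sidewalk
    "property": 1.0,     # vehicle damage only
}

def expected_harm(outcome):
    """Sum of (probability of serious harm * weight) over affected parties."""
    return sum(prob * HARM_WEIGHTS[party] for party, prob in outcome.items())

# Each maneuver maps the parties it endangers to an assumed probability of harm.
maneuvers = {
    "brake_straight": {"pedestrian": 0.9, "occupants": 0.1},
    "swerve_left":    {"occupants": 0.6, "property": 1.0},
    "swerve_right":   {"bystanders": 0.8},
}

# The "choice" is just arithmetic once the weights exist.
best = min(maneuvers, key=lambda m: expected_harm(maneuvers[m]))
print(best)  # whichever option scores lowest under these invented numbers
```

Change a single weight and the car makes a different choice.  That’s the point: the morality isn’t in the algorithm, it’s in the numbers we feed it, and nobody has agreed on those numbers yet.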

The same is true of the automation and learning going on around our systems, specifically our data.  We’re relying on our systems to make choices.  To notice trends.  To prevent certain things we don’t like from happening.  The social platforms all have a very deep understanding of what’s happening on their systems, of human behavior and all of that.  It’s how they make money: advertising based on those behaviors.  But now there is an expectation of morals for automation.  The questioning of the social media platforms asks why they didn’t notice who was buying what types of ads, and how they were paying, and… and… and…

It’s valid, but we’re still learning here too.  I’m not making excuses for ANYONE.  The thing that struck me in all of this is the assumption that all data is known, visible, considered, modeled, and so on.  That all results are correct and that all data points are weighed perfectly.  In short, that all of these systems are mature enough to be trusted at face value.

But we’re not there yet.  There hasn’t been enough learning.  There isn’t enough history, and, well, data morals – all of this has to be built into the machine learning and AI.  And how do we do that?  How is that determined?  How are data morals going to be agreed upon and implemented so we know the tough choices are being made with all the right pieces and parts considered?

We’re so early in this – it’s critical to learn, observe, tweak, develop… lather, rinse, repeat.  And keep in mind that we have to keep repeating that cycle to get it right.