Barreling into the Future: Introducing Anne Ahola Ward’s Law of Machine Perception

“What we measure improves” is known as Pearson’s law. I have lived and died by this rule for the majority of my career. The cornerstone of scalable growth has always been measurement. Today I’d like to add to Pearson’s law by contributing my own.

Anne Ahola Ward’s Law of Machine Perception: what is not measured cannot be modeled; therefore, any data model or dataset that does not capture human-level perception cannot exceed human-level perception.

Ahola Ward’s Corollary on the inverse of Pearson’s Law: What is not measured cannot improve (except coincidentally). If something does improve coincidentally, you have no way of knowing what caused it, and the improvement is not reproducible; hence, it falls outside the scientific method.

Machine learning does not model creativity whatsoever; it merely attempts to create a model that fits one or more predetermined datasets in the most accurate manner. In this way, it is the most successful adaptation of the Turing test to date. Previously, the most famous example was ELIZA, the artificial intelligence program that simulates a psychotherapist. Fortunately, we’re not talking about creativity here; we’re talking specifically about human perception. That is much easier to reason about and reproduce.

Average human intelligence for predetermined tasks CAN be exceeded by a group of experts, but while the group MAY exceed any one of them individually, it cannot exceed ALL of the experts put together. The accuracy is enhanced, but that gain comes from eliminating mistakes; it does not endow the process with any new intelligence. Human perception changes with society’s technological advances. One way of improving accuracy is to have a group of experts consult. Does this break the law? No, because they are producing better-than-human datasets, and without proper measurement that wouldn’t be possible.
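
As a back-of-the-envelope illustration (the 10% error rate is an assumption, not data): a three-expert majority vote, with each expert wrong independently, does better than any single expert, but only because independent mistakes cancel.

    # Probability that a 3-expert majority vote is wrong, assuming each expert
    # errs independently with probability p (a big assumption in practice).
    p = 0.10
    majority_wrong = 3 * p**2 * (1 - p) + p**3   # exactly two wrong, or all three wrong
    print(f"single expert error: {p:.1%}, majority-vote error: {majority_wrong:.1%}")
    # 10.0% -> 2.8%: the panel is more reliable, but it knows nothing that an
    # individual expert does not; the gain comes purely from cancelling mistakes.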

What are the limits of machine perception? Well, it’s hard to know. Given the breakneck pace of technological progress we’ve seen in just our own lifetimes (mine, at least), it’s impossible to say for certain, but we can examine the history and make some good guesses. In my line of work, futurism, the most effective way to see the furthest into the future is to first look deeply into the past.

First of all, let’s start with a lesson in humility. Beginning in the 1950s, Marvin Minsky helped pioneer the field of artificial intelligence, eventually at MIT. Because of his early triumphs, people were very excited when he turned to neural networks, which were based on how biological neurons work; surely generalized machine intelligence was only a few decades away at most. Imagine the shock when he and Seymour Papert published the famous “Perceptrons” book, which proved mathematically that the single-layer perceptrons of the day could not learn the way humans do; they could not even reproduce a simple XOR circuit. It sent such shockwaves through the AI community that neural network research stalled for decades, a glacial period remembered as the AI Winter. One of the things that finally led to the “AI Thaw” was new neuron models that were less simplistic and able to exhibit improved learning behavior, but the memory is still raw in many researchers’ minds.
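
To make that concrete, here is a minimal sketch (plain Python, illustrative only) of the classic perceptron learning rule trying to learn XOR; the weights simply never settle on a correct answer.

    # The four XOR cases: (inputs, target output).
    XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

    def train_single_layer_perceptron(epochs: int = 1_000) -> int:
        """Classic perceptron rule: predict 1 when w.x + b > 0, else 0."""
        w, b = [0.0, 0.0], 0.0
        for _ in range(epochs):
            for (x1, x2), target in XOR:
                pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
                err = target - pred          # update only when the prediction is wrong
                w[0] += err * x1
                w[1] += err * x2
                b += err
        # Score the final weights on all four cases.
        return sum(
            (1 if w[0] * x1 + w[1] * x2 + b > 0 else 0) == target
            for (x1, x2), target in XOR
        )

    print(f"single-layer perceptron: {train_single_layer_perceptron()}/4 XOR cases correct")
    # The weights cycle forever and never get all four cases right, because no
    # single straight line can separate XOR. Adding a hidden layer (what the
    # later, less simplistic models did) removes that limitation.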

But what about people? What if we could just reverse-engineer them as machines, and then work backward from there? Would that work? Well, let’s see. Machine learning algorithms are trained on a lot of data, but no dataset is so large that bias cannot creep in along unknown dimensions, so even small snippets of data can bias your entire dataset.
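
A toy example of how little it takes (the numbers and labels are invented): append one small, skewed batch to an otherwise balanced set of labels, and the class balance the model learns from shifts.

    from collections import Counter

    # An illustrative, roughly balanced set of labels.
    base = ["benign"] * 5_000 + ["malignant"] * 5_000

    # A small snippet collected from one source that only ever saw one class.
    snippet = ["benign"] * 800

    for name, labels in (("base only", base), ("base + snippet", base + snippet)):
        counts = Counter(labels)
        total = sum(counts.values())
        print(name, {label: f"{n / total:.1%}" for label, n in counts.items()})
    # The snippet is under 8% of the data, yet it quietly moves the prior from
    # 50/50 to roughly 54/46: a bias along a dimension nobody was watching.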

If the same were true of humans, subliminal messages would be effective too. That is the thoroughly debunked theory that playing small snippets of phrases slightly below the threshold of perception will affect behavior. The only study that ever appeared to show it (the one that, in fact, created a huge “mind control” scare) was revealed to have been fabricated by its author. The slushy things in our heads are a lot smarter than you think. The brain is, in fact, not just a “really fast computer,” no matter what the popular science press says.

Learning takes machinery built for learning. Fortunately, the brain is a lot more sophisticated than a perceptron, and it also brings a lot of previous data to bear (again, thanks, brain!) when processing new data. So I don’t believe there is a “Snow Crash”-style virus that will “hack into” the human brain and turn us into mind-controlled zombies. Nope! The buttons that drive us to think the way we do have been known to advertisers and movie directors for a very long time. People will pay a lot of money for a well-done tear-jerker, because sometimes people just want a good cry, but no one thinks that putting a book under your pillow will help you pass the exam the next day (hopefully). The downside to having a theoretical “button” is extremist propaganda, which is definitely a problem, especially if you end up in an echo chamber. Society has learned that “sunlight is the best disinfectant,” and that is the closest thing to an antidote we’ve found to date.

Having a group of experts look at data is a lot like what actually happens when you train a machine learning model, except that the computer needs anywhere from thousands to millions of examples, more than a doctor would see in a lifetime. So, in a way, machine learning diagnosing your illness would be like a team of doctors with a thousand years of combined experience diagnosing sick people. Another way of looking at it would be a million doctors looking at your MRI scans. In both cases, the panel wouldn’t really be any smarter than a human; it would just (hopefully) be more statistically accurate. Even then, your results might be, at the VERY best, 10% more accurate than a very good doctor on a good day. And they could be as bad as the worst doctor on the worst day if the model is not correct. You will never get a panel of doctors that together is twice as good as the best doctor; that just isn’t possible because, at the heart of it, this is aggregation, not a way to produce new data. The improvement isn’t better perception; it’s the elimination of outliers. The Bayes optimal error (BOE) can be approached a little more closely, but an order-of-magnitude improvement is just not possible. Personally, I’d rather have a very good doctor who is well-rested than a panel of faceless, unknown doctors who might not be familiar with my situation.
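
Here is a rough simulation of that ceiling, with invented numbers: assume 10% of cases are genuinely ambiguous (an assumed Bayes optimal error floor) and each doctor also makes some avoidable slips. Growing the panel removes the slips but stalls at the floor; nothing like a 2x improvement over the best doctor appears.

    import random

    random.seed(0)

    BAYES_ERROR = 0.10   # assumed share of cases whose evidence points the wrong way
    DOCTOR_SLIP = 0.08   # assumed avoidable per-doctor mistake rate
    N_CASES = 50_000

    def panel_error_rate(n_doctors: int) -> float:
        errors = 0
        for _ in range(N_CASES):
            truth = random.choice([0, 1])
            if random.random() < BAYES_ERROR:
                # Genuinely ambiguous case: the evidence misleads everyone at once,
                # so adding more doctors to the panel cannot help.
                errors += 1
                continue
            votes = sum(
                truth if random.random() > DOCTOR_SLIP else 1 - truth
                for _ in range(n_doctors)
            )
            majority = 1 if 2 * votes > n_doctors else 0
            errors += majority != truth
        return errors / N_CASES

    for n in (1, 3, 9, 101):
        print(f"{n:>3} doctors: error rate {panel_error_rate(n):.3f}")

With these assumptions, the error falls from roughly 17% with one doctor to roughly 10% with a hundred and one, and then it stops: aggregation eliminates slips, it does not create perception.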

And what about misses? A doctor might be tired or in a hurry and not catch the one blip on the scan that shows you have a problem. On the other hand, a doctor might overdiagnose, reading significance into things that aren’t there or are simply errors in the scan. That could send you to surgery or other treatments unnecessarily; in some cases, the cure could be worse than the disease. This happens every day. Humans make mistakes.

One of the hardest unsolved problems in machine learning today is the elimination of bias. People in the industry and beyond are very worried about AI being racist, and those concerns are valid. Racism is just one (horrific) example of bias, because bias can come in many unanticipated forms. Think of it this way: the alignment in your car can be biased to the left or to the right. That’s one dimension. Machine learning model biases can go in thousands of directions. Bias elimination can never be achieved to perfection; it can only be measured and accounted for. And measurement itself has two sides: a model can fail by raising false alarms (poor precision) or by missing real cases (poor recall), and useful accuracy depends on balancing both.
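
To pin down those terms (this is the standard textbook bookkeeping, with made-up counts), precision, recall, and accuracy all come from the same confusion matrix, and they pull in different directions:

    # Hypothetical confusion-matrix counts for a binary screening model.
    tp, fp = 80, 30    # flagged cases that were real / false alarms
    fn, tn = 20, 870   # real cases that were missed / correctly cleared cases

    precision = tp / (tp + fp)                  # of the alarms raised, how many were real
    recall    = tp / (tp + fn)                  # of the real cases, how many were caught
    accuracy  = (tp + tn) / (tp + fp + fn + tn)
    f1        = 2 * precision * recall / (precision + recall)

    print(f"precision {precision:.2f}  recall {recall:.2f}  accuracy {accuracy:.2f}  F1 {f1:.2f}")
    # Pushing recall up (miss fewer real cases) usually drags precision down
    # (more false alarms), and vice versa; a headline "accuracy" number hides that trade-off.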

Switching gears (pun intended) to autonomous vehicles, the same two failure modes appear: a miss could take the form of the car driving off the road into a stone pylon, killing the driver, while a false alarm might mean seeing things that aren’t there and slamming on the brakes at the wrong time, causing an accident.

More is not necessarily better in the field of data science; sometimes it can actually be worse. ML models nearly always draw on multiple datasets, simply because they need SO MUCH data. Unfortunately, the rule of thumb in machine learning is that your data is only as good as your worst dataset. Bias is not a problem anyone can solve on their own at this point, but we need to keep talking about it.
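
A hedged back-of-the-envelope version of that rule of thumb (all numbers invented): pool three datasets with different label-error rates and the blend is dragged toward the noisiest one.

    # Hypothetical pooled training data: (name, number of examples, label-error rate).
    datasets = [
        ("curated set",    20_000, 0.02),
        ("scraped set",    80_000, 0.02),
        ("legacy archive", 30_000, 0.25),   # the "worst dataset"
    ]

    total = sum(n for _, n, _ in datasets)
    blended = sum(n * err for _, n, err in datasets) / total
    print(f"{total} examples, blended label-error rate {blended:.1%}")
    # Without the legacy archive the error rate would be 2.0%; pooling it in
    # raises the noise to about 7.3%. More data, worse data.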

Human error can be measured fairly easily (in most cases) by using humans. In fact, it’s a very standard practice to use services like Amazon’s Mechanical Turk or ClickWorker. These services pay operators around the globe to answer questions in real time, so an AI toolkit might ask them to classify types of images, help interpret the meaning of a piece of text, or do anything else a programmer can code up. Once an API is established for decisions made by humans (the “Turks”), you can train your models on that very same data, or give your AI the same questions and compare the answers with more humans side by side. When Alan Turing designed his “intelligence test” in the 1950s, he probably didn’t know exactly how right he would turn out to be.
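
A sketch of that comparison loop, with hypothetical stand-ins for whatever labeling service and model you actually use (none of these function names come from a real API): gather human answers for a batch of items, ask the model the same questions, and report how often they agree.

    from typing import Callable, Dict, List

    def agreement_report(
        items: List[str],
        human_label: Callable[[str], str],   # e.g. a wrapper around your crowd-labeling task
        model_label: Callable[[str], str],   # your model's prediction for the same item
    ) -> Dict[str, float]:
        """Side-by-side comparison of human and model answers on identical items."""
        matches = sum(human_label(item) == model_label(item) for item in items)
        return {"items": float(len(items)), "agreement": matches / len(items)}

    # Usage sketch with toy stand-in labelers (both are hypothetical):
    images = [f"scan_{i}.png" for i in range(10)]
    crowd = lambda item: "cat"                                    # pretend crowd answer
    model = lambda item: "cat" if item.endswith("0.png") else "dog"
    print(agreement_report(images, crowd, model))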

We’re almost there. What do we do from here? Elon Musk has introduced humanoid robots that are promised to have “generalized AI,” and self-driving cars are already here, but how long it will take before they are ready for regular highway use is an open question. Advancements in self-driving trucks will be accelerated by the shortage of truck drivers in 2022. The truth is that we’ve been on the cusp of “machines taking over” since the Luddites were smashing machinery in the early 1800s.
Technology can and should improve our lives. I, for one, cannot wait to see how technology and humanity continue to become intertwined; it could be beautiful. We shouldn’t fear the idea of AI entering our daily lives, because to some extent it already has. We will leverage AI to scale processes and to learn and think faster; overall it will be a beneficial tool. Transparency is key to building public trust, but to stand the true test of time, AI has to be judged acta non verba: by what it really does, not by what we’re told it can do.

The Future is in our hands.
