How Do Machines Learn? Part II: They Fight
Disclaimer: Check out the first part of this series, HOW DO MACHINES LEARN? PART I: THEY TRAIN.
History is filled with amazing brawls, but some stand out above the rest: Ali vs. Frazier. Tyson vs. Holyfield. Generator vs. Discriminator.
* record scratches *
Ok, that last one sounds like it came from the last Transformers movie. And, honestly, it sorta did. The truth is, we aren’t likely to ever see the kinds of sentient machines depicted in Hollywood films such as Terminator or The Matrix without Generator and Discriminator first facing off.
Competition can be a great educator, often bringing out the best (and worst) in human beings. As it turns out, the same is also true for machine learning and artificial intelligence. Computers, like people, are shaped by the struggle, and going head-to-head is a key component in teaching them the best ways to use data to make predictions.
Stay Tuned for Your Feature Presentation
Different fields of science and health use a variety of names to describe the characteristics that people are interested in studying: traits, factors, covariates, variables, and many more. When it comes to AI, these characteristics are called features, and they usually don’t distinguish between the overall characteristic (e.g., gender) and a specific level of that characteristic (e.g., male). Competition among features is a hallmark of AI, as computers struggle to figure out which pieces of information are most useful.
For instance, knowing a person’s “exact age,” as a trait, is not very predictive in some scenarios. However, in those same scenarios, knowing whether someone is “at or above a certain age” can be extremely predictive. It all depends on the scenario. It also may be that the intersection of many different features is what really matters, and machine learning algorithms often ignore subgroups that don’t provide new information.
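The age example above can be sketched in a few lines. This is a toy illustration with made-up numbers, assuming an outcome that "switches on" at a threshold (say, eligibility for an age-gated benefit): the engineered "at or above 65" feature tracks the outcome far more closely than exact age does.

```python
import numpy as np

# Made-up data: an outcome that switches on at age 65,
# such as eligibility for an age-gated benefit.
ages = np.array([25.0, 40.0, 55.0, 64.0, 66.0, 70.0, 80.0, 90.0])
outcome = (ages >= 65).astype(float)

# Engineered feature: "at or above a certain age" instead of exact age.
over_65 = (ages >= 65).astype(float)

def corr(a, b):
    """Pearson correlation between a feature and the outcome."""
    return np.corrcoef(a, b)[0, 1]

# The thresholded feature matches this outcome essentially perfectly,
# while exact age, though clearly related, correlates less strongly.
print(corr(over_65, outcome))  # approximately 1.0
print(corr(ages, outcome))     # noticeably lower
```

Which representation wins depends entirely on the scenario; if the outcome rose smoothly with age instead of jumping, exact age would be the stronger feature.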
Ensembles or Frankenstein?
Why do we care? Because competition in AI goes well beyond feature selection and can extend into competition between actual algorithms. In Part I of this series, we discussed the idea that machine learning is filled with many different types of mathematical algorithms, all built upon different frameworks. It isn’t always clear exactly which framework is best suited for a specific problem. AI experts have grown accustomed to running many different algorithms, Royal Rumble-style, to see which one works best for a given situation. This strategy has created some interesting avenues of research in thinking about how we can use multiple AI techniques to tackle larger problems. One route has been the aptly named ensemble modeling, where we hedge our predictions by averaging across multiple models.
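A minimal sketch of the averaging idea, with two deliberately simple made-up "models" (any fitted algorithms with a `predict` method would do):

```python
import numpy as np

class MeanModel:
    """Toy model #1: always predicts the training average."""
    def fit(self, X, y):
        self.mu = y.mean()
        return self
    def predict(self, X):
        return np.full(len(X), self.mu)

class NearestModel:
    """Toy model #2: predicts the outcome of the closest training point."""
    def fit(self, X, y):
        self.X, self.y = X, y
        return self
    def predict(self, X):
        idx = np.abs(self.X[:, None] - X[None, :]).argmin(axis=0)
        return self.y[idx]

def ensemble_predict(models, X):
    """Hedge the prediction by averaging across all models."""
    return np.mean([m.predict(X) for m in models], axis=0)

X_train = np.array([1.0, 2.0, 3.0, 4.0])
y_train = np.array([10.0, 20.0, 30.0, 40.0])
models = [m.fit(X_train, y_train) for m in (MeanModel(), NearestModel())]

# The mean model says 25.0, the nearest-neighbor model says 10.0;
# the ensemble splits the difference at 17.5.
print(ensemble_predict(models, np.array([1.0])))
```

The averaging is what tames extreme predictions: no single model's guess can dominate the final answer.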
Ensembles do a good job of democratizing AI and can serve as a balance for cases where specific algorithms provide extreme predictions. But while most ensembles work by averaging things out, a few operate in a more targeted fashion.
Suppose we think of an ensemble model as a fighter trained in many different styles of fighting, who picks the best fighting style based on the type of opponent. The basic philosophy here is to use the algorithm that makes the best predictions for the exact type of prediction being made. So, if one algorithm predicts well for older individuals, then use that one. And if a different algorithm works best for young females, then use that one instead.
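The routing logic is simple enough to cartoon. In this hedged sketch, the per-subgroup "models" are just hypothetical functions returning a fixed score; in practice each would be a separately fitted algorithm that had validated best for that subgroup, and the age/sex cutoffs are made-up examples.

```python
# Hypothetical per-subgroup models (stand-ins for fitted algorithms).
def older_model(person):
    return 0.8   # the algorithm that predicts best for older individuals

def young_female_model(person):
    return 0.2   # the algorithm that predicts best for young females

def general_model(person):
    return 0.5   # fallback for everyone else

def targeted_predict(person):
    """Route each case to whichever algorithm predicts best for people like it."""
    if person["age"] >= 65:
        return older_model(person)
    if person["age"] < 30 and person["sex"] == "female":
        return young_female_model(person)
    return general_model(person)

print(targeted_predict({"age": 70, "sex": "male"}))    # 0.8
print(targeted_predict({"age": 25, "sex": "female"}))  # 0.2
```

The seams in this approach are exactly the worry raised below: each branch was trained and validated separately, and nothing guarantees the stitched-together whole behaves sensibly at the boundaries.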
The question AI researchers are faced with is determining the best approach: do balanced predictions, which may be useful in a broad array of situations, work best? Or are targeted predictions, designed to deal with specific issues, the better choice? The targeted route may seem like a no-brainer (pick the best tool for the job), but that can be misleading. Algorithms are not martial arts masters. They are just code. And using targeted predictions in all scenarios can create something of a Frankenstein’s monster AI, using the “leg” of one algorithm and the “arm” of another.
But Do They Fight?
All of this brings us back to the battle at hand. AI features and algorithms compete, but that isn’t really fighting, per se. The fighting is actually much more deliberate.
One of the most promising innovations in AI research in the past few years is something called an Adversarial Network, which works by pitting two AIs against each other in a structured competition. This isn’t always a useful thing. Take chess, for example. We could teach AI to play chess by having two AIs play each other over and over. Eventually they learn strategy by brute force, like the computer in WarGames. (That was tic-tac-toe, but the effect is the same.)
It turns out that this is not optimal. In the end, we have two AIs that basically “learn” the same stuff. We have, in effect, created two copies of the same chess player, and this returns a somewhat balanced result, just like we see in ensemble modeling. It works, but it’s inefficient. A more efficient contest would pit two AIs head-to-head with competing goals. And this brings us back to Generator vs. Discriminator.
There is a really nice description of Adversarial Networks here, but basically a generative adversarial network (GAN) has two sides (Generator and Discriminator) that compete across some task, where each AI has different and opposing goals. In this way, we train two distinct AIs by pitting them against each other in a specific task. I think this summary from the same website is absolutely brilliant in describing how this works:
You can think of a GAN as a game of cat and mouse between a counterfeiter (Generator) and a cop (Discriminator). The counterfeiter is learning to create fake money, and the cop is learning to detect the fake money. Both of them are learning and improving. The counterfeiter is constantly learning to create better fakes, and the cop is constantly getting better at detecting them. The end result being that the counterfeiter (Generator) is now trained to create ultra-realistic money!
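The cat-and-mouse feedback loop can be cartooned in a few lines. To be clear, this is not a real GAN — there are no neural networks, no training data, and no gradients, and all the numbers are made up — it only shows the core dynamic: each side's update depends on the other side's current state, and the pressure from the cop is what drives the counterfeiter toward realism.

```python
real = 100.0       # feature of genuine money (made-up scale)
fake = 20.0        # counterfeiter's first attempt
threshold = 60.0   # cop rejects anything scoring below this

for _ in range(1000):
    # Cop (Discriminator): tighten the test to sit between real and fake.
    threshold = 0.5 * (real + fake)
    # Counterfeiter (Generator): nudge the fake toward passing the test.
    fake += 0.1 * (threshold - fake)

# The fake has been pushed all the way to the real value.
print(round(fake, 2))  # 100.0
```

In an actual GAN the same tug-of-war plays out in a high-dimensional space, with both sides updated by gradient descent on opposing loss functions, but the end state is analogous: a Generator trained to produce ultra-realistic output.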
We’re still learning what kinds of problems machine learning and artificial intelligence are good at tackling. The key is identifying scenarios where we can fit our questions or needs into a framework that we can get the machines to exploit, and we are still working to develop good ways to get machines to make the best use of data. But suppose we have a machine-learning algorithm that has been adequately trained on data, and the algorithms have fought it out with a clear winner. What happens next?
Tune in next month for Part III of this series: “How Do Machines Learn? They Recover.”
Jason S. Brinkley, PhD, MS, MA is a Senior Researcher and Biostatistician at Abt Associates Inc. where he works on a wide variety of data for health services, policy, and disparities research. He maintains a research affiliation with the North Carolina Agromedicine Institute and serves on the executive committee for the Southeast SAS Users Group. Follow him on Twitter. [Full Bio]
Previous posts in this series:
- How Do Machines Learn? Part I: They Train
- Taste Testing Generic Drugs
- Halloween by the Numbers
- What Kills Kids?
- The Golden Age of Health Research Funding
- Does Living on a Prayer Work?
- The Opioid Data Crisis
- Income Lost from Snow Days*
- What the #$@&*! Is Blockchain?
- Opportunistic Research Opportunities
- Text Mining UFO Data: Little Green Aliens or Santa’s Elves?
- Should You Know Your Doctor’s Home Address?
- The Population Bullet
- The Unknown Unknowns of Missing Data
- Communicating Science–More Than Just Good Words?
- Counting Alabamas
- The Third World in Your Own Backyard
- The Unrealistic Gold Standard
- Does MACRA Signal the Beginning of the End for Medicare Claims Data?
- Think You Aren’t Extraordinary? Odds Are You’re Wrong
- Mapping by Words
- Are We Asking Too Much From Surveys?
- Making Better Comparisons
- What Kills Us?