Charlie Munger’s first rule of wisdom states:
“The first rule is you can’t really know anything if you just remember isolated facts and try and bang ’em back. If the facts don’t hang together in a latticework of theory, you don’t have them in usable form.”
Experts. After years of hard work, they internalize mental models of their respective fields; they understand both the fundamental principles and the in-depth intricacies of how concepts interrelate. They also grasp new ideas a whole lot quicker than others. Reportedly, that's how Elon Musk learns new fields so effectively.
Mental models are useful, but they're pretty hard to build. While the data gold rush has been well under way since ~2012, it remains a non-trivial problem to interpret the terabytes of raw data many companies have piled up. And while there's legitimate excitement about actionable insights and roadmap-shifting discoveries, let's calm down and take a step back. The interpretation layer is missing. That latticework of theory.
Context is the key to personalization
In the realm of intelligent applications, that latticework is a general user model that serves as a baseline from which to differentiate individual users. Obsessed with engagement metrics, consumer-facing applications strive to optimize the customer touch points that make for a good overall user experience. While general improvements like simplifying sign-up forms almost always lead to better conversion rates, it's understanding the nuances of each user action in the context of the individual that makes a product stand out.
That requires building sophisticated models to explain what makes each individual different from the norm. With the sheer amount of raw user data being gathered, it's tempting to jump on the big data train. However, understanding the nuance in user behavior comes not from the size of the data set but from its variance and quality. A good strategy is to gather lots of potentially irrelevant peripheral information from a multitude of (non-traditional) sources. It's those who go the extra mile in identifying and aggregating quality contextual information who will win the personalization game.
Often, the benefits of integrating various data systems aren't clear, and personalization might not seem like a priority for product owners at the moment. Beware, however: the so-called silo mentality inevitably hurts businesses down the road. Divergent goals across organizational units lead to misaligned incentives, and that much-needed holistic user context never gets built.
The rise of context-aware security
An excellent business case where understanding nuanced user context is of key importance is account takeover fraud, an $8B problem. Banks, online payment services and e-commerce stores have been relying on rule-based security measures that deny transactions at the slightest signal of fraud, without even attempting to understand the user's context. This is a problem not only from a customer experience or missed-transaction perspective, but also because hackers are always one step ahead of such rigid rule systems.
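To see why such systems are brittle, consider a minimal sketch of rule-based screening (the rule names and thresholds below are hypothetical, not any bank's actual logic):

```python
# A caricature of rigid rule-based screening: every rule is a hard
# deny, with no notion of who the user is. Rules and thresholds are
# hypothetical, for illustration only.

RULES = [
    ("foreign_ip",   lambda tx: tx["ip_country"] != tx["home_country"]),
    ("large_amount", lambda tx: tx["amount"] > 5_000),
    ("night_time",   lambda tx: tx["hour"] < 6),
]

def rule_based_decision(tx: dict) -> str:
    """Deny at the slightest signal -- the approach criticized above."""
    for name, triggered in RULES:
        if triggered(tx):
            return f"DENY ({name})"
    return "ALLOW"

# A legitimate customer on vacation gets blocked outright:
print(rule_based_decision({
    "ip_country": "TH", "home_country": "CZ",
    "amount": 120, "hour": 14,
}))  # -> DENY (foreign_ip)
```

Every user, frequent traveler or not, is judged by the same hard thresholds; the only way to reduce false positives is to weaken the rules for everyone.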
Reversing the problem was key to finding an adaptive solution: instead of modeling what an attack looks like, security experts want to know what a user is not like. That reversal forms the core of our philosophy regarding fraud detection here at ThreatMark. Our main goal is to precisely understand legitimate user behavior, with all its obscure variations, in a given context.
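Here is a minimal sketch of that reversal, using scikit-learn's IsolationForest as a stand-in for a per-user behavior model (the features, synthetic data and thresholds are illustrative assumptions, not ThreatMark's actual pipeline):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Fit a model of one user's *legitimate* sessions, then flag whatever
# the user is not like. Features are illustrative, e.g.
# [session_hour, typing_speed_cpm, mean_mouse_velocity].
rng = np.random.default_rng(0)
legit_sessions = rng.normal(loc=[14, 280, 1.1],
                            scale=[2, 25, 0.2],
                            size=(500, 3))

user_model = IsolationForest(contamination=0.01, random_state=0)
user_model.fit(legit_sessions)

# Odd hour, much slower typing, jerky mouse -- not like this user.
new_session = np.array([[3, 95, 3.4]])
print(user_model.predict(new_session))  # -> [-1], i.e. anomalous
```

Note that nothing in the model describes an attack; it only describes the user, so even previously unseen attack patterns fall outside the learned region.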
The signal and the noise
In an online setting, a sophisticated model of an individual consists of behavioral markers such as individual typing style, mouse movements and navigation profiles; purchasing habits and demographic information; device fingerprints, IP addresses, and browser and network characteristics; along with a range of other fancy hacker stuff.
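These markers can be pictured as one wide, heterogeneous record per session. A hypothetical sketch (the field names are ours, not a real schema):

```python
from dataclasses import dataclass, field

@dataclass
class SessionProfile:
    """One session's behavioral and environmental markers.
    Field names are illustrative, not a production schema."""
    # Behavioral biometrics
    keystroke_intervals_ms: list[float] = field(default_factory=list)
    mean_mouse_velocity_px_s: float = 0.0
    pages_visited: list[str] = field(default_factory=list)
    # Habits and demographics
    avg_purchase_eur: float = 0.0
    home_country: str = ""
    # Device and network context
    device_fingerprint: str = ""
    ip_address: str = ""
    user_agent: str = ""
```

The point is the breadth: each field is weak evidence on its own, and only together do they describe the user from multiple angles.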
Recognizing the rich contextual significance of individual user actions is only possible by utilizing all of the above-mentioned information, which describes the user from multiple angles. This is hard, and bringing in alternative information sources often requires lots of creativity and engineering swag. At ThreatMark we take pride in going out of our way to gather and aggregate lots of non-traditional user information to build some of the most sophisticated individual user models to date. We utilize these models to understand individuals to such a degree that even yet-unseen anomalous behavior becomes apparent. That means ThreatMark's fraud system knows which actions are likely to be within a normal range for you, even if you've never performed them before (such as traveling to Bangkok or purchasing a yacht on a whim). That's possible by interpreting actions in both the context of the individual and that of the general norm. Ultimately, it's not about what a user knows, but about what a user does.
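One way to read "both the context of the individual and the general norm" is to score an action against two reference distributions and blend the results; the Gaussian stand-ins and the 0.7/0.3 weighting below are illustrative assumptions, not a description of the production system:

```python
import math

def gaussian_logpdf(x: float, mean: float, std: float) -> float:
    """Log-density of a normal distribution -- a simplistic stand-in
    for a real behavioral model."""
    return (-0.5 * math.log(2 * math.pi * std ** 2)
            - (x - mean) ** 2 / (2 * std ** 2))

def anomaly_score(amount: float,
                  user_mean: float, user_std: float,
                  pop_mean: float, pop_std: float,
                  w_user: float = 0.7) -> float:
    """Blend 'unusual for this user' with 'unusual for anyone'."""
    user_surprise = -gaussian_logpdf(amount, user_mean, user_std)
    pop_surprise = -gaussian_logpdf(amount, pop_mean, pop_std)
    return w_user * user_surprise + (1 - w_user) * pop_surprise

# A first-ever yacht-sized purchase: extreme for this user, but the
# population term keeps the score comparable across users.
print(anomaly_score(80_000, user_mean=150, user_std=120,
                    pop_mean=900, pop_std=4_000))
```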
Context as a competitive advantage
The key takeaway is that, among terabytes of noisy data, deeply hidden patterns emerge if there is enough variation in the context. As mundane and profound as it sounds, the value and meaning of every piece of information change based on the context. At times, potentially alarming signals get amplified; other times, exceptions are made to rigid rules.
For the ThreatMark data team, the most beautiful part is algorithmically understanding what is normal or optimal without being explicitly told the conditions under which certain rules can be bent. Intelligence goes beyond learning from past examples: much like experts do, it is about understanding the likely variation in patterns, and that lies at the core of building a cognitive business. By gathering lots of quality data that can serve as user context, companies pave their way to understanding their individual users, allowing for intelligent, real-time personalization of features, UI elements or promotions. This user context is essentially that latticework of theory Charlie mentions, and it is what makes ThreatMark's fraud prevention system stand out.