I've been enjoying this forum since I became interested in my own credit score earlier this year.
However, I think trying to understand FICO scores and scoring models is kind of a waste of time, simply because they're not entirely based on logic and math. They've been "detuned" and "deoptimized" at an institutional level due to societal pressure to make them more "palatable."
From this question and answer, this white paper, and this article that cites the white paper, it seems that FICO, like other purveyors of large-scale predictive algorithms, has had to "dumb down" its public-facing models compared to their in-house, pure machine-learning siblings so they don't run afoul of current political trends and protections for certain social classes.
That is to say, predictions based on pure math and logic, while extremely accurate, would disadvantage certain groups of people and therefore cannot be used. To summarize the white paper, the results of pure statistical prediction wouldn't be very "palatable." In fact, it uses the term "palatable" four times, though it never says for whom the palatability or unpalatability exists. FICO uses pure machine-learning statistical models only internally, as a benchmark to make sure its compromised-but-friendlier scorecard-based models aren't too far off. And, to give the folks at FICO credit, their "palatable" scorecard-based system is 98% as good as the real stuff, though it takes far longer to build.
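For anyone unfamiliar with the term, a "scorecard" in this context is basically a table: each attribute gets binned, each bin is worth a fixed number of points, and the points are summed. Here's a minimal sketch in Python of what that structure looks like in general; the attributes, bins, and point values below are entirely made up, since FICO's real characteristics and weights aren't public:

```python
# Toy scorecard sketch (NOT FICO's actual model): bin each attribute,
# look up a fixed number of points per bin, and sum. The readable,
# auditable structure is the whole point, versus an opaque ML model.

def scorecard_points(utilization_pct: float, months_history: int, late_payments: int) -> int:
    points = 300  # made-up base

    # Revolving utilization bins (illustrative point values only)
    if utilization_pct < 10:
        points += 180
    elif utilization_pct < 30:
        points += 140
    elif utilization_pct < 70:
        points += 80
    else:
        points += 20

    # Length-of-history bins
    if months_history >= 120:
        points += 150
    elif months_history >= 60:
        points += 100
    else:
        points += 50

    # Recent late payments: full points if clean, steep penalty otherwise
    points += 170 if late_payments == 0 else max(0, 170 - 80 * late_payments)

    return points

print(scorecard_points(utilization_pct=8, months_history=130, late_payments=0))  # 800
print(scorecard_points(utilization_pct=85, months_history=24, late_payments=2))  # 380
```

A pure machine-learning model, by contrast, learns an arbitrary function of the inputs with no such readable table, which is presumably why it predicts better and explains worse.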
To show how careful FICO, Inc. has to be with regard to social scrutiny of its models, the third link above, which cites FICO's white paper among its 256 references, is titled "Against Predictive Optimization: On the Legitimacy of Decision-Making Algorithms that Optimize Predictive Accuracy," and its premise is quite literally that accurately predicting human behavior through statistical modeling is unethical and harmful and should be avoided, period. There is no right way to do it. "Our thesis is that predictive optimization raises ... concerns that cause it to fail on its own terms." [emphasis mine] That is to say, current approaches to predictive modeling aren't the problem; the very idea of using math to predict human behavior at all is what's fundamentally bad.
The upshot is that we're sitting here on the outside of this FICO Skinner box, unable to see inside, able only to observe the inputs and outputs. And we're assuming that whatever is going on inside is underpinned by machine-like logic, and that we could somehow, eventually, deduce its internal workings if only we could gather enough data points.
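As an aside, the "enough data points" idea is basically black-box system identification: feed the box inputs, record the scores, and fit a surrogate model to the pairs. A toy sketch, with the hidden rule, features, and ranges all invented for illustration:

```python
# Black-box probing sketch: observe (inputs, score) pairs from an unseen
# scorer and fit a linear surrogate to them. Everything here is hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def black_box_score(x: np.ndarray) -> float:
    """Stand-in for the unseen scorer: some hidden nonlinear rule."""
    utilization, history, lates = x
    return (850 - 3.0 * utilization + 0.8 * history - 60.0 * lates
            - 0.02 * utilization * history)  # hidden interaction term

# Observe the box: random inputs in, scores out.
X = np.column_stack([
    rng.uniform(0, 100, 500),   # utilization %
    rng.uniform(0, 240, 500),   # months of history
    rng.integers(0, 5, 500),    # late payments
])
y = np.array([black_box_score(x) for x in X])

# Least-squares fit of a linear surrogate, plus an R^2 to gauge the fit.
A = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
r2 = 1 - ((y - A @ coef).var() / y.var())
print(f"surrogate coefficients: {coef.round(2)}, R^2 = {r2:.3f}")
```

Even in this toy case the surrogate only approximates the hidden rule (the interaction term escapes it), and that's with clean, unlimited queries. With deliberately embedded inaccuracies, the fit would be worse still.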
But all the while, the machine has been detuned and made purposely inaccurate so that politicos will even allow it to exist. If a purely mathematical FICO model were actually allowed to "run free" in the wild rather than only in the back rooms of FICO, Inc., it would be so good at its job that it would soon be condemned and probably eradicated by the powers that be.
So, as far as trying to understand the current FICO scoring models goes, it would be difficult enough if they were 100% logical. But given that the scoring systems have inaccuracies purposely embedded in them in the name of social fairness, I think it's completely impossible for anyone on the outside to figure them out.