@SouthJamaica wrote:
@Thomas_Thumb wrote:
@SouthJamaica wrote:
5. The only change was that one account reporting a 25% balance dropped to a 1% balance. There were a total of 8 accounts reporting >9% balances on the 23rd; there were a total of 7 accounts reporting >9% balances on the 24th.
What was the list of cards, the dollar credit limit for each card, and the dollar balance on the last statement for each card? And, of course, which card changed its balance during those days?
Also, for each card, when did its last statement cut to report that balance?
@HeavenOhio wrote: SJ wouldn't be the first to report being dinged at a point that's less than 8.9% overall. About a year and a half ago, I was dinged on all three bureaus. My utilization went from 4% to 5% to somewhere back under 5%. Before and after scores were identical.
This suggests a possible 4.9% threshold on my profile. However, some have suggested that flat dollar amounts may come into play.
Do you have records of how your individual card utilization was changing as these overall utilization changes happened?
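For anyone wanting to keep the kind of records asked about here, a minimal sketch of a per-statement log might look like the following. All card names, limits, and balances are hypothetical examples, not anyone's actual data:

```python
# Sketch: log per-card and aggregate utilization at each statement cut,
# so changes on an individual card can be separated from changes in the
# overall figure. Names, limits, and balances are made-up examples.

def snapshot(balances, limits):
    """Return per-card utilization percentages plus the aggregate figure."""
    per_card = {name: 100.0 * balances[name] / limits[name] for name in limits}
    total = 100.0 * sum(balances.values()) / sum(limits.values())
    return per_card, total

limits = {"Card A": 10_000, "Card B": 25_000, "Card C": 5_000}
balances = {"Card A": 500, "Card B": 1_000, "Card C": 100}

per_card, total = snapshot(balances, limits)
print(per_card)        # {'Card A': 5.0, 'Card B': 4.0, 'Card C': 2.0}
print(round(total, 1)) # 4.0  (aggregate: 1,600 / 40,000)
```

Recording both numbers at every statement cut is what lets you rule out (or catch) an individual-card threshold masquerading as an overall-utilization effect.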
@Anonymous wrote: ...SJ already explained earlier in the thread why the individual utilization on the card, based on prior testing against his own profile, constitutes a non-event. He reiterated this a couple of posts above....
+1
Hello all. It's not uncommon in science for an experimenter to get a really odd result -- meaning one that flies in the face of received wisdom. That's what has happened here. SouthJ believes he may have detected a new breakpoint for utilization, one less than the orthodox one of 8.99%.
In cases like this, the experimenter may have found an important new truth. Or there may have been some grit in his test tube.
When these surprising results occur, one way to address them is (a) to look carefully at the actual protocols of the specific historic test that was done.
But an equally common method is (b) for other scientists to attempt to run the same experiment (possibly with tightened protocols) and see if they can obtain the same result. This is called replicating the result.
If many different scientists, including those who might be initially skeptical, can replicate the result, that becomes the best way to confirm it. If they cannot replicate the result, then the relevant scientific community ends up concluding that there must have been some mistake in the initial experiment: grit in the test tube, a beaker that got too hot, whatever. They just accept that there is no way to reach back in time and know for sure what happened.
I suggested early in this thread that the simplest way to address SouthJ's interesting result is to attempt to replicate it -- especially by more than one person who has a comparatively small total credit limit. Again I just want to say that this is the best approach at this point. The whole reason that this would be of interest is precisely because it is something that would be universal -- so let's have more people attempt to replicate it.
Validation through repeatability is what is required. This should be done by the OP. As we know, scoring is, to some degree, profile dependent. Therefore, it is equally important for others with different profiles to test. If/when results conflict, that shows we are dealing with a conditional threshold, or that some noise factor is affecting results.
Again, the 1st step is retesting the hypothesis while isolating other factors to avoid confounding the data.
@Anonymous wrote: I suggested early in this thread that the simplest way to address SouthJ's interesting result is to attempt to replicate it -- especially by more than one person who has a comparatively small total credit limit. Again I just want to say that this is the best approach at this point. The whole reason that this would be of interest is precisely because it is something that would be universal -- so let's have more people attempt to replicate it.
Hi CGID. Wouldn't it be wiser to have SJ replicate the test though? I'm all for other people (as many as possible) trying the same experiment, but no one other than SJ possesses SJ's profile, meaning other variables are introduced by anyone else doing the experiment. Many of these quirky credit-related things seem to be profile-dependent, perhaps based on scorecard assignment or who knows what else. For example, we know SJ has large credit limits. If the experiment is replicated by someone with small credit limits, it would be easy for them to test the proposed 6% threshold, but if somehow dollars are a factor here, that could be a variable that yields a different result. If SJ himself is able to cross over and back across that proposed threshold a few times and sees the same + and - result, I'd say it's pretty safe to say that on his profile, it could certainly be a threshold.
In post #9 he expressed some doubt as to whether he can go back and forth the way you recommend. He said he'd try, but I got the feeling he was saying we shouldn't get our hopes up. His CL is so huge that these kinds of tests involve thousands of dollars of spending.
Let's suppose that he can though. I think it's valuable, but only as a prelude for testing by some different people. Because the bare fact of knowing how SJ's profile behaves should be of interest only to SJ -- whereas if it could be shown that for all people there was a breakpoint below 8.99%, regardless of dollar value, that would be of practical importance (one more reason to go with AZEO/1% for any important credit pull) and theoretical importance too (evidence against the idea of dollar values mattering).
Likewise if it could be shown that for people with a small total credit limit there was no sub-8.99% breakpoint, but that people with huge credit limits saw a difference between 1% and 8%, then it would suggest that dollar values do matter.
Bottom line is that I find SJ's result interesting -- but only if it helps us learn something bigger that applies to more people.
Certainly though a great step would be for him to repeat his tests over and over again with tight controls over confounding factors.
@Anonymous wrote:
@Anonymous wrote: I suggested early in this thread that the simplest way to address SouthJ's interesting result is to attempt to replicate it -- especially by more than one person who has a comparatively small total credit limit. Again I just want to say that this is the best approach at this point. The whole reason that this would be of interest is precisely because it is something that would be universal -- so let's have more people attempt to replicate it.
Hi CGID. Wouldn't it be wiser to have SJ replicate the test though? I'm all for other people (as many as possible) trying the same experiment, but no one other than SJ possesses SJ's profile, meaning other variables are introduced by anyone else doing the experiment. Many of these quirky credit-related things seem to be profile-dependent, perhaps based on scorecard assignment or who knows what else. For example, we know SJ has large credit limits. If the experiment is replicated by someone with small credit limits, it would be easy for them to test the proposed 6% threshold, but if somehow dollars are a factor here, that could be a variable that yields a different result. If SJ himself is able to cross over and back across that proposed threshold a few times and sees the same + and - result, I'd say it's pretty safe to say that on his profile, it could certainly be a threshold.
I am expecting to be hovering in that area for a while, but one thing I'm feeling uncertain about is whether I should be looking at true 6%, or a rounded 6%.
I.e., whether I should be looking for things like transitions between, e.g. 6.1% (rounded down to 6%) and 6.8% (rounded up to 7%), or more like transitions between 6.1% and 5.9%?