Many people have asked the question “Are certain months of the year bad months to launch or end a Kickstarter project and other months good months?”
Well, let me start by saying… no one can actually answer this question definitively because it’s far too subjective a question. So humor me for a minute and let me ask a question that the data may actually be able to answer.
Do the average success rates of projects on Kickstarter change significantly based upon the month they are launched or the month they end?
The answer is a definitive YES…
Before we continue I should say that this data did influence my thinking when I created my current Kickstarter project, so hopefully it can add some value to yours as well.
A Visual Summary of Monthly Success Rates
Below is a graph that overlays the success rates of projects that launch (blue line) and end (orange line) in a given month. The trend is pretty clear to see.
Kickstarter Success Rates Based Upon Month Launched
Success rates (on average) are significantly lower if you launch your campaign during the months of July or August and significantly higher if you launch your project during the months of January, February or April.
You also see a small drop in the success rates of projects launched during November and December, but it is not significant enough to say definitively that this drop matters. Further analysis needs to be done to determine if these small drops affect all projects launched during November and December, or just those that incorporate Thanksgiving and Christmas.
Kickstarter Success Rates Based Upon Month Ended
Success rates (on average) are significantly lower if you end your campaign during the months of August, September or December and significantly higher if you end your project during the months of February, March and May. Again, further analysis is needed to determine how the holiday seasons affect success rates.
The variation across all other months is not strong enough to conclude a significant difference exists.
You may have a lot of questions at this point. If you really want to understand the data, the process I took to arrive at these results, the statistical analysis I used and the underlying thinking behind the analysis, you should read my blog series Kickstarter Statistics 101 – A Rough Introduction to Stats via Kickstarter, available on the Kickstarter Statistics 101 landing page. If you just want to know what the data says, keep reading.
Determining Statistical Significance
All the data has been compiled and laid out in an easy-to-understand fashion (you can read a blog about how the data was prepared for analysis here). The line and bar graphs above show the average success rates for each month. From this we can see that these success rates differ from month to month, but what we really want to know is whether they differ “significantly”.
Performing the Statistical Analysis – ANOVA
To find out whether these differences are significant, I ran a simple ANOVA (ANalysis Of VAriance) test. The ANOVA determines how much variation exists within each group of data, and then compares that to the amount of variation that exists across all the groups analyzed. I will post a blog later to describe this in more detail, but for now a Google or YouTube search should suffice to provide some basic understanding.
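If you’d like to try this yourself, here is a minimal sketch of the same kind of test in Python using scipy’s f_oneway. The monthly success-rate samples below are made up purely for illustration – in practice each list would be built from the real Kickstarter data.

```python
# A minimal sketch of a one-way ANOVA on monthly success rates, using
# hypothetical per-month samples; in practice each list would be built
# from the real Kickstarter project data.
from scipy import stats

monthly_success = {
    "Jan": [0.55, 0.57, 0.54],
    "Feb": [0.56, 0.55, 0.58],
    # ... one entry per month ...
    "Jul": [0.38, 0.37, 0.40],
    "Aug": [0.39, 0.38, 0.37],
}

# f_oneway returns the F-value and P-value for the null hypothesis that
# every month shares the same mean success rate.
f_value, p_value = stats.f_oneway(*monthly_success.values())
print(f"F = {f_value:.2f}, p = {p_value:.2e}")
```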
The ANOVA test returns many pieces of information, three of which are important to our conversation here: the F-critical, the F-value and the P-value. Without getting drowned in detail, here is what each of these means.
The F-critical, The F-value and the P-value
F-critical – the F-critical is a threshold. If the F-value calculated from the ANOVA test is greater than the F-critical, then we have reason to believe that our results are significant. If our F-value does not exceed this threshold, then we cannot be sure that our results are significant. The P-value, on the other hand, gives us an idea of just how significant our significance is. In other words, it tells us how often we could get the same result by pure chance.
So we want our P-value to be as low as possible (lower than 0.05 at least, but even lower is even better) and we want our F-value to surpass F-critical by as much as possible!
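For completeness, here is a rough sketch of how you could look up F-critical yourself with scipy. The group count and sample size below are hypothetical, since the actual threshold depends on the degrees of freedom of your own data.

```python
# A sketch of looking up the F-critical threshold at the 5% significance
# level; the group count and sample size here are hypothetical, since the
# threshold depends on your data's degrees of freedom.
from scipy.stats import f

k = 12           # number of groups (one per month)
n = 1200         # hypothetical total number of observations
df_between = k - 1
df_within = n - k

f_critical = f.ppf(1 - 0.05, df_between, df_within)
print(f"F-critical at the 5% level: {f_critical:.2f}")
# The result is significant when the F-value exceeds f_critical
# (equivalently, when the P-value is below 0.05).
```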
The Initial Kickstarter Stats Results
When I performed the ANOVA on the success rates of the launch data I found an F-critical of 0.953, an F-value of 13.6 and a P-value of 2.74E-11 (a very tiny number!). Simply put, the differences between the success rates of each month are HIGHLY SIGNIFICANT.
For the end dates we had an F-critical of 1.99, an F-value of 12.56 and a P-value of 1.04E-10 (another very tiny number). Again, the differences between the success rates of each month are HIGHLY SIGNIFICANT.
But now the question remains: are certain months more significant than others? We can answer this question by determining which months in particular are causing such extreme results.
Digging into the Kickstarter Statistics a Bit Further
What I decided to do next was to remove months one by one and re-run the ANOVA test. Once the ANOVA found that the differences in success rates were no longer significant, we would have a pretty good idea of which months really diverged from the norm.
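Here is a rough sketch of how such an elimination loop could look in Python. Note that I’m assuming a simple greedy rule (drop the month whose mean sits furthest from the overall mean) and made-up sample data, so treat this as an illustration of the idea rather than the exact procedure I ran.

```python
# A sketch of the elimination procedure with hypothetical per-month
# samples: repeatedly drop the month whose mean success rate sits
# furthest from the overall mean, re-running the ANOVA each time, until
# the remaining months no longer differ significantly. The greedy
# drop-the-biggest-outlier rule is an assumption for illustration.
import statistics
from scipy import stats

groups = {
    "Jan": [0.55, 0.57, 0.54], "Feb": [0.56, 0.55, 0.58],
    "Apr": [0.54, 0.56, 0.53], "Jun": [0.47, 0.46, 0.48],
    "Jul": [0.38, 0.37, 0.40], "Aug": [0.39, 0.38, 0.37],
}

while len(groups) > 2:
    f_value, p_value = stats.f_oneway(*groups.values())
    if p_value >= 0.05:
        break  # remaining months no longer differ significantly
    overall = statistics.mean(r for rates in groups.values() for r in rates)
    outlier = max(groups,
                  key=lambda m: abs(statistics.mean(groups[m]) - overall))
    print(f"p = {p_value:.2e} -> dropping {outlier}")
    del groups[outlier]

print("Months with no significant difference left:", ", ".join(groups))
```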
For the launch dates, those months were January, February, April, July and August: July and August afforded lower success rates, while January, February and April afforded higher success rates. For the end dates, those months were February, March, May, August, September and December: August, September and December afforded lower success rates, while February, March and May afforded higher success rates.
The ANOVA test only showed that no significant difference existed for the launch dates and end dates once all the above-mentioned months were removed from the analysis. Statistically, this means we can assume that there’s really not much of a difference between the other months (at least not one that couldn’t have occurred by chance 5% of the time – which in my opinion is still quite an extreme threshold for this case), even though their average success rates do still differ by a little bit.
My Personal Interpretation
Please allow me to blabber in pure ignorance about my opinions. Nothing I say here should be construed as truth or fact, rather a good reason for you to leave a comment explaining how big of a dummy I am.
If you look at the months that produce higher and lower success rates, I think it’s apparent that they overlap quite closely with the U.S. academic calendar as well as the traditional popular U.S. holidays. I think the summers afford many people the time to think up and prepare for an increasing number of Kickstarter launches (students and teachers probably make up a significant proportion of the Kickstarter community – demographic stats needed here). But once the school year starts back up, that number begins to taper off again.
The bar graph below shows the total number of tabletop game projects launched each month since the beginning of Kickstarter, with the success rates of each of those months laid over in line-graph fashion in orange. I have not done any testing to identify the significance of this claim, but there does appear to be an inverse correlation between the number of projects launched in a given month and the success rates of that month.
If there is an inverse correlation between the number of projects launched in a given month and the success rates of that month, it would suggest that projects of a similar type (tabletop games in my example) are fighting for pieces of the same pie. In other words, the pool of potential backers is limited, and projects will likely be fighting over this limited pool of backers. Again, this is just my assumption, based upon the graph above.
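If you wanted to put a number on that suspected inverse relationship, a rank correlation is one quick way to do it. Here is a sketch using scipy’s spearmanr with made-up monthly counts and rates standing in for the real figures.

```python
# A quick sketch of testing the suspected inverse relationship, assuming
# two parallel 12-element lists (hypothetical values, January first):
# projects launched per month and the matching average success rates.
from scipy import stats

launch_counts = [80, 85, 90, 95, 110, 130, 160, 155, 120, 100, 95, 90]
success_rates = [0.55, 0.56, 0.52, 0.54, 0.48, 0.44, 0.38, 0.39,
                 0.45, 0.49, 0.47, 0.46]

# Spearman's rank correlation makes no linearity assumption; a clearly
# negative coefficient with a small p-value would support the
# "same pie" hypothesis.
rho, p_value = stats.spearmanr(launch_counts, success_rates)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```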
Here’s something else quite interesting. Since most projects run roughly one month, or roughly 30 days (a valid assumption according to Kickstarter), we could also assume that a project launched in one month (say February) will end the next month (say March). If I overlay the success rates of projects launched in a given month with the success rates of projects that ended the following month, the graphs seem to match up quite precisely.
The small bars show a 5% error in the estimates. It’s clear that for most months, the success rates of both the projects launched that month and the projects ending the following month fall within a 5% error of each other. I’m not sure what to conclude from this, except that they are largely measuring the same projects – with the exception of December and January. Any ideas why this is the case?
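Here is a rough sketch of that one-month-shift comparison in Python, using made-up monthly rates; it flags any launch month whose success rate sits more than 5 percentage points from the end rate of the following month.

```python
# A sketch of the one-month-shift comparison, using two hypothetical
# 12-element lists (January first): success rates by launch month and by
# end month. Each launch month is compared against the end rate of the
# following month, flagging any pair more than 5 percentage points apart.
launch_rates = [0.55, 0.56, 0.52, 0.54, 0.48, 0.44, 0.38, 0.39,
                0.45, 0.49, 0.47, 0.46]
end_rates = [0.47, 0.55, 0.56, 0.53, 0.55, 0.47, 0.43, 0.37,
             0.38, 0.46, 0.48, 0.46]
months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]

for i, month in enumerate(months):
    next_month_end = end_rates[(i + 1) % 12]  # December wraps to January
    gap = abs(launch_rates[i] - next_month_end)
    marker = "" if gap <= 0.05 else "  <-- outside 5%"
    print(f"{month}: launch {launch_rates[i]:.2f} vs next-month end "
          f"{next_month_end:.2f}{marker}")
```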
Boy I wish I had more time to really dig deeper into the analysis.
Nerdy Statistics Information
If you want to know more about how I got the data I am using and hear about some of the complications I had in getting it and using it, the whole data collection process is described in a previous blog post here.
If you want to know more about how I prepared the data for analysis, to try and minimize erroneous results and other complications, you can read a blog about how the data was prepared for the analysis here.
Testing for a Normal Distribution
Before we perform any statistical test, we will want to know whether the data is normally distributed. This is very important. I go into more detail about what exactly this means in an earlier blog post here. But for the sake of completeness:
Normally distributed simply means that results derived from the data will be driven dominantly by the average of the data (what’s “normal” about the data), and likewise that results will not be driven by certain values which are abnormal to the data and could, all by themselves, skew those results.
In case you’re curious, here are my histograms and P-P plot.
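For anyone who wants to run the same checks on their own data, here is a minimal sketch using scipy. The success rates are randomly generated stand-ins, and it draws a Q-Q plot via probplot, which serves the same diagnostic purpose as my P-P plot.

```python
# A minimal sketch of the normality checks, using randomly generated
# stand-in success rates (the real analysis would use the actual data).
# probplot draws a Q-Q plot: points hugging the reference line suggest
# an approximately normal distribution.
import matplotlib.pyplot as plt
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rates = rng.normal(loc=0.47, scale=0.06, size=200)  # hypothetical rates

# D'Agostino-Pearson test: a large p-value means no evidence against
# normality (it does not prove the data is normal).
stat, p_value = stats.normaltest(rates)
print(f"normality test p-value: {p_value:.3f}")

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.hist(rates, bins=15)
ax1.set_title("Histogram of success rates")
stats.probplot(rates, dist="norm", plot=ax2)
plt.tight_layout()
plt.show()
```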
Please Leave a Comment!
Please comment if you learned anything, if you want to correct anything, or if you have any questions!