It comes down to simple arithmetic. Because only so many things can be worked on at the same time by separate cores, there is only so much benefit per extra core.
Say you have tasks 1-10, and they have to be completed in order. One core does 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 in that order and is 100% efficient. Adding a second core means core 1 does 1, then core 2 does 2, core 1 does 3, then core 2 does 4... the chain takes just as long, and the system is only 50% efficient because each core sits idle half the time.
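To make the bookkeeping concrete, here is a tiny sketch of that fully serial case (the one-slot-per-task timing is a simplifying assumption of mine, not something inherent to real workloads):

```python
# Hypothetical model: 10 tasks, each depending on the previous one,
# and every task taking exactly 1 time slot.
tasks = 10
cores = 2

# The chain is strictly ordered, so only one task can run per slot
# no matter how many cores you have.
slots_used = tasks                          # still 10 slots with 2 cores
efficiency = tasks / (cores * slots_used)   # 10 / 20 = 50%

print(slots_used, efficiency)               # prints: 10 0.5
```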
Now say you have tasks 1-10 plus A1-A10 and B1-B10, where tasks n, An, and Bn must all be completed before the next level starts.
1 core = 1, A1, B1, 2, A2, B2, and so on, doing all 30 tasks one after another

2 cores = core 1 does 1 while core 2 does A1; then core 1 does B1 while core 2 idles; core 1 does 2 while core 2 does A2; then core 1 does B2 while core 2 idles; and so on
Now with 2 cores you complete the chain of tasks 33% faster (20 time slots instead of 30), but the entire system is only 75% efficient.
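That schedule can be sketched in a few lines of Python (again assuming every task takes exactly 1 time slot):

```python
import math

levels = 10          # levels 1..10, each with tasks n, An, Bn
tasks_per_level = 3
cores = 2

one_core_slots = levels * tasks_per_level                      # 30 slots
# With 2 cores each level takes ceil(3 / 2) = 2 slots:
#   slot 1: core 1 runs n,  core 2 runs An
#   slot 2: core 1 runs Bn, core 2 idles
two_core_slots = levels * math.ceil(tasks_per_level / cores)   # 20 slots

time_saved = 1 - two_core_slots / one_core_slots       # 1/3 faster
efficiency = one_core_slots / (cores * two_core_slots) # 30 / 40 = 75%

print(two_core_slots, round(time_saved, 2), efficiency)  # 20 0.33 0.75
```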
This is a combination of several laws, but the predominant one is Amdahl's law: only the parts of the code that can run independently of other code benefit from more cores, hence the diminishing returns on multiple cores.
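Amdahl's law itself is a one-liner. Here is a small helper of my own (the 75% figure below is just an illustrative choice) showing how fast the returns diminish:

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Overall speedup when only `parallel_fraction` of the work
    can be split across `cores`; the rest stays serial."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# If 75% of the work parallelizes, extra cores flatten out quickly:
for n in (1, 2, 4, 8, 64):
    print(n, round(amdahl_speedup(0.75, n), 2))
# The speedup can never exceed 1 / 0.25 = 4x, no matter how many cores.
```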
AMD started putting its memory controllers directly on the CPU die, giving it an advantage in memory-intensive games, while Intel focused on raw power per core and had the cores act more or less independently of each other. Remember that the first Core 2 Duos were essentially two Pentium M-derived cores slapped together on the same die, a design descended from the Pentium III's P6 architecture (arguably the most efficient until Sandy Bridge).
Edit: I apologize for the math speak and all. I am an accountant by trade, and a project manager and statistical analyst/forecaster by title. I do project management for software and hardware integration for a living.