Note: The first two parts of this multi-part article can be found here and here.
Several theoretical frameworks have played an important role in physics. Classical mechanics, quantum mechanics and statistical mechanics are all frameworks. It is worth repeating here that in and of themselves, none of these makes direct predictions that can be "tested" or "falsified". For example, in classical mechanics one can write down the Hamiltonian of a hypothetical system and study the solutions of this problem, even if such a system has no existence in nature. We do this all the time: problem sets in textbooks are aimed at teaching us how to use classical mechanics, not always how to make definite experimental predictions.
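To make this concrete, here is a small illustration of my own (not from the original argument, but in its spirit): within classical mechanics one is free to study, say, a pure quartic oscillator defined by the Hamiltonian

\[
H = \frac{p^2}{2m} + \lambda\, q^4 ,
\]

and to solve Hamilton's equations \(\dot{q} = \partial H/\partial p\), \(\dot{p} = -\partial H/\partial q\) for its trajectories, whether or not any system in nature is actually governed by this \(H\). The framework tells us how to compute; only a model ties the computation to an experiment.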
It is then up to us to make a model of a chosen physical system and try to experimentally test that model. If the model has been designed with suitable hindsight, it will surely work qualitatively up to a point. But now we test it by going to higher accuracy or varying the experimental parameters. What if the model fails, i.e. it disagrees with experiment? Then there are two very different possibilities: (i) the model failed because it was inadequate as a model and can be improved by tweaking it, for example by adding another term to the Hamiltonian, (ii) the model failed because classical mechanics as a framework is inadequate to address the problem. Both these types of failures are well-known and well-understood. In category (i) falls virtually any concrete model, for example a model of fluid dynamics that is missing some important feature of the fluid under study. One then rectifies the model by adding a term that captures this feature. In category (ii) is the fact that the electron in a hydrogen atom simply cannot be described by any known Hamiltonian in classical mechanics. One might say the framework of classical mechanics is thereby falsified and replaced by quantum mechanics. But I prefer to think that frameworks are not falsified. They simply outlive their usefulness and applicability.
The appropriate framework to describe elementary particles and fundamental forces is quantum field theory (QFT). It encapsulates both classical and quantum mechanics and extends them to the relativistic domain. By considering finite-temperature field theory, one also encapsulates statistical mechanics. This framework was originally formulated by Feynman, Schwinger and Tomonaga to study the quantum theory of electromagnetism. QFT is difficult and takes years of study to master. But merely mastering how QFT works would not provide us with any description of electromagnetism: one needs an actual model within the framework of QFT. This model, called "quantum electrodynamics" (QED), was proposed by the above authors and has been a spectacular success. It is noteworthy that, historically, the study of QFT (the framework) and QED (the model) went hand in hand.
Interestingly, QFT also has applications to condensed matter physics. It can be reformulated to describe a many-body system such as a crystal with local interactions between different sites. The framework is (essentially) the same but the physical models one studies are quite different, bearing little resemblance to QED. The models are tailored to study some material of interest, rather than the interactions of electrons and photons in the vacuum. It is remarkable, but no longer surprising to any experienced physicist, that a framework created to deal with one class of systems can be usefully applied to very different classes of systems. In the previous posting I referred to the "renormalisation group", which is another example of such a framework; indeed, it is part of the overarching framework of QFT.
The fact that one framework can apply to very different systems has played a central role in the development of physics. Perhaps the best example is that of the mass problem for weak vector bosons. Such bosons were proposed by Schwinger in the late 1950's as mediators of the weak interactions. By the early 1960's, many physicists were looking for a mechanism to assign a mass to such particles without contradicting the requirement of gauge invariance, a crucial consistency condition for vector bosons in QFT. A possible mechanism was first suggested in 1963 by Phil Anderson, using an analogy with the properties of superconducting materials. These embryonic ideas were developed in 1964 by Englert and Brout, and independently by Higgs, into a mechanism that can be generically applied to elementary particles: the Higgs mechanism.
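For readers who want a glimpse of how this works, here is a minimal sketch in the simplest (Abelian) case; this is my own illustrative summary, not part of the original discussion. Take a complex scalar field \(\phi\) coupled to a gauge field \(A_\mu\),

\[
\mathcal{L} = -\tfrac{1}{4} F_{\mu\nu} F^{\mu\nu} + |D_\mu \phi|^2 - V(\phi), \qquad V(\phi) = -\mu^2 |\phi|^2 + \lambda |\phi|^4 ,
\]

with \(D_\mu = \partial_\mu - i g A_\mu\). The potential is minimised not at \(\phi = 0\) but at \(|\phi| = v/\sqrt{2}\) with \(v = \mu/\sqrt{\lambda}\). Expanding around this vacuum, the term \(|D_\mu \phi|^2\) produces a piece \(\tfrac{1}{2} g^2 v^2 A_\mu A^\mu\), which is a mass term for the gauge boson with \(m_A = g v\), even though the Lagrangian itself remains fully gauge invariant throughout.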
As I've already mentioned, there was at first no definite prediction of how the Higgs mechanism should be tested, and no definite model - just a mechanism within a framework. But by the end of the 1970's, using both the Higgs mechanism and a framework due to Yang and Mills dating back to 1954, a single unified model of the electromagnetic and weak interactions was achieved. It is often called the "electroweak" model because it unifies electromagnetism and the weak interactions in much the same way as special relativity unified electricity and magnetism at the beginning of the 20th century. The electroweak model had many predictions, some of which (the existence of the W and Z bosons) were soon tested, at first indirectly and then directly. Other predictions, like the Higgs boson, had to wait nearly 50 years.
In the meanwhile, a model of the strong interactions (QCD) was proposed in 1973. The authors of this theory were very clever, but they also had a lot of good fortune. The frameworks of QFT, including Yang-Mills theory and the renormalisation group, were available. Experiments indicated the existence of quarks that interacted weakly at short distances. They proposed QCD by putting together all these ingredients in a brilliant and elegant fashion. I must mention here that if the Yang-Mills framework had not been (i) known, (ii) rendered respectable by its success in a totally different system, the weak interactions, and (iii) rendered consistent by the mathematical work of 't Hooft and Veltman, the strong interactions would have remained a mystery. But none of these points (i), (ii) and (iii) has anything to do with experiments on strongly interacting particles.
QCD together with the electroweak theory forms what is today called the Standard Model of fundamental interactions. This describes all the elementary particles in nature and all the fundamental interactions among them, except gravity, and it is a stunning success at extremely high levels of precision. Such an ambitious enterprise was surely not anticipated by Feynman et al when they initially formulated QFT. However, if they had been different people, or the age had been different, they might have ambitiously proclaimed in 1948 that the QFT framework would come to describe all the fundamental forces relevant for terrestrial particle physics - a "theory of everything". For saying this they would no doubt have been ridiculed, but they would have been correct.
(to be concluded)