Is the world deterministic or probabilistic? This is an age-old question that has never been answered properly. In the beginning the debate was purely philosophical. But later, the debate spilled over into the arena of exact science.
Determinism holds that events follow a rigid chain of cause and effect in time. This rigid law of causality requires that the whole universe, from its beginning to the end of time, be unerringly predictable, if only the initial conditions can be known.
The debate over determinism was raging even in Ancient Greece a few thousand years ago. According to the deterministic model of science, the universe unfolds in time like the workings of a perfect machine, without a shred of randomness or deviation from the predetermined laws.
The person most closely associated with the establishment of determinism at the core of modern science is Isaac Newton, who lived in England about 300 years ago.
Newton discovered a concise set of principles, expressible in only a few sentences, which he showed could predict the motion in an astonishingly wide variety of systems to a very high degree of accuracy.
Although Newton's laws were superseded around the year 1900 by a larger set of physical laws, determinism remains today as the core philosophy and goal of physical science.
An important discovery that paved the way for modern science around 1500 A.D. was the idea that the laws of the material universe could be understood meaningfully only by expressing physical properties as quantified measurements, that is, in numerical terms and not just in words.
The measurements that appear in Newton's laws depend on the particular system being studied, but they typically include the position, speed, and direction of motion of all the objects in the system, as well as the strength and direction of any forces on these objects, at any given time in the history of the system.
As dynamical laws, Newton's laws are deterministic because they imply that for any given system, the same initial conditions will always produce identically the same outcome.
The Newtonian model of the universe is often depicted as a billiard game, in which the outcome unfolds mathematically from the initial conditions in predetermined fashion, like a movie that can be run forwards or backwards in time.
The billiard game is a useful analogy, because on the microscopic level, the motion of molecules can be compared to the collisions of the balls on the billiard table, with the same dynamical laws invoked in both cases.
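The determinism in this picture can be shown directly in code: running the same dynamical rule twice from the same initial conditions always reproduces the same outcome. A minimal Python sketch, using a hypothetical one-dimensional bouncing ball rather than a full billiard table:

```python
def simulate(x, v, steps=1000, dt=0.01, g=-9.8):
    """Euler integration of a ball bouncing under gravity (toy model)."""
    for _ in range(steps):
        v += g * dt          # gravity changes the speed
        x += v * dt          # the speed changes the position
        if x < 0:            # perfectly elastic bounce off the floor
            x, v = -x, -v
    return x, v

run1 = simulate(10.0, 0.0)
run2 = simulate(10.0, 0.0)
print(run1 == run2)  # prints True: identical initial conditions, identical outcome
```

The same rule applied to the same numbers yields the same numbers, which is all that "deterministic" means for a dynamical law.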
One of the fundamental principles of experimental science is that no real measurement is infinitely precise, but instead must necessarily include a degree of uncertainty in the value.
This uncertainty, present in any real measurement, arises because any imaginable measuring device, even one designed and used perfectly, can record its measurement only to finite precision.
Throughout most of the modern history of physics, it has been assumed that it is possible to shrink the uncertainty in the final dynamical prediction by measuring the initial conditions to greater and greater accuracy.
It is important to remember that the uncertainty in the dynamical outcome does not arise from any randomness in the equations of motion (they are completely deterministic) but rather from the lack of infinite accuracy in the initial conditions.
The unspoken goal of experimental science has been that as measuring instruments become more and more accurate through technology, the accuracy of the predictions made by applying the dynamical laws will become greater and greater, approaching but never reaching absolute accuracy.
Dynamical instability refers to a special kind of behaviour in time found in certain physical systems and discovered around the year 1900 by the physicist Henri Poincaré.
It is impossible to actually measure the initial positions and speeds of the planets to infinite precision, even using perfect measuring instruments, since it is impossible to record any measurement to infinite precision. Thus there always exists an imprecision, however small, in all astronomical predictions made by the equation forms of Newton's laws.
Up until the time of Poincaré, the lack of infinite precision in astronomical predictions was considered a minor problem, however, because of a tacit assumption made by almost all physicists at that time.
The assumption was that if you could shrink the uncertainty in the initial conditions---perhaps by using finer measuring instruments---then any imprecision in the prediction would shrink in the same way.
In other words, by putting more precise information into Newton's laws, you got more precise output for any later or earlier time. Thus it was assumed that it was theoretically possible to obtain nearly-perfect predictions for the behaviour of any physical system.
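This assumption can be illustrated with a well-behaved system. In the small-angle pendulum sketched below (a stand-in chosen for illustration, not a system named in the text), the error in the predicted angle at any later time shrinks in direct proportion to the error in the initial angle:

```python
import math

def pendulum_angle(theta0, t, omega=1.0):
    # Small-angle pendulum: theta(t) = theta0 * cos(omega * t),
    # an exactly linear dependence on the initial angle theta0.
    return theta0 * math.cos(omega * t)

t = 100.0
for eps in (1e-3, 1e-6, 1e-9):   # ever-smaller initial uncertainty
    err = abs(pendulum_angle(0.1 + eps, t) - pendulum_angle(0.1, t))
    print(f"initial error {eps:.0e} -> final error {err:.0e}")
```

Shrinking the initial uncertainty a thousandfold shrinks the prediction error a thousandfold, exactly the behaviour physicists tacitly assumed held for every system.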
But Poincaré noticed that certain astronomical systems did not seem to obey the rule that shrinking the uncertainty in the initial conditions always shrank the uncertainty in the final prediction in a corresponding way.
By examining the mathematical equations, he found that although certain simple astronomical systems did indeed obey the "shrink-shrink" rule for initial conditions and final predictions, other systems did not.
The astronomical systems which did not obey the rule typically consisted of three or more astronomical bodies with interaction between all three. For these types of systems, Poincaré showed that a very tiny imprecision in the initial conditions would grow in time at an enormous rate.
Thus two nearly-indistinguishable sets of initial conditions for the same system would result in two final predictions which differed vastly from each other.
Poincaré mathematically proved that this "blowing up" of tiny uncertainties in the initial conditions into enormous uncertainties in the final predictions remained even if the initial uncertainties were shrunk to the smallest imaginable size.
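Poincaré's three-body analysis is far beyond a short sketch, but the logistic map, a standard textbook example of the same sensitivity to initial conditions, shows the effect in a few lines: shrinking the initial discrepancy a millionfold only delays the blow-up by a handful of iterations.

```python
def divergence(x0, eps, steps=60, r=4.0):
    """Largest gap between two logistic-map orbits started eps apart."""
    a, b = x0, x0 + eps
    worst = 0.0
    for _ in range(steps):
        a = r * a * (1.0 - a)     # iterate both orbits with the
        b = r * b * (1.0 - b)     # same deterministic rule
        worst = max(worst, abs(a - b))
    return worst

# Shrinking the initial uncertainty a thousandfold barely delays the blow-up:
for eps in (1e-9, 1e-12, 1e-15):
    print(f"initial gap {eps:.0e} -> largest later gap {divergence(0.2, eps):.3f}")
```

The gap roughly doubles at every step, so each thousandfold reduction in the initial uncertainty buys only about ten more steps of agreement before the two "predictions" differ wildly.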
The gist of Poincaré's mathematical analysis was a proof that for these "complex systems," the only way to obtain predictions with any degree of accuracy at all would entail specifying the initial conditions to absolutely infinite precision.
The extreme "sensitivity to initial conditions" mathematically present in the systems studied by Poincaré has come to be called dynamical instability, or simply chaos.
Because long-term mathematical predictions made for chaotic systems are no more accurate than random chance, the equations of motion can yield only short-term predictions with any degree of accuracy.
At the time of its discovery, the phenomenon of chaotic motion was considered a mathematical oddity. In the decades since then, physicists have come to discover that chaotic behaviour is much more widespread, and may even be the norm in the universe.
One of the most important discoveries was made in 1963 by the meteorologist Edward Lorenz, who wrote a simple computer program to study a simplified model of the weather.
The mathematics inside Lorenz's model of atmospheric currents was widely studied in the 1970s. Gradually it came to be known that even the smallest imaginable discrepancy between two sets of initial conditions would always result in a huge discrepancy at later or earlier times: the hallmark of a chaotic system.
Because the atmosphere is chaotic, the uncertainties in any initial measurement of its state, no matter how small, would eventually overwhelm any calculation and defeat the accuracy of the forecast.
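Lorenz's model boils down to three coupled differential equations. A rough forward-Euler sketch (the step size and starting point here are illustrative choices, not Lorenz's original settings) shows that two trajectories starting a billionth apart end up in entirely different states:

```python
def separation(pert, steps=8000, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Largest distance between two Lorenz trajectories started `pert` apart in x."""
    x1, y1, z1 = 1.0, 1.0, 1.0
    x2, y2, z2 = 1.0 + pert, 1.0, 1.0
    worst = 0.0
    for _ in range(steps):
        # One Euler step of the Lorenz equations for each trajectory
        x1, y1, z1 = (x1 + sigma * (y1 - x1) * dt,
                      y1 + (x1 * (rho - z1) - y1) * dt,
                      z1 + (x1 * y1 - beta * z1) * dt)
        x2, y2, z2 = (x2 + sigma * (y2 - x2) * dt,
                      y2 + (x2 * (rho - z2) - y2) * dt,
                      z2 + (x2 * y2 - beta * z2) * dt)
        worst = max(worst, ((x1 - x2) ** 2 + (y1 - y2) ** 2 + (z1 - z2) ** 2) ** 0.5)
    return worst

for pert in (1e-6, 1e-9):
    print(f"initial separation {pert:.0e} -> largest later separation {separation(pert):.1f}")
```

However small the initial perturbation, the two simulated "weathers" eventually diverge across the full extent of the system, which is exactly why a tiny measurement error defeats a long-range forecast.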
This principle is sometimes called the "Butterfly Effect." In terms of weather forecasts, the "Butterfly Effect" refers to the idea that whether or not a butterfly flaps its wings in one part of the world can determine whether or not a storm arises a year later on the other side of the world.
Thus the presence of chaotic systems in nature seems to place a limit on our ability to apply deterministic physical laws to predict motions with any degree of certainty. The discovery of chaos seems to imply that randomness lurks at the core of any deterministic model of the universe.
The inescapable uncertainty in the predictability of dynamical systems obeying Newton's laws of motion, which Henri Poincaré first discovered over a century ago, is now an accepted truth in the micro-world of subatomic particles. And with the findings of the meteorologist Edward Lorenz some four decades ago, it is now acknowledged that the world of macroscopic measurements, too, is inherently indeterminate, or even chaotic. Yet the edifice of science will not crumble as its deterministic foundation, standing on causality, gives way to ultimate indeterminacy. Chaos theory does not spell the doom of science. On the contrary, it is gradually proving to be a powerful idea that further strengthens the basic premise on which the very model of causality hinges.