I am a theoretician in the fields of Chemical Physics and Physical Chemistry, the closely related field of Atomic and Molecular Physics, and also in NMR and quantum signal processing, and have been for over fifty-three years. I was often inspired by experiments, their interpretation, and the underlying physical models they reveal. I have developed methodologies for computing and/or analyzing the results of these experiments from first principles. In doing so my coworkers and I have proposed physical models, methods of analysis, and viewpoints, and have developed new computational theories that explained the nature of the systems under experimental study and in turn became the essence of what was learned from these experiments.
To start I will mention the work I most value: my development of the “Stabilization Method”. The Stabilization Method appeared early in my career (1965 through the 1970s). It was inspired in 1967 by new experiments that, for the first time, were able to resolve short-lived negative-ion resonances in electron-atom and electron-molecule scattering cross sections. For molecules, the theories of the day recognized that what was seen were the vibrational states of some sort of short-lived negative-ion system; but for molecules, and in particular the diatomic molecules for which most of the data existed, no one understood the configurations of these states or the physics of why only certain of them seemed to be observed. What made computations on molecular systems seem out of the question at that time was the widespread (and wrong) belief in the physics community that only true scattering methods, such as the close-coupling method, which imposed particular scattering boundary conditions, could be used to compute the energy and lifetime of such states. The scattering methods of that day, for computational reasons, could barely treat atomic hydrogen below the n=2 energy level; calculations on non-spherical systems were out of the question. I thought differently. I was at that time a quantum chemist in possession of state-of-the-art programs for small diatomic molecules and ions that I had developed jointly, as a graduate student, with Professor Frank Harris. The programs could compute ground- and excited-state potential curves and then obtain their vibrational levels. They could be used to carry out computations on such negative-ion systems if one big obstacle could be overcome: these short-lived negative ions ionized, and on energy minimization evolved into neutral molecular states of lower energy.
As such, the Rayleigh-Ritz variation method, which underpinned the ability to determine linear and nonlinear parameters, was no longer valid. If used, it would vary parameters so as to eject the extra electron and then go on to compute the best configuration and energy for the remaining neutral system. I argued and demonstrated that the Rayleigh-Ritz variation principle could be replaced by my Stabilization Method, which in its first incarnation chose basis configurations and varied their nonlinear parameters so that the energy of the state stabilized. Intuitively, I argued that this would occur if sequential calculations used more and more diffuse basis functions. Physically, these changes of basis would not affect the localized resonant states, hence the stability, but would cause the energies of non-localized continuum states to decrease. Using these ideas I published results for the negative molecular hydrogen ion that reproduced the main observed resonances and predicted others that were soon observed. Simultaneously with these calculations, from the nature of the configurations that stabilized, I was able to develop a physical picture of such states and an explanation of why they existed. My “core excited” and “single particle” classifications compared nicely to Feshbach’s recently published classification of nuclear resonances. In 1970, in the now widely known Hazi-Taylor paper, the Stabilization Method was shown to be generic to all short-lived localized quantum states, no matter what the system, and the now widely used stabilization diagram was introduced. The method has since been used in nuclear physics, particle physics, nano-physics (to calculate the states of high transmission in quantum wells and dots), and atomic, molecular, and optical physics, where it is used to calculate autoionization energies, predissociation states, and many other phenomena involving decaying states and so-called bound states in the continuum.
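The idea can be made concrete with a small numerical sketch. The code below is my own hypothetical illustration, not any published program: it diagonalizes a one-dimensional model potential supporting a shape resonance (harmonic on one side, harmonic times a Gaussian cutoff on the other, a form similar to model potentials used in such studies; the cutoff value 0.225 and all grid parameters are illustrative) in boxes of increasing size. Plotting the resulting eigenvalues against the box size gives a stabilization diagram: continuum levels drift downward as the box widens, while a resonance energy stays flat.

```python
import numpy as np

def hamiltonian(L, dx=0.05, x_min=-6.0):
    """Finite-difference Hamiltonian (hbar = m = 1) on [x_min, L] with
    hard walls, for a model potential with a shape resonance: harmonic
    for x <= 0, harmonic times a Gaussian cutoff for x > 0."""
    n = int((L - x_min) / dx) - 1
    x = x_min + dx * np.arange(1, n + 1)
    lam = 0.225  # cutoff strength (illustrative value)
    v = 0.5 * x**2 * np.where(x <= 0, 1.0, np.exp(-lam * x**2))
    t = -0.5 / dx**2  # off-diagonal kinetic-energy element
    return np.diag(v - 2 * t) + t * (np.eye(n, k=1) + np.eye(n, k=-1))

def stabilization_diagram(box_sizes, n_levels=6):
    """Lowest eigenvalues as a function of box size: plateaus mark
    resonances, downward-drifting curves are discretized continuum."""
    return np.array([np.linalg.eigh(hamiltonian(L))[0][:n_levels]
                     for L in box_sizes])
```

Plotting `stabilization_diagram(np.linspace(8, 14, 60))` against the box sizes shows the flat plateau that marks the localized resonant state, exactly the stability the method exploits.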
Many variants of stabilization have been published, and many attempts by others to derive the method from first principles were published; all failed. It wasn’t until 1993, in collaboration with my associate V. Mandelshtam, that we not only derived the method from first principles but showed that a complete theory of scattering phenomena could be based on the stabilization diagram and the discrete-state eigenfunctions computed in its construction. Papers computing reactive scattering and photoionization cross sections, microcanonical and canonical rates of reaction, and dissociative photoabsorption cross sections followed, demonstrating the generality of the theory. Stabilization’s use was, and is, so ubiquitous and accepted that for many years it has been paid the ultimate compliment: many publications that clearly state that use is made of the Stabilization Method, and that often even include a figure with the stabilization diagram, do not even reference it. Along with perturbation theory and the variation principle it is now, albeit not as often used, part of basic quantum mechanical theory. References to this work also appear in the sorted publications list.
Over the last thirteen years, both before and after my retirement in 2006, I have had two research efforts going. The first, which I shall now describe, was the study of high molecular vibrations. To appreciate our contribution it is well to return to the situation as it existed in the late nineties. A series of beautiful experiments had been reported by Field (the bending spectrum of acetylene), by Treffs (DCO), and by Quack (CHBrClF) that probed the high-vibration region where multiple vibrational resonances existed. The problem started with the fact that all previous successful analytical work had been done in the regular, single-resonance interaction region. Herzberg’s methods, namely parameterized perturbation theory and wave-function inspection, were then adequate tools: they gave level assignments with one quasi-constant-of-the-motion quantum number per degree of freedom, and they allowed one to envision the motions of the nuclei in each state (the dynamics). In the high-vibration region discussed here, multiple resonant interactions appeared, and non-linear dynamics taught that the original modes of vibration often disappeared, replaced by multiple new but unknown motions. The quantum-chemically computed wave functions were also topologically so complex that they defied node counting to give quantum numbers and gave no clue as to the dynamics. Moreover, classically, a mix of chaos and regularity existed in the high region. Mathematically, chaos alone would guarantee the failure of the methods used at lower excitation; for example, there would be no global convergence of Schrödinger perturbation theory.
Many workers in the field (including myself and my colleague Christophe Jung) tried to overcome these problems by pointing to non-linear dynamics and the need for semi-classical, reduced-dimension phase-space representations of all the classical trajectories and periodic orbits, in order to explain the nature of the spectroscopic Hamiltonian and its number-space quantum wave functions, which the experimentalists had obtained in the process of fitting the measured spectrum. The Hamiltonian summarized the spectrum in a dynamic way by specifying the types of resonance interactions underlying it. Even with all this, until the 1999 effort by Jung and Taylor, aided by the original experimentalists Field and his student Jacobson, who explained their results to the former, nobody had put it all together so as to be able to assign state-by-state quantum numbers and associated dynamics. No one was able to contradict Field’s claim, based on the use of methods that worked in the single-resonance region, that the bending spectrum of acetylene was “UNASSIGNABLE”. If this were so, the promise of these experiments would go unfulfilled, and in this sense their value would be reduced. In 1999 J&T resolved this problem. The new idea that J&T added to the above mix came from a somewhat more advanced use of the principles of non-linear dynamics. I taught that the dynamics should be studied in the reduced-dimension action-angle phase-space subspaces where the constants of the motion (polyad, bending angular momentum) had unique values. There, when the excitation was high enough that the original modes’ anharmonically modified frequencies satisfied rational-ratio resonance conditions, the reduced-dimension phase space partitioned into separate regions, often coexisting in energy, called resonance zones. Each zone would have a new dynamics, and at its center had periodic orbits, one for each resonance active in the region.
These periodic orbits in turn guided the motion in the region. When the periodic orbit (for one active resonance), or the point of intersection of two periodic orbits, was transformed back to the full-dimensional normal or local displacement coordinate space, it would (and did) reveal trajectories that exemplified the new motions there. Moreover, in theory, if the motions indicated by the reduced-dimension periodic orbits could separately be quantized, then each region would yield simple but different assignable progressions, or ladders, of levels. The levels were simple in the sense that their reduced-dimension wave functions exhibited nodes along, and locally perpendicular to, the periodic orbit; counting these nodes gave quantum numbers which, combined with the fixed constants of the motion, constituted the sought-after assignment in the above sense. Just as in the low-energy analysis, higher quantum numbers lead to larger excursions of the motion in both the reduced-dimensional configuration space and the full-dimensional displacement coordinate space. Hence the assignment and dynamics problem reduced to the computationally demanding problem of finding the periodic orbits, and to the visually simple matching of them to density plots of the reduced-dimensional wave functions. Nodal ordering and counting gave the regional ladders and assignments that would probably have been noticed by the experimentalists had they been able to divide their observed states into ladders. Unfortunately, the quantum level steps of different ladders interspersed in energy, and no tools existed to sort them; the result was a complex and confusing spectrum. In 2010 our greatly improved skills at illustrating a given wave-function density on reduced toroidal angle configuration spaces allowed us to visually sort states, count nodes, and see the wave function’s structurally guiding elements (backbones) that well approximated the periodic orbits. This eliminated the “search for periodic orbits”: the analysis was now totally analytic and therefore required no computation. In 2011 it was shown that the assignment, but not the dynamics, could be accomplished without any semi-classics or canonical transformations, using only the eigenstates in the number representation as received from the experimentalists. The analysis was now totally quantum. The references to these papers are given on this website under the “Publications Sorted by Subject” link.
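Node counting of this kind is easiest to see in the simplest possible setting. As a minimal sketch (my own toy illustration on a one-dimensional harmonic oscillator, not code from these papers), counting the sign changes of a numerically computed eigenfunction recovers its quantum number:

```python
import numpy as np

def count_nodes(psi, tol=1e-6):
    """Count sign changes of a 1-D eigenfunction; for a single-well
    problem the node count equals the vibrational quantum number."""
    big = psi[np.abs(psi) > tol * np.abs(psi).max()]  # drop numerical tail
    return int(np.sum(np.sign(big[:-1]) != np.sign(big[1:])))

# Finite-difference harmonic oscillator (hbar = m = omega = 1).
n_pts = 801
x = np.linspace(-8.0, 8.0, n_pts)
dx = x[1] - x[0]
H = (np.diag(0.5 * x**2 + 1.0 / dx**2)
     - 0.5 / dx**2 * (np.eye(n_pts, k=1) + np.eye(n_pts, k=-1)))
energies, states = np.linalg.eigh(H)
assignments = [count_nodes(states[:, n]) for n in range(5)]
```

Here `assignments` comes out as 0, 1, 2, 3, 4, matching the familiar energies E_n ≈ n + 1/2. In the high-vibration work the same bookkeeping was done on reduced-dimension wave-function densities, counting nodes along and perpendicular to the backbone.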
A subject that I have successfully worked on is new methods of non-linear signal processing. In collaboration with V. Mandelshtam, these methods were first successfully applied to computing molecular eigenstates in dense spectral regions and to quantizing systems that were classically chaotic, where diagonalization on a basis set could not resolve the true eigenstates. New, highly efficient methods for propagating wave packets emulating molecular vibrations in the bound and resonant regions were developed and applied to creating a time signal. The signal could be short in time, and therefore more easily computed than the long signal needed to resolve the eigenfrequencies when the Fourier transform was employed to convert the time signal into a frequency spectrum. This short signal was then converted to a highly resolved frequency spectrum using our vastly improved version of the so-called Filter Diagonalization Method (FDM). The FDM has been called the “gold standard” for such calculations and has been included in the program package of the Abinitio Research Group at MIT. After this success in carrying out high-resolution time-to-frequency calculations on computed signals, I turned to applying such non-linear methods to calculating frequencies in liquid-phase Fourier-transform NMR experiments. Numerous papers had been published on this subject, which at a minimum is the problem of how to do this type of time-to-frequency transform in the presence of noise. The difficulty was that none of them could claim to produce a spectrum free of extra Lorentzian noise features, which sat alongside the properly placed Lorentzian signal features and were visually indistinguishable from them. It is this problem that we solved, for all but the most dilute of solutions, in our 2013 paper on NMR sensitivity.
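The FDM itself is involved, but the principle at stake, extracting line positions from a signal far shorter than Fourier resolution would demand by solving a small linear-algebra problem rather than transforming, can be sketched with a related and simpler harmonic-inversion scheme, the matrix pencil method. This is my illustrative stand-in, not the FDM:

```python
import numpy as np

def harmonic_inversion(c, n_modes):
    """Matrix-pencil estimate of the complex poles z_k in
    c[n] = sum_k d_k * z_k**n, given a short complex time signal c."""
    N = len(c)
    p = N // 2  # pencil parameter
    # Hankel data matrices shifted by one time step
    Y0 = np.array([c[i:i + p] for i in range(N - p)])
    Y1 = np.array([c[i + 1:i + p + 1] for i in range(N - p)])
    # Rank-reduce via SVD, keeping only the n_modes signal poles
    U, s, Vh = np.linalg.svd(Y0, full_matrices=False)
    Ur, sr, Vr = U[:, :n_modes], s[:n_modes], Vh[:n_modes]
    A = (Ur.conj().T @ Y1 @ Vr.conj().T) / sr[:, None]
    return np.linalg.eigvals(A)  # poles z_k = exp(-i*w_k - gamma_k)
```

For example, a 32-point signal built from two damped complex sinusoids at 0.5 and 1.3 rad/step is far too short for an FFT to separate cleanly, yet `np.angle(harmonic_inversion(c, 2))` returns both frequencies essentially exactly in the noiseless case.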
In this paper two methods of distinguishing signal features from noise features were successfully introduced. The first was to apply phasing to the calculated spectrum and to note that the signal features, but not the noise features, phased coherently, thereby allowing the noise features to be recognized and eliminated. The second method of eliminating noise features was to use FIDs with different numbers of transients and to note that once a signal peak appeared as a function of the accumulation, it always appeared at the same frequency with further accumulation. As such, the signal features were stable (stabilization again) to further accumulation. The same could not be said of features that were noise bundled into Lorentzian forms, thereby allowing these to be recognized and eliminated. With this, non-linear methods can be used to obtain frequencies, and advantage can be taken of their ability to detect signals at much lower signal-to-noise ratios (e.g. SNR = 1.2). This is important because it enables the many NMR studies using nuclei of low isotopic abundance to be done with little or no enrichment, which in turn saves the considerable labor and treasure needed to carry out enrichment. Perhaps most importantly, it enables many more potentially fruitful experiments on such systems which previously were not done because of cost and labor restrictions. For a fuller explanation of this work the JPC 2013 paper must be read. (Link above)
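The second criterion lends itself to a toy simulation. Every number below is invented for the sketch, and a simple FFT peak pick stands in for the non-linear spectral estimate of the actual 2013 procedure: a genuine line reappears at the same frequency as transients accumulate, while noise maxima wander, so intersecting the peak lists from two accumulation levels keeps only the signal.

```python
import numpy as np

rng = np.random.default_rng(0)

def averaged_fid(n_transients, n_pts=256):
    """Average n_transients noisy copies of a one-line complex FID
    (line at 0.125 cycles/sample; noise scale chosen to be severe)."""
    t = np.arange(n_pts)
    clean = np.exp(2j * np.pi * 0.125 * t - 0.005 * t)
    noise = (rng.standard_normal((n_transients, n_pts))
             + 1j * rng.standard_normal((n_transients, n_pts)))
    return clean + 3.0 * noise.mean(axis=0)

def peak_bins(fid, n_peaks=4):
    """Frequency bins of the n_peaks largest spectral magnitudes."""
    return set(np.argsort(np.abs(np.fft.fft(fid)))[-n_peaks:])

# Peaks stable under further accumulation are kept as signal.
stable = peak_bins(averaged_fid(16)) & peak_bins(averaged_fid(64))
```

The true line sits in FFT bin 32 (0.125 × 256) and survives the intersection; noise maxima, being uncorrelated between the two accumulations, generally do not.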