February 4, 2019
The material on this website is freely available for educational purposes.
Requests for re-use of digital images: contact the UC Press.
Proper citation of the material found in these pages is:
Tauxe, L, Banerjee, S.K., Butler, R.F. and van der Voo R, Essentials of Paleomagnetism, 5th Web Edition, 2018.
The printed version of this book appeared January, 2010. Order a printed version. (cheaper than printing it yourself!)
This book is intended to work with the companion software package described in PmagPy Cookbook.
This material is based upon work supported by the National Science Foundation.
1 Purpose of the book
The geomagnetic field acts both as an umbrella, shielding us from cosmic radiation and as a window, offering one of the few glimpses of the inner workings of the Earth. Ancient records of the geomagnetic field can inform us about geodynamics of the early Earth and changes in boundary conditions through time. Thanks to its essentially dipolar nature, the geomagnetic field has acted as a guide, pointing to the axis of rotation thereby providing latitudinal information for both explorers and geologists.
Human measurements of the geomagnetic field date back about a millennium and are quite sparse prior to about 400 years ago. Knowledge of what the field has done in the past relies on accidental records carried by geological and archaeological materials. Teasing out meaningful information from such materials requires an understanding of the fields of rock magnetism and paleomagnetism, the subjects of this book. Rock and paleomagnetic data are useful in many applications in Earth Science in addition to the study of the ancient geomagnetic field. This book attempts to draw together essential rock magnetic theory and useful paleomagnetic techniques in a consistent and up-to-date manner. It was written for several categories of readers:
There are a number of excellent references on paleomagnetism and on the related specialties (rock magnetism and geomagnetism). The ever popular but now out-of-print text by Butler (1992) has largely been incorporated into the present text. For in-depth coverage of rock magnetism, we recommend Dunlop and Özdemir (1997). Similarly, for geomagnetism, please see Backus et al. (1996). A rigorous analysis of the statistics of spherical data is given by Fisher et al. (1987). The details of paleomagnetic poles are covered in van der Voo (1993), and magnetostratigraphy is covered in depth by Opdyke and Channell (1996). The Treatise on Geophysics, vol. 5 (edited by Kono, 2007) and The Encyclopedia of Geomagnetism and Paleomagnetism (edited by Gubbins and Herrero-Bervera, 2007) have up-to-date reviews of many topics covered in this book. The present book is intended to augment or distill information from the broad field of paleomagnetism, complementing the existing body of literature.
An important part of the problems in this book is to teach students to write simple computer programs themselves and use programs that are supplied as a companion set of software (PmagPy). The programming language chosen for this is Python because it is free, cross platform, open source and well supported. There are excellent online tutorials for Python and many open source modules which make software development cheaper and easier than any other programming environment. The appendix provides a brief introduction to programming and using Python. The reader is well advised to peruse the PmagPy Cookbook for further help in gaining necessary skills with a computer. Also, students should have access to a relatively new computer (Windows, Mac OS 10.4 or higher are supported, but other computers may also work.) Software installation is described at: magician.ucsd.edu/Software/PmagPy.
2 What is in the book
This book is a collaborative effort with contributions from R.F. Butler (Chapters 1, 3, 4, 6, 7, 9, 11 and the Appendix), S.K. Banerjee (Chapter 8) and R. van der Voo (Chapter 16). The MagIC database team designed and deployed the MagIC database, which we have made liberal use of in providing data for problem sets and in writing the PmagPy Cookbook, so there were significant contributions to this book project from C.G. Constable, A.A.P. Koppers and Rupert Minnett.
At the beginning of most chapters, there are recommended readings which will help fill in background knowledge. There are also suggested readings at the end of most chapters that will allow students to pursue the subject matter in more depth.
The chapters themselves contain the essential theory required to understand paleomagnetic research as well as illustrative applications. Each chapter is followed by a set of practical problems that challenge the student’s understanding of the material. Many problems use real data and encourage students to analyze the data themselves. [Solutions to the problems may be obtained from LT by instructors of classes using this book as a text.] The appendix contains detailed derivations, assorted techniques, useful tables and a comprehensive explanation of the PmagPy set of programs.
Chapter 1 begins with a review of the physics of magnetic fields. Maxwell’s equations are introduced where appropriate and the magnetic units are derived from first principles. The conversion of units between cgs and SI conventions is also discussed and summarized in a handy table.
Chapter 2 reviews essential aspects of the Earth’s magnetic field, discussing the geomagnetic potential, geomagnetic elements, and the geomagnetic reference fields. The various magnetic poles of the Earth are also introduced.
Chapters 3-8 deal with rock and mineral magnetism. The most important aspect of rock magnetism to the working paleomagnetist is how rocks can become magnetized and how they can stay that way. In order to understand this, Chapter 3 presents a discussion of the origin of magnetism in crystals, including induced and remanent magnetism. Chapter 4 continues with an explanation of anisotropy energy, magnetic domains and superparamagnetism. Magnetic hysteresis is covered in Chapter 5. Chapter 6 deals with specific magnetic minerals and their properties, leading up to the origin of magnetic remanence in rocks, the topic of Chapter 7. Finally, Chapter 8 deals with applied rock magnetism and environmental magnetism.
Chapters 9-13 delve into the nuts and bolts of paleomagnetic data acquisition and analysis. Chapter 9 suggests ways of sampling rocks in the field and methods for treating them in the laboratory to obtain a paleomagnetic direction. Various techniques for obtaining paleointensities are described in Chapter 10. Once the data are in hand, Chapters 11 and 12 deal with statistical methods for analyzing magnetic vectors. Paleomagnetic tensors are introduced in Chapter 13, which explains measurement and treatment of anisotropy data.
Chapters 14-16 illustrate diverse applications of paleomagnetic data. Chapter 14 shows how they are used to study the geomagnetic field. Chapter 15 describes the development of the geomagnetic polarity time scale and various applications of magnetostratigraphy. Chapter 16 focuses on apparent polar wander and tectonic applications.
The appendix contains more detailed information, included for supplemental background or useful techniques. It is divided into several sections: Appendix A summarizes various definitions and detailed derivations including various mathematical tricks such as vector and tensor operations. Appendix B.1 describes some plots commonly employed by paleomagnetists. Appendix C.2 collects together methods and tables useful in directional statistics. Appendix D describes techniques specific to the measurement and analysis of anisotropy data. The PmagPy Cookbook provides an introduction to the Magnetics Information Consortium (MagIC) database, the current repository for rock and paleomagnetic data and summarizes essential computer skills including basic Unix commands, an introduction to Python programming and extensive examples of programs in the PmagPy software package used in the problems at the end of each chapter.
3 How to use the book
Each chapter builds on the principles outlined in the previous chapters, so the reader is encouraged to work through the book sequentially. There are recommended readings before and after every chapter, selected to provide background information and supplemental reading for the motivated reader, respectively. These are meant to be optional.
The reader is encouraged to study Chapter 6 in the PmagPy Cookbook before beginning to work on the problems at the end of each chapter. The utility of the book will be greatly enhanced by successfully installing and using the programs referred to in the problems. By conscientiously trying them out as they are mentioned, the reader will not only gain familiarity with the PmagPy software package, but also with the concepts discussed in the chapters.
We have attempted to maintain a consistent notation throughout the book. Vectors and tensors are in bold face; other parameters, including vector components, are in italics. The most important physical and paleomagnetic parameters, acronyms and statistics are listed in Appendix A.
The problems in this book are intended to be solved with Jupyter Notebooks in conjunction with the PmagPy software of Tauxe et al. (2016). To get ready for this, you must complete the following steps:
The problems at the end of each chapter assume proficiency in both Python programming and the use of Jupyter notebooks. See Python Programming for Earth Scientists for a complete course.
For examples on how to use PmagPy in a Jupyter notebook, see PmagPy.ipynb in your data_files directory.
LT is the primary author of this book and bears sole responsibility for all mistakes. There are significant contributions by RFB, SKB and RvdV. We are indebted to many people for assistance great and small. This book began life as a set of lecture notes based loosely on the earlier book by Tauxe (1998). Many pairs of eyes hunted down errors in the text and the programs each time the course was given. The course was also occasionally co-taught with Cathy Constable and Jeff Gee, who contributed significantly to the development of the manuscript and the proof-reading thereof. Thanks go to the many “live” and “online” students who patiently worked through various drafts. Special thanks go to Kenneth Yuan, Liu Cy, Maxwell Brown and Michael Wack, who provided many detailed comments and helpful suggestions. Reviews by Ken Kodama, Brad Clement, Scott Bogue and Cor Langereis improved the book substantially. Also, careful proof-reading by Newlon Tauxe of the first few chapters is greatly appreciated. And of course, I am deeply grateful to my mentors, Dennis V. Kent and Neil D. Opdyke, who taught me how to do science.
I owe a debt of gratitude to the many sources of public domain software that ended up in the package PmagPy, including contributions by Cathy Constable, Monika Korte, Jeff Gee, Peter Selkin, Ron Shaar, Nick Swanson-Hysell, Ritayan Mitra and especially Lori Jonestrask, as well as the many dedicated contributors to the Numpy, Matplotlib, Basemap, Cartopy, and Pandas Python modules used extensively by PmagPy. Also, many illustrations were prepared with the excellent programs Magmap, Contour and Plotxy by Robert L. Parker, to whom I remain deeply grateful. I gratefully acknowledge the authors of many earlier books, too many to name but included in the Bibliography, which both educated and inspired me.
Finally, I am grateful to my husband, Hubert Staudigel, and my children, Philip and Daniel Staudigel who have long tolerated my obsession with paleomagnetism with grace and good humor and frequently good advice.
Paleomagnetism is the study of the magnetic properties of rocks. It is one of the most broadly applicable disciplines in geophysics, having uses in diverse fields such as geomagnetism, tectonics, paleoceanography, volcanology, paleontology, and sedimentology. Although the potential applications are varied, the fundamental techniques are remarkably uniform. Thus, a grounding in the basic tools of paleomagnetic data analysis can open doors to many of these applications. One of the underpinnings of paleomagnetic endeavors is the relationship between the magnetic properties of rocks and the Earth’s magnetic field.
In this chapter we will review the basic physical principles behind magnetism: what are magnetic fields, how are they produced and how are they measured? Although many find a discussion of scientific units boring, much confusion arose when paleomagnetists switched from “cgs” to the Système International (SI) units and mistakes abound in the literature. Therefore, we will explain both unit systems and look at how to convert successfully between them. There is a review of essential mathematical tricks in Appendix A to which the reader is referred for help.
Magnetic fields, like gravitational fields, cannot be seen or touched. We can feel the pull of the Earth’s gravitational field on ourselves and the objects around us, but we do not experience magnetic fields in such a direct way. We know of the existence of magnetic fields by their effect on objects such as magnetized pieces of metal, naturally magnetic rocks such as lodestone, or temporary magnets such as copper coils that carry an electrical current. If we place a magnetized needle on a cork in a bucket of water, it will slowly align itself with the local magnetic field. Turning on the current in a copper wire can make a nearby compass needle jump. Observations like these led to the development of the concept of magnetic fields.
Electric currents make magnetic fields, so we can define what is meant by a “magnetic field” in terms of the electric current that generates it. Figure 1.1a is a picture of what happens when we pierce a flat sheet with a wire carrying a current i. When iron filings are sprinkled on the sheet, the filings line up with the magnetic field produced by the current in the wire. A loop tangential to the field is shown in Figure 1.1b, which illustrates the right-hand rule (see inset to Figure 1.1b). If your right thumb points in the direction of (positive) current flow (the direction opposite to the flow of the electrons), your fingers will curl in the direction of the magnetic field.
The magnetic field H points at right angles to both the direction of current flow and to the radial vector r in Figure 1.1b. The magnitude of H (denoted H) is proportional to the strength of the current i. In the simple case illustrated in Figure 1.1b, the magnitude of H is given by Ampère’s law:

H = i/(2πr),

where r is the length of the vector r. So, now we know the units of H: Am⁻¹.
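As a quick numerical check of Ampère’s law for a long straight wire, here is a minimal Python sketch (the function name is ours, not part of PmagPy):

```python
import math

def h_from_wire(i, r):
    """Magnitude of H (A/m) at distance r (m) from a long straight
    wire carrying a current i (A): H = i / (2 pi r)."""
    return i / (2.0 * math.pi * r)

# e.g. the field 10 cm from a wire carrying 1 A:
H = h_from_wire(1.0, 0.1)
```

Doubling the distance halves the field, as the 1/r dependence requires.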
Ampère’s Law in its most general form is one of Maxwell’s equations of electromagnetism: in a steady electrical field, ∇× H = Jf, where Jf is the electric current density (see Section A.3.6 in the appendix for review of the ∇ operator). In words, the curl (or circulation) of the magnetic field is equal to the current density. The origin of the term “curl” for the cross product of the gradient operator with a vector field is suggested in Figure 1.1a in which the iron filings seem to curl around the wire.
An electrical current in a wire produces a magnetic field that “curls” around the wire. If we bend the wire into a loop with an area πr² that carries a current i (Figure 1.2a), the current loop would create the magnetic field shown by the pattern of the iron filings. This magnetic field is the same as the field that would be produced by a permanent magnet. We can quantify the strength of that hypothetical magnet in terms of a magnetic moment m (Figure 1.2b). The magnetic moment is created by a current i and also depends on the area of the current loop (the bigger the loop, the bigger the moment). Therefore, the magnitude of the moment can be quantified by m = iπr². The moment created by a set of loops (as shown in Figure 1.2c) would be the sum of the n individual loops, i.e.:

m = niπr².

So, now we know the units of m: Am². In nature, magnetic moments are carried by magnetic minerals, the most common of which are magnetite and hematite (see Chapter 6 for details).
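The moment of a stack of current loops follows directly from m = niπr². A small Python sketch (the function name is ours):

```python
import math

def loop_moment(n, i, r):
    """Magnetic moment (A m^2) of n coaxial current loops of radius
    r (m), each carrying a current i (A): m = n * i * pi * r**2."""
    return n * i * math.pi * r**2

# e.g. three loops of radius 5 cm carrying 2 A:
m = loop_moment(3, 2.0, 0.05)
```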
The magnetic field is a vector field because at any point it has both direction and magnitude. Consider the field of the bar magnet in Figure 1.3a. The direction of the field at any point is given by the arrows while the strength depends on how close the field lines are to one another. The magnetic field lines represent magnetic flux. The density of flux lines is one measure of the strength of the magnetic field: the magnetic induction B.
Just as the motion of electrically charged particles in a wire (a current) creates a magnetic field (Ampère’s Law), the motion of a magnetic field creates electric currents in nearby wires. The stronger the magnetic field, the stronger the current in the wire. We can therefore measure the strength of the magnetic induction (the density of magnetic flux lines) by moving a conductive wire through the magnetic field (Figure 1.3b).
Magnetic induction can be thought of as something that creates a potential difference with voltage V in a conductor of length l when the conductor moves relative to the magnetic induction B with velocity v (see Figure 1.3b): V = vlB. From this we can derive the units of magnetic induction: the tesla (T). One tesla is the magnetic induction that generates a potential of one volt in a conductor of length one meter when moving at a rate of one meter per second. So now we know the units of B: V ⋅ s ⋅ m−2 = T.
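The defining relation V = vlB can be inverted to infer B from a measured voltage, which is essentially what a moving-conductor measurement does. A minimal sketch (the function name is ours):

```python
def induction_from_voltage(V, v, l):
    """Magnetic induction B (tesla) inferred from the potential V (volts)
    generated across a conductor of length l (m) moving at velocity
    v (m/s): B = V / (v * l)."""
    return V / (v * l)

# one volt across a one-meter wire moving at one meter per second
# corresponds to one tesla:
B = induction_from_voltage(1.0, 1.0, 1.0)
```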
Another way of looking at B is that if magnetic induction is the density of magnetic flux lines, it must be the flux Φ per unit area. So an increment of flux dΦ is the field magnitude B times the increment of area dA. The area here is the length of the wire l times its displacement ds in time dt. The instantaneous velocity is v = ds/dt, so dΦ = BdA and the rate of change of flux is:

dΦ/dt = Bl(ds/dt) = vlB = V. (Equation 1.2)
Equation 1.2 is known as Faraday’s law and in its most general form is the fourth of Maxwell’s equations. We see from Equation 1.2 that the units of magnetic flux must be a volt-second which is a unit in its own right: the weber (Wb). The weber is defined as the amount of magnetic flux which, when passed through a one-turn coil of a conductor carrying a current of one ampere, produces an electric potential of one volt. This definition suggests a means to measure the strength of magnetic induction and is the basis of the “fluxgate” magnetometer.
A magnetic moment m in the presence of a magnetic field B has a magnetostatic energy (Em) associated with it. This energy tends to align compass needles with the magnetic field (see Figure 1.4). Em is given by −m ⋅ B or −mB cosθ where m and B are the magnitudes of m and B, respectively (see Section A.3.4 in the appendix for review of vector multiplication). Magnetic energy has units of joules and is at a minimum when m is aligned with B.
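The magnetostatic energy is easy to evaluate directly. A short sketch (the function name is ours) verifying that the energy is lowest when m is aligned with B:

```python
import math

def magnetostatic_energy(m, B, theta):
    """Energy (joules) of a moment m (A m^2) in a field B (tesla) at
    angle theta (radians) to the field: Em = -m * B * cos(theta)."""
    return -m * B * math.cos(theta)

# energy is minimized when the moment is aligned with the field:
aligned = magnetostatic_energy(1.0, 1e-4, 0.0)
opposed = magnetostatic_energy(1.0, 1e-4, math.pi)
```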
Magnetization M is a normalized moment (Am²). We will use the symbol M for volume normalization (units of Am⁻¹) or Ω for mass normalization (units of Am²kg⁻¹). Volume normalized magnetization therefore has the same units as H, implying that there is a current somewhere, even in permanent magnets. In the classical view (pre-quantum mechanics), sub-atomic charges such as protons and electrons can be thought of as tracing out tiny circuits and behaving as tiny magnetic moments. They respond to external magnetic fields and give rise to an induced magnetization. The relationship between the magnetization induced in a material MI and the external field H is defined as:

MI = χbH. (Equation 1.3)
The parameter χb is known as the bulk magnetic susceptibility of the material; it can be a complicated function of orientation, temperature, state of stress, time scale of observation and applied field, but is often treated as a scalar. Because M and H have the same units, χb is dimensionless. In practice, the magnetic response of a substance to an applied field can be normalized by volume (as in Equation 1.3) or by mass or not normalized at all. We will use the symbol κ for mass normalized susceptibility and K for the raw measurements (see Table 1.1) when necessary.
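Equation 1.3 in code form (a sketch; the susceptibility value below is purely illustrative):

```python
def induced_magnetization(chi_b, H):
    """Induced magnetization M_I (A/m) in an external field H (A/m),
    per Equation 1.3: M_I = chi_b * H. Because M and H share units,
    the bulk susceptibility chi_b is dimensionless."""
    return chi_b * H

# e.g. a weakly magnetic material with chi_b ~ 1e-3 (illustrative)
# in a ~40 A/m field:
M_I = induced_magnetization(1e-3, 40.0)
```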
Certain materials can produce magnetic fields in the absence of external magnetic fields (i.e., they are permanent magnets). As we shall see in later chapters, these so-called “spontaneous” magnetic moments are also the result of spins of electrons which, in some crystals, act in a coordinated fashion, thereby producing a net magnetic field. The resulting spontaneous magnetization can be fixed by various mechanisms and can preserve records of ancient magnetic fields. This remanent magnetization forms the basis of the field of paleomagnetism and will be discussed at length in subsequent chapters.
B and H are closely related, and in paleomagnetic practice both B and H are referred to as the “magnetic field”. Strictly speaking, B is the induction and H is the field, but the distinction is often blurred. The relationship between B and H is given by:

B = μH,

where μ is a physical constant known as the permeability. In a vacuum, this is the permeability of free space, μo. In the SI system, μ has dimensions of henries per meter and μo is 4π × 10⁻⁷ H⋅m⁻¹. In most cases of paleomagnetic interest, we are outside the magnetized body so M = 0 and B = μoH.
So far, we have derived magnetic units in terms of the Système International (SI). In practice, you will notice that people frequently use what are known as cgs units, based on centimeters, grams and seconds. You may wonder why any fuss would be made over using meters as opposed to centimeters because the conversion is trivial. With magnetic units, however, the conversion is far from trivial and has been the source of confusion and many errors. So, in the interest of clearing things up, we will briefly outline the cgs approach to magnetic units.
The derivation of magnetic units in cgs is entirely different from SI. The approach we will take here follows that of Cullity (1972). We start with the concept of a magnetic pole with strength p instead of with current loops as we did for SI units. We will consider the force between two poles p1, p2 (see Figure 1.5) by analogy with Coulomb’s law, which states that the force between two charges (q1, q2) is:

F = kq1q2/r², (Equation 1.5)

where r is the distance between the two charges. In cgs units, the proportionality constant k is simply unity, whereas in SI units it is 1/(4πϵ0), where ϵ0 = 10⁷/(4πc²) and c is the speed of light in a vacuum (hence ϵ0 = 8.854 × 10⁻¹² AsV⁻¹m⁻¹). [You can see why many people really prefer cgs, but we are not allowed to publish in cgs in most geophysical journals, so we just must grin and bear it!]
For magnetic units, we use pole strength p1, p2 in units of electrostatic units or esu, so Equation 1.5 becomes

F = p1p2/r².

Force in cgs is in units of dynes (dyn), so

1 dyn = 1 g⋅cm⋅s⁻².
A magnetic pole, like an isolated electric charge, would create a magnetic induction μoH in the space around it. One oersted (Oe) is defined as the field strength that exerts a force of one dyne on a unit pole strength. The related induction (μoH) has units of gauss or G.
The relationship between force, pole and magnetic field is written as:

F = pμoH.
Returning to the lines of force idea developed for magnetic fields earlier, let us define the oersted to be the magnetic field which would produce an induction with one unit of induction per square centimeter. Imagine a sphere with a radius r surrounding the magnetic monopole. The surface area of such a sphere is 4πr2. When the sphere is a unit sphere (r = 1) and the field strength at the surface is 1 Oe, then there must be a magnetic flux of 4π units of induction passing through it.
You will have noticed the use of the permeability of free space μo in the above treatment – a parameter missing in many books and articles using the cgs units. The reason for this is that μo is unity in cgs units and simply converts oersteds (H) to gauss (B = μoH). Therefore in cgs units, B and H are used interchangeably. We inserted it in this derivation to remind us that there IS a difference and that the difference becomes very important when we convert to SI because μo is not unity, but 4π × 10⁻⁷! For conversion between commonly used cgs and SI parameters, please refer to Table 1.1.
Proceeding to the notion of magnetic moment, from a cgs point of view, we start with a magnet of length l with two poles of strength p at each end. Placing the magnet in a field μoH, we find that it experiences a torque Γ proportional to p, l and H such that

Γ = plμoH sinθ,

where θ is the angle between the axis of the magnet and the field. Recalling our earlier discussion of magnetic moment, you will realize that pl is simply the magnetic moment m. This line of reasoning also makes clear why it is called a “moment”. The units of torque are energy, which are ergs in cgs, so the units of magnetic moment are technically erg per gauss. But because of the “silent” μo in cgs, magnetic moment is most often defined as erg per oersted. We therefore follow convention and define the “electromagnetic unit” (emu) as being one erg⋅Oe⁻¹. [Some use emu to refer to the magnetization (volume normalized moment, see above), but this is incorrect and a source of a lot of confusion.]
Table 1.1: Conversion between SI and cgs units.

|Parameter|SI unit|cgs unit|Conversion|
|---|---|---|---|
|Magnetic moment (m)|Am²|emu|1 Am² = 10³ emu|
|Magnetization: by volume (M)|Am⁻¹|emu cm⁻³|1 Am⁻¹ = 10⁻³ emu cm⁻³|
|Magnetization: by mass (Ω)|Am²kg⁻¹|emu g⁻¹|1 Am²kg⁻¹ = 1 emu g⁻¹|
|Magnetic field (H)|Am⁻¹|oersted (Oe)|1 Am⁻¹ = 4π × 10⁻³ Oe|
|Magnetic induction (B)|T|gauss (G)|1 T = 10⁴ G|
|Permeability of free space (μo)|Hm⁻¹|1|4π × 10⁻⁷ Hm⁻¹ = 1|
|Susceptibility: total (K = m/H)|m³|emu Oe⁻¹|1 m³ = 10⁶/4π emu Oe⁻¹|
|Susceptibility: by volume (χ = M/H)|–|emu cm⁻³ Oe⁻¹|1 (SI) = 1/4π emu cm⁻³ Oe⁻¹|
|Susceptibility: by mass (κ = m/(mass⋅H))|m³kg⁻¹|emu g⁻¹ Oe⁻¹|1 m³kg⁻¹ = 10³/4π emu g⁻¹ Oe⁻¹|

1 H = kg m²A⁻²s⁻²; 1 emu = 1 G cm³; B = μoH (in vacuum); 1 T = kg A⁻¹s⁻²
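A few of the conversions in Table 1.1 can be wrapped in small helper functions. This is a sketch with our own function names; PmagPy provides its own conversion utilities:

```python
import math

def moment_si_to_cgs(m_si):
    """Moment: A m^2 -> emu (1 A m^2 = 10^3 emu)."""
    return m_si * 1e3

def induction_si_to_cgs(b_si):
    """Induction: tesla -> gauss (1 T = 10^4 G)."""
    return b_si * 1e4

def field_si_to_cgs(h_si):
    """Field: A/m -> oersted (1 A/m = 4 pi x 10^-3 Oe)."""
    return h_si * 4 * math.pi * 1e-3

# Earth's surface induction of ~45 microtesla is ~0.45 G:
b_gauss = induction_si_to_cgs(45e-6)
```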
An isolated electrical charge produces an electrical field that begins at the source (the charge) and spreads (diverges) outward (see Figure 1.6a). Because there is no return flux to an oppositely charged “sink”, there is a net flux out of the dashed box shown in the figure. The divergence of the electrical field is defined as ∇⋅ E, which quantifies the net flux (see Appendix A.3.6 for more). In the case of the field around an electric charge, the divergence is non-zero.
Magnetic fields are different from electrical fields in that there is no equivalent to an isolated electrical charge; there are only pairs of “opposite charges” – magnetic dipoles. Therefore, any line of flux starting at one magnetic pole, returns to its sister pole and there is no net flux out of the box shown in Figure 1.6b; the magnetic field has no divergence (Figure 1.6b). This property of magnetic fields is another of Maxwell’s equations: ∇⋅ B = 0.
In the special case away from electric currents and magnetic sources (so B = μoH), the magnetic field can be written as the gradient of a scalar field that is known as the magnetic potential, ψm, i.e.,

H = −∇ψm.
The presence of a magnetic moment m creates a magnetic field which is the gradient of some scalar field. To gain a better intuitive feel about the relationship between scalar fields and their gradient vector fields, see Appendix A.3.6. Because the divergence of the magnetic field is zero, by definition, the divergence of the gradient of the scalar field is also zero, or ∇2ψm = 0. The operator ∇2 is called the Laplacian and ∇2ψm = 0 is Laplace’s equation. This will be the starting point for spherical harmonic analysis of the geomagnetic field discussed briefly in Chapter 2.
The curl of the magnetic field (∇×H) depends on the current density and is not always zero and magnetic fields cannot generally be represented as the gradient of a scalar field. Laplace’s equation is only valid outside the magnetic sources and away from currents.
So what is this magnetic potential and how does it relate to the magnetic moments that give rise to the magnetic field? Whatever it is, it has to satisfy Laplace’s equation, so we turn to solutions of Laplace’s equation for help. One solution is to define the magnetic potential ψm as a function of the vector r with radial distance r and the angle θ from the moment. Given a dipole moment m, a solution to Laplace’s equation is:

ψm = m cosθ/(4πr²).
You can verify this by making sure that ∇²ψm = 0.
The radial (Hr) and tangential (Hθ) components of H at P (Figure 1.7) then would be:

Hr = −∂ψm/∂r = 2m cosθ/(4πr³),  Hθ = −(1/r)(∂ψm/∂θ) = m sinθ/(4πr³).
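The dipole field components can be evaluated numerically. A sketch with our own function name and illustrative, roughly Earth-like values:

```python
import math

def dipole_field(m, r, theta):
    """Radial and tangential components of H (A/m) at distance r (m)
    and angle theta (radians) from a dipole moment m (A m^2):
    Hr = 2 m cos(theta) / (4 pi r^3),
    Htheta = m sin(theta) / (4 pi r^3)."""
    Hr = 2.0 * m * math.cos(theta) / (4.0 * math.pi * r**3)
    Htheta = m * math.sin(theta) / (4.0 * math.pi * r**3)
    return Hr, Htheta

# on the dipole axis (theta = 0) the field is purely radial; the
# values here are illustrative only:
Hr, Htheta = dipole_field(8e22, 6.371e6, 0.0)
```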
Measurement and description of the geomagnetic field and its spatial and temporal variations constitute one of the oldest geophysical disciplines. However, our ability to describe the field far exceeds our understanding of its origin. All plausible theories involve generation of the geomagnetic field within the fluid outer core of the Earth by some form of magnetohydrodynamic dynamo. Attempts to solve the full mathematical complexities of magnetohydrodynamics succeeded only in 1995 (Glatzmaier and Roberts, 1995).
Quantitative treatment of magnetohydrodynamics is (mercifully) beyond the scope of this book, but we can provide a qualitative explanation. The first step is to gain some appreciation for what is meant by a self-exciting dynamo. Maxwell’s equations tell us that electric and changing magnetic fields are closely linked and can affect each other. Moving an electrical conductor through a magnetic field will cause electrons to flow, generating an electrical current. This is the principle of electrical generators. A simple electromechanical disk-dynamo model such as that shown in Figure 1.8 contains the essential elements of a self-exciting dynamo. The model is constructed with a rotating copper disk attached to an electrically conducting (e.g., brass) axle. An initial magnetic induction field, B, is perpendicular to the copper disk in an upward direction. Electrons in the copper disk experience a push from the magnetic field known as the Lorentz force, FL, when they pass through the field.
The Lorentz force is given by:

FL = qv × B,

where q is the electrical charge of the electrons, and v is their velocity. The Lorentz force on the electrons is directed toward the axle of the disk, and the resulting electrical current flow is toward the outside of the disk (Figure 1.8).
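The geometry of the Lorentz force can be checked with a small pure-Python sketch (the function names are ours):

```python
def cross(a, b):
    """Cross product of two 3-vectors."""
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def lorentz_force(q, v, B):
    """Force (newtons) on a charge q (coulombs) moving with velocity
    v (m/s, 3-vector) through an induction B (tesla, 3-vector):
    F_L = q v x B."""
    return [q * c for c in cross(v, B)]

# an electron (q < 0) moving along +x through a field along +z
# is pushed along +y:
F = lorentz_force(-1.602e-19, [1e5, 0.0, 0.0], [0.0, 0.0, 1e-4])
```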
Brush connectors are used to tap the electrical current from the disk, and the current passes through a coil under the disk. This coil is cleverly wound so that the electrical current produces a magnetic induction field in the same direction as the original field. The electrical circuit is a positive feedback system that reinforces the original magnetic induction field. The entire disk-dynamo model is a self-exciting dynamo. As long as the disk keeps rotating, the electrical current will flow, and the magnetic field will be sustained even if the original field disappears.
With this simple model we encounter the essential elements of any self-exciting dynamo:
More complicated setups using two disks whose fields interact with one another generate chaotic magnetic behavior that can switch polarities even if the mechanical motion remains steady. Certainly no one proposes that systems of disks and feedback coils exist in the Earth’s core. But interaction between the magnetic field and the electrically conducting iron-nickel alloy in the outer core can produce a positive feedback and allow the Earth’s core to operate as a self-exciting magnetohydrodynamic dynamo. For reasonable electrical conductivities, fluid viscosity, and plausible convective fluid motions in the Earth’s outer core, the fluid motions can regenerate the magnetic field that is lost through electrical resistivity. There is a balance between fluid motions regenerating the magnetic field and loss of magnetic field because of electrical resistivity. The dominant portion of the geomagnetic field detectable at the surface is essentially dipolar with the axis of the dipole nearly parallel to the rotational axis of the Earth. Rotation of the Earth must therefore be a controlling factor on the time-averaged fluid motions in the outer core. It should also be pointed out that the magnetohydrodynamic dynamo can operate in either polarity of the dipole. Thus, there is no contradiction between the observation of reversals of the geomagnetic dipole and magnetohydrodynamic generation of the geomagnetic field. However, understanding the special interactions of fluid motions and magnetic field that produce geomagnetic reversals is a major challenge.
As wise economists have long observed, there is no free lunch. The geomagnetic field is no exception. Because of ohmic dissipation of energy, there is a requirement for energy input to drive the magnetohydrodynamic fluid motions and thereby sustain the geomagnetic field. Estimates of the power (energy per unit time) required to generate the geomagnetic field are about 10¹³ W (roughly the output of 10⁴ nuclear power plants). This is about one fourth of the total geothermal flux, so the energy involved in generation of the geomagnetic field is a substantial part of the Earth’s heat budget.
Many sources of this energy have been proposed, and ideas on this topic have changed over the years. The energy sources that are currently thought to be most reasonable are a combination of cooling of the Earth’s core with attendant freezing of the outer core and growth of the solid inner core. The inner core is pure iron, while the liquid outer core is some 15% nickel (and probably has trace amounts of other elements as well). The freezing of the inner core therefore generates a buoyancy force as the remaining liquid becomes more enriched in the lighter elements. These energy sources are sufficient to power the fluid motions of the outer core required to generate the geomagnetic field.
SUPPLEMENTAL READINGS: Jiles (1991), Chapter 1; Cullity (1972), Chapter 1.
In axisymmetric spherical coordinates, ∇ (the gradient operator) is given by
We also know that
and that ψm is a scalar function of position:
Find the radial and tangential components of H if m is 80 ZAm² [remember that “Z” stands for Zetta, which stands for 10²¹], r is 6 × 10⁶ m and θ is 45∘. What are these field values in terms of B (teslas)?
Write your answers in a markdown cell in a Jupyter notebook using LaTeX syntax.
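As a sketch of the setup (not a substitute for your own notebook), the radial and tangential dipole field components from Chapter 1, Hr = 2m cos θ∕(4πr³) and Hθ = m sin θ∕(4πr³), can be evaluated numerically. The function name dipole_H is illustrative, not part of PmagPy:

```python
import numpy as np

MU0 = 4 * np.pi * 1e-7  # permeability of free space (T m / A)

def dipole_H(m, r, theta_deg):
    """Radial and tangential components of H (A/m) for a dipole of
    moment m (A m^2), at radius r (m) and co-latitude theta (degrees)."""
    theta = np.radians(theta_deg)
    Hr = 2.0 * m * np.cos(theta) / (4.0 * np.pi * r**3)
    Htheta = m * np.sin(theta) / (4.0 * np.pi * r**3)
    return Hr, Htheta

# the values given in the problem: m = 80 ZAm^2, r = 6 x 10^6 m, theta = 45 deg
Hr, Htheta = dipole_H(80e21, 6e6, 45.0)
Br, Btheta = MU0 * Hr, MU0 * Htheta  # equivalent induction values in tesla
```

Multiplying H by μ0 gives the field values in terms of B (teslas), as the problem asks.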
a) In your Jupyter notebook, write Python functions to convert induction, moment and magnetic field quantities in cgs units to SI units. Use the conversion factors in Table 1.1. Use your function to convert the following from cgs to SI:
i) B = 3.5 × 10⁵ G
ii) m = 2.78 × 10⁻²⁰ G cm³
iii) H = 128 Oe
b) In a new code block, modify your function to allow conversion from cgs => SI or SI => cgs. Rerun it to convert your answers from a) back to cgs.
HINTS: Call the functions with the values of B, m and H and have the function return the converted values. In the modified functions, you can specify whether the conversion is from cgs or SI.
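A minimal sketch of such a conversion function, assuming the standard factors (1 G = 10⁻⁴ T; 1 G cm³ = 10⁻³ Am²; 1 Oe = 10³∕4π Am⁻¹); the name convert is illustrative:

```python
import numpy as np

# conversion factors (cgs -> SI), as in Table 1.1:
#   induction B:       1 G = 1e-4 T
#   moment m:          1 emu (G cm^3) = 1e-3 A m^2
#   magnetic field H:  1 Oe = 1e3 / (4 pi) A/m
FACTORS = {'B': 1e-4, 'm': 1e-3, 'H': 1e3 / (4 * np.pi)}

def convert(value, quantity, direction='cgs2si'):
    """Convert B, m, or H between cgs and SI units in either direction."""
    factor = FACTORS[quantity]
    return value * factor if direction == 'cgs2si' else value / factor
```

Passing direction='si2cgs' divides by the same factor, so converting a value and converting it back returns the original number.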
Figure 1.9 shows a meridional cross section through the Earth in the plane of a magnetic dipole source m. At the location directly above the dipole, the field from the dipole is directed vertically downward and has intensity 10 μT. The dipole source is placed at 3480 km from the center of the Earth. Assume a mean Earth radius of 6370 km. Adapt the geometry of Figure 1.7 and the equations describing the magnetic field of a dipole to the model dipole in Figure 1.9.
a) Calculate the magnetic dipole moment of the model dipole. Remember to keep track of your units!
b) Compare this field to the total field produced by a centered axial magnetic dipole moment (i.e., one that is pointing straight up and is in the center of the circles) equivalent to that of the present geomagnetic field (m ∼ 80 ZAm²; Z = 10²¹). Assume a latitude for the point of observation of 60∘. [HINT: the angle θ in Equation 1.10 is the co-latitude, not the latitude.]
Knowing that B = μoH, work out the fundamental units of μo in SI units. Prepare your answer in a markdown cell in your Jupyter notebook.
The part of the geomagnetic field of interest to paleomagnetists is generated by convection currents in the liquid outer core of the Earth which is composed of iron, nickel and some unknown lighter component(s). The source of energy for this convection is not known for certain, but is thought to be partly from cooling of the core and partly from the buoyancy of the iron/nickel liquid outer core caused by freezing out of the pure iron inner core. Motions of this conducting fluid are controlled by the buoyancy of the liquid, the spin of the Earth about its axis and by the interaction of the conducting fluid with the magnetic field (in a horribly non-linear fashion). Solving the equations for the fluid motions and resulting magnetic fields is a challenging computational task. Recent numerical models, however, show that such magnetohydrodynamical systems can produce self-sustaining dynamos which create enormous external magnetic fields.
The magnetic field of a dipole aligned along the spin axis and centered in the Earth (a so-called geocentric axial dipole, or GAD) is shown in Figure 2.1a. [See Chapter 1 for a derivation of how to find the radial and tangential components of such a field.] By convention, the sign of the Earth’s dipole is negative, pointing toward the south pole as shown in Figure 2.1a and magnetic field lines point toward the north pole. They point downward in the northern hemisphere and upward in the southern hemisphere.
Although dominantly dipolar, the geomagnetic field is not perfectly modeled by a geocentric axial dipole, but is somewhat more complicated (see Figure 2.1b). At the point on the surface labeled ‘P’, the geomagnetic field points nearly north and down at an angle of approximately 60∘. Vectors in three dimensions are described by three numbers and in many paleomagnetic applications, these are two angles (D and I) and the strength (B) as shown in Figure 2.1b and c. The angle from the horizontal plane is the inclination I; it is positive downward and ranges from +90∘ for straight down to -90∘ for straight up. If the geomagnetic field were that of a perfect GAD field, the horizontal component of the magnetic field (BH in Figure 2.1b) would point directly toward geographic north. In most places on the Earth there is a deflection away from geographic north and the angle between geographic and magnetic north is the declination, D (see Figure 2.1c). D is measured positive clockwise from North and ranges from 0 → 360∘. [Westward declinations can also be expressed as negative numbers, i.e., 350∘ = -10∘.] The vertical component (BV in Figure 2.1b,c) of the geomagnetic field at P, is given by
and the horizontal component BH (Figure 2.1c) by
BH can be further resolved into north and east components (BN and BE in Figure 2.1c) by
Depending on the particular problem, some coordinate systems are more suitable to use because they have the symmetry of the problem built into them. We have just defined a coordinate system using two angles and a length (B,D,I) and the equivalent Cartesian coordinates of (BN,BE,BV ). We will need to convert among them at will. There are many names for the Cartesian coordinates. In addition to north, east and down, they could also be x,y,z or even x1,x2 and x3. The convention used in this book is that axes are denoted X1,X2,X3, while the components along the axes are frequently designated x1,x2,x3. In the geographic frame of reference, positive X1 is to the north, X2 is east and X3 is vertically down in keeping with the right-hand rule. To convert from Cartesian coordinates to angular coordinates (B,D,I):
Be careful of the sign ambiguity of the tangent function. You may well end up in the wrong quadrant and have to add 180∘; this will happen if both x1 and x2 are negative. In most computer languages, there is a function atan2 which takes care of this, but most hand calculators will not. Remember that most computer languages expect angles to be given in radians, not degrees, so multiply degrees by π∕180 to convert to radians. Note also that in place of B for magnetic induction with units of tesla as a measure of vector length, (see Chapter 1), we could also use H, M ( both Am−1) or m (Am2) for magnetic field, magnetization or magnetic moment respectively.
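PmagPy supplies these conversions (pmag.dir2cart and pmag.cart2dir), but a minimal sketch of the two transformations just described, using arctan2 to resolve the quadrant ambiguity, might look like:

```python
import numpy as np

def dir2cart(D, I, B=1.0):
    """(declination, inclination, length) -> (north, east, down)."""
    D, I = np.radians(D), np.radians(I)
    return B * np.cos(I) * np.cos(D), B * np.cos(I) * np.sin(D), B * np.sin(I)

def cart2dir(x1, x2, x3):
    """(north, east, down) -> (B, D, I); arctan2 resolves the quadrant."""
    B = np.sqrt(x1**2 + x2**2 + x3**2)
    D = np.degrees(np.arctan2(x2, x1)) % 360.0  # declination in 0..360
    I = np.degrees(np.arcsin(x3 / B))
    return B, D, I
```

A round trip through both functions should return the direction you started with, which is a handy sanity check.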
We can measure declination, inclination and intensity at different places around the globe, but not everywhere all the time. Yet it is often handy to be able to predict what these components are. For example, it is extremely useful to know what the deviation is between true North and declination in order to find our way with maps and compasses. In principle, magnetic field vectors can be derived from the magnetic potential ψm as we showed in Chapter 1. For an axial dipolar field, there is but one scalar coefficient (the magnetic moment m of a dipole source). For the geomagnetic field, there are many more coefficients, including not just an axial dipole aligned with the spin axis, but two orthogonal equatorial dipoles and a whole host of more complicated sources such as quadrupoles, octupoles and so on. A list of coefficients associated with these sources allows us to calculate the magnetic field vector anywhere outside of the source region. In this section, we outline how this might be done.
As we learned in Chapter 1, the magnetic field at the Earth’s surface can be calculated from the gradient of a scalar potential field (H = −∇ψm), and this scalar potential field satisfies Laplace’s Equation:
For the geomagnetic field (ignoring external sources of the magnetic field which are in any case small and transient), the potential equation can be written as:
where a is the radius of the Earth (6.371 × 10⁶ m). In addition to the radial distance r and the angle away from the pole θ, there is ϕ, the angle around the equator from some reference, say, the Greenwich meridian. Here, θ is the co-latitude and ϕ is the longitude. The gₗᵐ and hₗᵐ are the gauss coefficients (degree l and order m) for hypothetical sources at radii less than a, calculated for a particular year. These are normally given in units of nT. The Pₗᵐ are wiggly functions called partially normalized Schmidt polynomials of the argument cos θ. These are closely related to the associated Legendre polynomials. [When m = 0 the Schmidt and Legendre polynomials are identical.] The first few Pₗᵐ are:
To get an idea of how the gauss coefficients in the potential relate to the associated magnetic fields, we show three examples in Figure 2.3. We plot the inclinations of the vector fields that would be produced by the terms with g₁⁰, g₂⁰ and g₃⁰ respectively. These are the axial (m = 0) dipole (l = 1), quadrupole (l = 2) and octupole (l = 3) terms. The associated potentials for each harmonic are shown in the insets.
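Because for m = 0 the Schmidt and Legendre polynomials are identical, the axial terms behind these maps can be evaluated with any Legendre routine; a sketch using NumPy (the function name schmidt_m0 is ours):

```python
import numpy as np
from numpy.polynomial import legendre as npleg

def schmidt_m0(l, theta_deg):
    """Evaluate the axial (m = 0) Schmidt polynomial P_l^0(cos theta),
    which equals the ordinary Legendre polynomial P_l for m = 0."""
    coeffs = np.zeros(l + 1)
    coeffs[l] = 1.0  # select the single Legendre term of degree l
    return npleg.legval(np.cos(np.radians(theta_deg)), coeffs)
```

For example, the degree-2 (quadrupole) polynomial is ½(3 cos²θ − 1), which is −½ at the equator (θ = 90∘) and +1 at the poles, matching the symmetric pattern described below.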
In general, terms for which the difference between the subscript (l) and the superscript (m) is odd (e.g., the axial dipole g₁⁰ and octupole g₃⁰) produce magnetic fields that are antisymmetric about the equator, while those for which the difference is even (e.g., the axial quadrupole g₂⁰) have symmetric fields. In Figure 2.3a we show the inclinations produced by a purely dipolar field of the same sign as the present day field. The inclinations are all positive (down) in the northern hemisphere and negative (up) in the southern hemisphere. In contrast, inclinations produced by a purely quadrupolar field (Figure 2.3b) are down at the poles and up at the equator. The map of inclinations produced by a purely axial octupolar field (Figure 2.3c) is again antisymmetric about the equator, with vertical directions of opposite signs at the poles separated by bands of the opposite sign at mid-latitudes.
As noted before, there is not one but three dipole terms in Equation 2.6: the axial term (g₁⁰) and two equatorial terms (g₁¹ and h₁¹). Therefore, the total dipole contribution is the vector sum of these three, with magnitude √((g₁⁰)² + (g₁¹)² + (h₁¹)²). The total quadrupole contribution (l = 2) combines five coefficients and the total octupole (l = 3) contribution combines seven coefficients.
So how do we get this marvelous list of gauss coefficients? If you want to know the details, please refer to Langel (1987). We will just give a brief introduction here. Recalling Chapter 1, once the scalar potential ψm is known, the components of the magnetic field can be calculated from it. We solved this for the radial and tangential field components (Hr and Hθ) in Chapter 1. We will now change coordinate and unit systems and introduce a third dimension (because the field is not perfectly dipolar). The north, east, and vertically down components are related to the potential ψm by:
where r, θ, ϕ are radius, co-latitude (degrees away from the North pole) and longitude, respectively. Here, BV is positive down, BE is positive east, and BN is positive to the north, the opposite of Hr and Hθ as defined in Chapter 1. Note that Equation 2.7 is in units of induction, not Am−1 if the units for the gauss coefficients are in nT, as is the current practice.
Going backwards, the gauss coefficients are determined by fitting Equations 2.7 and 2.6 to observations of the magnetic field made by magnetic observatories or satellites for a particular time. The International (or Definitive) Geomagnetic Reference Field or I(D)GRF, for a given time interval is an agreed upon set of values for a number of gauss coefficients and their time derivatives. IGRF (or DGRF) models and programs for calculating various components of the magnetic field are available on the internet from the National Geophysical Data Center; the address is http://www.ngdc.noaa.gov. There is also a program igrf.py included in the PmagPy package (see igrf.py documentation).
In practice, the gauss coefficients for a particular reference field are estimated by least-squares fitting of observations of the geomagnetic field. You need a minimum of 48 observations to estimate the coefficients to l = 6. Nowadays, we have satellites which give us thousands of measurements and the list of generation 10 of the IGRF for 2005 goes to l = 13.
[Table 2.1: IGRF gauss coefficients; columns are l, m, g (nT) and h (nT). Data not reproduced here.]
In order to get a feel for the importance of the various gauss coefficients, take a look at Table 2.1, which has the Schmidt quasi-normalized gauss coefficients for the first six degrees from the IGRF for 2005. The power at each degree is the average squared field per spherical harmonic degree over the Earth’s surface and is calculated by Rₗ = (l + 1)∑ₘ[(gₗᵐ)² + (hₗᵐ)²] (Lowes, 1974). The so-called Lowes spectrum is shown in Figure 2.4. It is clear that the lowest order terms (degree one) totally dominate, constituting some 90% of the field. This is why the geomagnetic field is often assumed to be equivalent to a magnetic field created by a simple dipole at the center of the Earth.
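The power calculation is a one-liner; a sketch (lowes_power is an illustrative name, and in practice the coefficient lists would come from Table 2.1):

```python
import numpy as np

def lowes_power(l, g, h):
    """Lowes power R_l = (l + 1) * sum over m of [(g_l^m)^2 + (h_l^m)^2].

    g holds g_l^0 ... g_l^l and h holds h_l^1 ... h_l^l, in nT,
    so R_l comes out in nT^2."""
    return (l + 1) * (np.sum(np.square(g)) + np.sum(np.square(h)))
```

Evaluating this for l = 1 through 6 with the Table 2.1 coefficients reproduces the spectrum of Figure 2.4, with degree one dominating.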
The beauty of using the geomagnetic potential field is that the vector field can be evaluated anywhere outside the source region. Using the values for a given reference field in Equations 2.6 and 2.7, we can calculate values of B,D and I at any location on Earth. Figure 2.1b shows the lines of flux predicted from the 2005 IGRF from the core-mantle boundary up. We can see that the field becomes simpler and more dipolar as we move from the core mantle boundary to the surface. Yet, there is still significant non-dipolar structure in the geomagnetic field even at the Earth’s surface.
We can recast the vectors at the surface of the Earth into maps of components as shown in Figure 2.5a,b. We show the potential in Figure 2.5c for comparison with that of a pure dipole (inset to Figure 2.3a). These maps illustrate the fact that the field is a complicated function of position on the surface of the Earth. The intensity values in Figure 2.5a are, in general, highest near the poles (∼ 60 μT) and lowest near the equator (∼ 30 μT), but the contours are not straight lines parallel to latitude as they would be for a field generated strictly by a geocentric axial dipole (GAD) (e.g., Figure 2.1a). Similarly, a GAD would produce lines of inclination that vary in a regular way from -90∘ to +90∘ at the poles, with 0∘ at the equator; the contours would parallel the lines of latitude. Although the general trend in inclination shown in Figure 2.5b is similar to the GAD model, the field lines are more complicated, which again suggests that the field is not perfectly described by a geocentric bar magnet.
Perhaps the most important result of spherical harmonic analysis for our purposes is that the field at the Earth’s surface is dominated by the degree one terms (l = 1) and the external contributions are very small. The first order terms can be thought of as geocentric dipoles that are aligned with three different axes: the spin axis (g₁⁰) and two equatorial axes that intersect the equator at the Greenwich meridian (g₁¹) and at 90∘ East (h₁¹). The vector sum of these geocentric dipoles is a dipole that is currently inclined by about 10∘ to the spin axis. The axis of this best-fitting dipole pierces the surface of the Earth at the circle in Figure 2.6. This point and its antipode are called geomagnetic poles. Points at which the field is vertical (I = ±90∘, shown by a square in Figure 2.6) are called magnetic poles, or sometimes dip poles. These poles are distinguishable from the geographic poles, where the spin axis of the Earth intersects its surface. The northern geographic pole is shown by a star in Figure 2.6.
It turns out that when averaged over sufficient time, the geomagnetic field actually does seem to be approximately a GAD field, perhaps with a pinch of g₂⁰ thrown in (see e.g., Merrill et al., 1996). The GAD model of the field will serve as a useful crutch throughout our discussions of paleomagnetic data and applications. Averaging ancient magnetic poles over enough time to average out secular variation (thought to be 10⁴ or 10⁵ years) gives what is known as a paleomagnetic pole; this is usually assumed to be co-axial with the Earth’s geographic pole (the spin axis).
Because the geomagnetic field is axially dipolar to a first approximation, we can write:
Note that g₁⁰ is given in nT in Table 2.1. Thus, from Equation 2.8,
Given some latitude λ on the surface of the Earth in Figure 2.1a and using the equations for BV and BN, we find that:
This equation is sometimes called the dipole formula and shows that the inclination of the magnetic field is directly related to the co-latitude (θ) for a field produced by a geocentric axial dipole (or g₁⁰). The dipole formula allows us to calculate the latitude of the measuring position from the inclination of the (GAD) magnetic field, a result that is fundamental in plate tectonic reconstructions. The intensity of a dipolar magnetic field is also related to (co)latitude because:
The dipole field intensity has changed by more than an order of magnitude in the past, so the dipole relationship of intensity to latitude turns out not to be useful for tectonic reconstructions.
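The dipole formula, tan I = 2 tan λ, is easy to encode for use in both directions; a minimal sketch (function names are illustrative):

```python
import numpy as np

def inc_from_lat(lat_deg):
    """GAD inclination expected at latitude lat: tan(I) = 2 tan(lambda)."""
    return np.degrees(np.arctan(2.0 * np.tan(np.radians(lat_deg))))

def lat_from_inc(inc_deg):
    """Invert the dipole formula: latitude from a (GAD) inclination."""
    return np.degrees(np.arctan(np.tan(np.radians(inc_deg)) / 2.0))
```

For example, a site at 45∘N should record an inclination of about 63∘ under a GAD field.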
Magnetic field and magnetization directions can be visualized as unit vectors anchored at the center of a unit sphere. Such a unit sphere is difficult to represent on a 2-D page. There are several popular projections, including the Lambert equal area projection, which we will be making extensive use of in later chapters. The principles of construction of the equal area projection are covered in Appendix B.1.
In general, regions of equal area on the sphere project as equal area regions on this projection, as the name implies. Plotting directional data in this way enables rapid assessment of data scatter. A drawback of this projection is that circles on the surface of a sphere project as ellipses. Also, because we have projected a vector onto a unit sphere, we have lost information concerning the magnitude of the vector. Finally, lower and upper hemisphere projections must be distinguished with different symbols. The paleomagnetic convention is: lower hemisphere projections (downward directions) use solid symbols, while upper hemisphere projections are open.
The dipole formula allows us to convert a given measurement of I to an equivalent magnetic co-latitude θm:
If the field were a simple GAD field, θm would be a reasonable estimate of θ, but non-GAD terms can invalidate this assumption. To get a feel for the effect of these non-GAD terms, we consider first what would happen if we took random measurements of the Earth’s present field (see Figure 2.7). We evaluated the directions of the magnetic field using the IGRF for 2005 at 200 positions on the globe (shown in Figure 2.7a). These directions are plotted in Figure 2.7b using the paleomagnetic convention of open symbols pointing up and closed symbols pointing down. In Figure 2.7c, we plot the inclinations as a function of latitude. As expected from a predominantly dipolar field, inclinations cluster around the values for a geocentric axial dipolar field but there is considerable scatter and interestingly the scatter is larger in the southern hemisphere than in the northern one. This is related to the low intensities beneath South America and the Atlantic region seen in Figure 2.5a.
Often we wish to compare directions from distant parts of the globe. There is an inherent difficulty in doing so because of the large variability in inclination with latitude. In such cases it is appropriate to consider the data relative to the expected direction (from GAD) at each sampling site. For this purpose, it is useful to use a transformation whereby each direction is rotated such that the direction expected from a geocentric axial dipole field (GAD) at the sampling site is the center of the equal area projection. This is accomplished as follows:
Each direction is converted to Cartesian coordinates (xi) by:
These are rotated to the new coordinate system (x′i, see Appendix A.3.5) by:
where Id = the inclination expected from a GAD field (tan Id = 2 tan λ), λ is the site latitude, and α is the inclination of the paleofield vector projected onto the N-S plane (α = tan⁻¹(x3∕x1)). The x′i are then converted to D′,I′ by Equation 2.4.
In Figure 2.8a we show the geomagnetic field vectors evaluated at random longitudes along a latitude band of 45∘N. The vectors are shown in their Cartesian coordinates of North, East and Down. In Figure 2.8b we show what happens when we rotate the coordinate system to peer down the direction expected from an axial dipolar field at 45∘N (which has an inclination of 63∘). The vectors circle about the expected direction. Finally, we see what happens to the directions shown in Figure 2.7b after the D′,I′ transformation in Figure 2.8. These are unit vectors projected along the expected direction for each observation in Figure 2.7a. Comparing the equal area projection of the directions themselves (Figure 2.7b) to the transformed directions (Figure 2.8c), we see that the latitudinal dependence of the inclinations has been removed.
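One simplified way to sketch such a transformation is a single rotation about the East axis that carries the expected GAD direction at the site to the vertical (the per-vector transformation via α described above differs in detail; the function name di_transform is ours):

```python
import numpy as np

def di_transform(D, I, lat):
    """Center a direction (D, I) on the GAD-expected direction at `lat`
    by a single rotation about the East (x2) axis (degrees in and out)."""
    Id = np.arctan(2.0 * np.tan(np.radians(lat)))  # expected GAD inclination
    a = np.pi / 2.0 - Id              # rotation sending (D=0, I=Id) to vertical
    D, I = np.radians(D), np.radians(I)
    x1, x2, x3 = np.cos(I) * np.cos(D), np.cos(I) * np.sin(D), np.sin(I)
    x1p = x1 * np.cos(a) - x3 * np.sin(a)
    x2p = x2
    x3p = x1 * np.sin(a) + x3 * np.cos(a)
    Dp = np.degrees(np.arctan2(x2p, x1p)) % 360.0
    Ip = np.degrees(np.arcsin(np.clip(x3p, -1.0, 1.0)))  # clip guards rounding
    return Dp, Ip
```

A direction exactly matching the expected GAD direction at the site plots at the center (I′ = 90∘) after the transformation.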
We are often interested in whether the geomagnetic pole has changed, or whether a particular piece of crust has rotated with respect to the geomagnetic pole. Yet, what we observe at a particular location is the local direction of the field vector. Thus, we need a way to transform an observed direction into the equivalent geomagnetic pole.
In order to remove the dependence of direction merely on position on the globe, we imagine a geocentric dipole which would give rise to the observed magnetic field direction at a given latitude (λ) and longitude (ϕ). The virtual geomagnetic pole (VGP) is the point on the globe that corresponds to the geomagnetic pole of this imaginary dipole (Figure 2.9a).
Paleomagnetists use the following conventions: ϕ is measured positive eastward from the Greenwich meridian and ranges from 0 → 360∘; θ is measured from the North pole and goes from 0 → 180∘. Of course θ relates to latitude λ by θ = 90∘ − λ. θm is the magnetic co-latitude and is given by Equation 2.12. Be sure not to confuse latitudes and co-latitudes. Also, be careful with declination. Declinations between 180∘ and 360∘ are equivalent to D − 360∘, which are counter-clockwise with respect to North.
The first step in calculating a VGP is to determine the magnetic co-latitude θm, defined by the dipole formula (Equation 2.12). The declination D is the angle from the geographic North Pole to the great circle joining the observation site S and the pole P, and Δϕ is the difference in longitudes between P and S, ϕp − ϕs. Now we use some tricks from spherical trigonometry, as reviewed in Appendix A.3.1.
We can locate VGPs using the law of sines and the law of cosines. The declination D is the angle from the geographic North Pole to the great circle joining S and P (see Figure 2.9) so:
which allows us to calculate the VGP co-latitude θp. The VGP latitude is given by:
To determine ϕp, we first calculate the angular difference between the pole and site longitude Δϕ.
If cosθm ≥ cosθs cosθp, then ϕp = ϕs + Δϕ. However, if cosθm < cosθs cosθp then ϕp = ϕs + 180 − Δϕ.
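The steps above can be sketched as a single function (the name vgp is ours; PmagPy’s pmag.dia_vgp provides a full implementation):

```python
import numpy as np

def vgp(D, I, slat, slon):
    """Virtual geomagnetic pole (lat, lon in degrees) from a direction
    (D, I) observed at site latitude slat and longitude slon."""
    D, I = np.radians(D), np.radians(I)
    slat, slon = np.radians(slat), np.radians(slon)
    # magnetic co-latitude from the dipole formula: tan(I) = 2 cot(theta_m)
    p = np.arctan2(2.0, np.tan(I))
    # law of cosines for the pole latitude
    plat = np.arcsin(np.sin(slat) * np.cos(p)
                     + np.cos(slat) * np.sin(p) * np.cos(D))
    # law of sines for the longitude difference between pole and site
    beta = np.arcsin(np.sin(p) * np.sin(D) / np.cos(plat))
    if np.cos(p) >= np.sin(slat) * np.sin(plat):
        plon = slon + beta
    else:
        plon = slon + np.pi - beta
    return np.degrees(plat), np.degrees(plon) % 360.0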
Now we can convert the directions in Figure 2.7b to VGPs (see Figure 2.9c). The grouping of points is much tighter in Figure 2.9c than in the equal area projection because the effect of latitude variations in dipole fields has been removed. If a number of VGPs are averaged together, the average pole position is called a “paleomagnetic pole”. How to average poles and directions is the subject of Chapters 11 and 12.
The procedure for calculating a direction from a VGP is similar to that for calculating the VGP from the direction. The magnetic co-latitude θm is calculated in exactly the same way as before and yields the inclination from the dipole formula. The declination can be calculated by solving for D in Equation 2.14 as:
This equation works most of the time, but breaks down under some circumstances, for example, when the pole latitude is further to the south than the site latitude. The following algorithm works in the more general case:
As pointed out earlier, magnetic intensity varies over the globe in a similar manner to inclination. It is often convenient to express paleointensity values in terms of the equivalent geocentric dipole moment that would have produced the observed intensity at a specific (paleo)latitude. Such an equivalent moment is called the virtual dipole moment (VDM) by analogy to the VGP (see Figure 2.9a). First, the magnetic (paleo)co-latitude θm is calculated as before from the observed inclination and the dipole formula of Equation 2.10. Then, following the derivation of Equation 2.11, we have
Sometimes the site co-latitude as opposed to magnetic co-latitude is used in the above equation, giving a virtual axial dipole moment (VADM; see Figure 2.9d).
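A sketch of the VDM calculation, assuming the standard dipole intensity relation B = (μ0 m∕4πr³)(1 + 3 cos² θm)^1∕2 solved for m (the function name vdm is ours):

```python
import numpy as np

MU0 = 4 * np.pi * 1e-7  # permeability of free space (T m / A)

def vdm(B, inc_deg, r=6.371e6):
    """Virtual dipole moment (A m^2) from a paleointensity B (tesla) and
    an inclination (degrees); theta_m comes from the dipole formula."""
    theta_m = np.arctan2(2.0, np.tan(np.radians(inc_deg)))
    return (4.0 * np.pi * r**3 / MU0) * B / np.sqrt(
        1.0 + 3.0 * np.cos(theta_m)**2)
```

Using the site co-latitude in place of θm instead would give the corresponding virtual axial dipole moment (VADM). An equatorial paleointensity of a few tens of μT yields a moment of order 10²² Am², comparable to the present field.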
SUPPLEMENTAL READINGS: Merrill et al. (1996), Chapters 1 & 2
For this and future problem sets, you will need the PmagPy package (see section in the Preface at the beginning of the book). After you have installed this and properly set your path, you can import the functions from PmagPy using these commands:
Please consult the Jupyter notebook PmagPy.ipynb for more help on using PmagPy functions within a notebook.
a) Write a Python script in a Jupyter notebook that converts declination, inclination and intensity to North, East, and Down. Read in the data in the file Chapter_2/ps2_prob1_data.txt. For this, the loadtxt function in the NumPy module will come in handy.
b) Choose 10 random spots on the surface of the Earth. You can use the pmag.get_unf function to generate a list for you. Then use the ipmag.igrf function to evaluate the declination, inclination and intensity at each of these locations in January 2006. As with all PmagPy programs and functions, you can find out what they do by printing out the doc string (help message):
Calls like these generate help messages which will help you to call the function properly.
c) Take the vectors from the output of Problem 1b and convert them to Cartesian coordinates, using the script you wrote in Problem 1a.
a) Plot the IGRF directions from Problem 1b on an equal area projection by hand. Use the equal area net provided in the Appendix. Remember that the outer rim is horizontal and the center of the diagram is vertical. Azimuth goes around the rim with clockwise being positive. Put a thumbtack through the equal area (Schmidt) net and place a piece of tracing paper on the thumbtack. Mark the top of the stereonet with a tick mark on the tracing paper.
To plot a direction, rotate the tick mark of the tracing paper around counter clockwise until the top of the paper is rotated by the declination of the direction. Then count tick marks toward the center from the outer rim (the horizontal) to the inclination angle, plot the point, and rotate back so that the tick is North again. Put all your points on the diagram.
b) Now use the ipmag functions plot_net and plot_di, or write your own! Both plots should look the same....
You went to Wyoming (112∘ W and 36∘ N) to sample some Cretaceous rocks. You measured a direction with a declination of 345∘ and an inclination of 47∘.
a) What direction would you expect from the present (GAD) field?
b) What is the virtual geomagnetic pole position corresponding to the direction you actually measured? [Hint: Use the function pmag.dia_vgp in the PmagPy module or for a challenge, write your own! ]
Scientists in the late 19th century thought that it might be possible to exploit the magnetic record retained in accidental records to study the geomagnetic field in the past. Work in the mid 20th century provided the theoretical and experimental basis for presuming that such materials might retain a record of past geomagnetic fields. There are several books and articles that describe the subject in detail (see e.g., the supplemental readings). We present here a brief overview of theories on how rocks get and stay magnetized. We will begin with magnetism at the atomic level caused by electronic orbits and spins giving rise to induced magnetizations. Then we will see how electronic spins working in concert give rise to permanently magnetized substances (like magnetic minerals) making remanent magnetization possible.
We learned in Chapter 1 that magnetic fields are generated by electric currents. Given that there are no wires leading into or out of permanent magnets, you may well ask, “Where are the currents?” At the atomic level, the electric currents come from the motions of the electrons. From here quantum mechanics quickly gets esoteric, but some rudimentary understanding is helpful. In this chapter we will cover the bare minimum necessary to grasp the essentials of rock magnetism.
In Chapter 1 we took the classical (pre-quantum mechanics) approach and suggested that the orbit of an electron about the nucleus could be considered a tiny electric current with a correspondingly tiny magnetic moment. But quantum physics tells us that this “planetary” view of the atom cannot be true. An electron zipping around a nucleus would generate radio waves, lose energy, and eventually crash into the nucleus.
Apparently, this does not happen, so the classical approach is fatally flawed and we must turn to quantum mechanics.
In quantum mechanics, electronic motion is stabilized by the fact that electrons can only have certain energy states; they are quantized. The energy of a given electron can be described in terms of solutions, Ψ, to something called Schrödinger’s wave equation. The function Ψ(r,θ,ϕ) gives the probability of finding an electron at a given position. [Remember from Chapter 2 that r,θ,ϕ are the three spherical coordinates.] It depends on three special quantum numbers (n,l,m):
The number n is the so-called “principal” quantum number. The Rnl(r) are functions specific to the element in question and the energy state of the electron n. It is evaluated at an effective radius r in atomic units. The Y lm are a fully normalized complex representation of the spherical harmonics introduced in Section 2.2. For each level n, the number l ranges from 0 to n-1 and m from l backwards to −l.
The lowest energy of the quantum wave equations is found by setting n equal to unity and both l and m to zero. Under these conditions, the solution to the wave equation is given by:
where Z is the atomic number and ρ is 2Zr∕n. Note that at this energy level, there is no dependence of Y on ϕ or θ. Substituting these two equations into Equation 3.1 gives the probability density Ψ for an electron as a function of radius r. This is sketched as the line in Figure 3.1. Another representation of the same idea is shown in the inset, whereby the density of dots at a given radius reflects the probability distribution shown by the solid curve. The highest dot density is found at a radius of about one atomic unit, tapering off farther away from the center of the atom. Because there is no dependence on θ or ϕ, the probability distribution is a spherical shell. All the l,m = 0 shells are spherical and are often referred to as the 1s, 2s, 3s shells, where the numbers are the energy levels n. A surface of equal probability is a sphere, and an example of one such shell is shown in Figure 3.2a.
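The claim that the dot density peaks near one atomic unit can be checked numerically: for the 1s state the radial probability density is proportional to r² e⁻²ʳ (in atomic units), which is maximized at r = 1. A quick sketch:

```python
import numpy as np

# Radial probability density for the hydrogen 1s state in atomic units:
# P(r) is proportional to r^2 * exp(-2r), which peaks at r = 1 (one Bohr radius)
r = np.linspace(0.01, 10.0, 10000)
P = r**2 * np.exp(-2.0 * r)
r_peak = r[np.argmax(P)]
```

Setting dP∕dr = 0 analytically gives 2r e⁻²ʳ(1 − r) = 0, so the maximum is exactly at r = 1, in agreement with the numerical peak.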
For l = 1, m will have values of −1, 0 and 1 and the Y_l^m(θ,ϕ) are given by:
As might be expected, the shells for l = 2 are even more complicated than for l = 1. These shells are called “d” shells and two examples are shown in Figure 3.2c and d.
Returning to the tiny circuit idea, somehow the motion of the electrons in their shells acts like an electronic circuit and creates a magnetic moment. In quantum mechanics, the angular momentum vector of the electron L is quantized, for example as integer multiples of ℏ, the “reduced” Planck’s constant (h∕2π, where h = 6.63 x 10−34 Js). The magnetic moment arising from the orbital angular momentum is given by:
This is known as the Bohr magneton.
So far we have not mentioned one last quantum number, s. This is the “spin” of the electron and has a value of ±1∕2. The spin itself produces a magnetic moment which is given by 2s mb, hence is numerically identical to that produced by the orbit.
Atoms have the same number of electrons as protons in order to preserve charge balance. Hydrogen has but one lonely electron which in its lowest energy state sits in the 1s electronic shell. Helium has a happy pair, so where does the second electron go? To fill in their electronic shells, atoms follow three rules:
Each unpaired spin has a moment of one Bohr magneton mb. The elements with the most unpaired spins are the transition elements, which are responsible for most of the paramagnetic behavior observed in rocks. For example, in Figure 3.3 we see that Mn has a structure of (1s²2s²2p⁶3s²3p⁶)3d⁵4s², hence has five unpaired spins and a net moment of 5 mb. Fe has a structure of (1s²2s²2p⁶3s²3p⁶)3d⁶4s² with a net moment of 4 mb. In minerals, the transition elements are in a variety of oxidation states. Fe commonly occurs as Fe2+ and Fe3+. When losing electrons to form ions, transition metals lose the 4s electrons first, so we have, for example, Fe3+ with a structure of (1s²2s²2p⁶3s²3p⁶)3d⁵, or 5 mb. Similarly Fe2+ has 4 mb and Ti4+ has no unpaired spins. Iron is the main magnetic species in geological materials, but Mn2+ (5 mb) and Cr3+ (3 mb) occur in trace amounts.
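The bookkeeping above can be sketched in a few lines. Hund’s rule fills the five 3d orbitals singly before pairing, so the number of unpaired spins for a 3d ion with n d-electrons is n for n ≤ 5 and 10 − n otherwise; the d-electron counts below follow the configurations given in the text:

```python
# Net moment (in Bohr magnetons) from unpaired 3d spins: the five 3d
# orbitals fill singly before pairing (Hund's rule).
def unpaired_3d(n_d):
    return n_d if n_d <= 5 else 10 - n_d

# Ion -> number of 3d electrons, from the configurations in the text.
d_electrons = {"Fe3+": 5, "Fe2+": 6, "Mn2+": 5, "Cr3+": 3, "Ti4+": 0}
moments = {ion: unpaired_3d(n) for ion, n in d_electrons.items()}
print(moments)  # {'Fe3+': 5, 'Fe2+': 4, 'Mn2+': 5, 'Cr3+': 3, 'Ti4+': 0}
```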
We have learned that there are two sources of magnetic moments in electronic motions: the orbits and the (unpaired) spins. These moments respond to external magnetic fields giving rise to an induced magnetization, a phenomenon alluded to briefly in Chapter 1. We will consider first the contribution of the electronic orbits.
The angular momentum of electrons is quantized in magnitude but also has direction (see L in Figure 3.4). The angular momentum vector has an associated magnetic moment vector mb. A magnetic field H exerts a torque on the moment, which nudges it (and the momentum vector associated with it) to the side (ΔL). L therefore will precess around the magnetic field direction, much like a spinning top precesses around the direction of gravity. The precession of L is called Larmor precession.
The changed momentum vector from Larmor precession in turn results in a changed magnetic moment vector Δm. The sense of the change in net moment is always to oppose the applied field. Therefore, the response of the magnetic moments of electronic orbitals creates an induced magnetization MI that is observable outside the substance; it is related to the applied field by:
We learned in Chapter 1 that the proportionality between induced magnetization and the applied field is known as the magnetic susceptibility. The ratio MI∕H for the response of the electronic orbitals is termed the diamagnetic susceptibility χd; it is negative, essentially temperature independent and quite small. This diamagnetic response is a property of all matter, but for substances whose atoms possess atomic magnetic moments, diamagnetism is swamped by effects of magnetic fields on the atomic magnetic moments. In the absence of unpaired electronic spins, diamagnetic susceptibility dominates the magnetic response. Common diamagnetic substances include quartz (SiO2), calcite (CaCO3) and water (H2O). The mass normalized susceptibility of quartz is -0.62 x 10−9 m3kg−1 to give you an idea of the magnitudes of these things.
In many geological materials, the orbital contributions cancel out because they are randomly oriented with respect to one another and the magnetization arises from the electronic spins. We mentioned that unpaired electronic spins behave as magnetic dipoles with a moment of one Bohr magneton. In the absence of an applied field, or in the absence of the ordering influence of neighboring spins which are known as exchange interactions, the electronic spins are essentially randomly oriented. An applied field acts to align the spins which creates a net magnetization equal to χpH where χp is the paramagnetic susceptibility. For any geologically relevant conditions, the induced magnetization is linearly dependent on the applied field. In paramagnetic solids, atomic magnetic moments react independently to applied magnetic fields and to thermal energy. At any temperature above absolute zero, thermal energy vibrates the crystal lattice, causing atomic magnetic moments to oscillate rapidly in random orientations. In the absence of an applied magnetic field, atomic moments are equally distributed in all directions with a resultant magnetization of zero.
A useful first order model for paramagnetism was worked out by P. Langevin in 1905. (Of course in messy reality things are a bit more complicated, but Langevin theory will work well enough for us at this stage.) Langevin theory is based on a few simple premises:
Magnetic energy is at a minimum when the magnetic moment is lined up with the magnetic field.
Consider an atomic magnetic moment, (m = 2mb = 1.85×10−23 Am2), in a magnetic field of 10−2 T (for reference, the largest geomagnetic field at the surface is about 65 μT – see Chapter 2). The aligning energy is therefore mB = 1.85 × 10−25 J. However, thermal energy at 300K (traditionally chosen as a temperature close to room temperature providing easy arithmetic) is Boltzmann’s constant times the temperature, or about 4 x 10−21 J. So thermal energy is several orders of magnitude larger than the aligning energy and the net magnetization is small even in this rather large (compared to the Earth’s field) magnetizing field.
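The comparison can be verified directly (constants are standard values; variable names are ours):

```python
# Compare the aligning energy mB with thermal energy kT for the numbers in
# the text: m = 2 Bohr magnetons in a 10 mT field, at T = 300 K.
MB = 9.27e-24        # Bohr magneton (A m^2)
K_B = 1.38e-23       # Boltzmann's constant (J/K)

m = 2 * MB           # atomic moment, ~1.85e-23 A m^2
B = 1e-2             # applied field (T)
T = 300.0            # temperature (K)

E_align = m * B      # ~1.85e-25 J
E_thermal = K_B * T  # ~4.1e-21 J
print(f"mB/kT = {E_align / E_thermal:.1e}")  # ~4.5e-05
```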
Using the principles of statistical mechanics, we find that the probability density of a particular magnetic moment having a magnetic energy of Em is given by:
From this we see that the degree of alignment depends exponentially on the ratio of magnetic energy to thermal energy. The degree of alignment with the magnetic field controls the net magnetization M. When spins are completely aligned, the substance has a saturation magnetization Ms. The probability density function leads directly to the following relation (derived in Appendix A.2.1):
where a = mB∕kT. The function enclosed in square brackets, coth a − 1∕a, is known as the Langevin function (ℒ).
Equation 3.6 is plotted in Figure 3.5a and predicts several intuitive results: 1) M = 0 when B = 0 and 2) M∕Ms = 1 when the applied magnetic field is infinite. Furthermore, M is some 90% of Ms when mB is some 10-20 times kT. When kT >> mB,ℒ(a) is approximately linear with a slope of ∼ 1∕3. At room temperature and fields up to many tesla, ℒ(a) is approximately mB∕3kT. If the moments are unpaired spins (m = mb), then the maximum magnetization possible (Ms) is given by the number of moments N, their magnitude (mb) normalized by the volume of the material v or Ms = Nmb∕v, and
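The limiting behaviors quoted above are easy to check numerically with a minimal implementation of ℒ(a) (function name ours):

```python
import math

# The Langevin function L(a) = coth(a) - 1/a, with a = mB/kT.
def langevin(a):
    return 1.0 / math.tanh(a) - 1.0 / a

# Small-a limit: L(a) ~ a/3 (the linear, paramagnetic regime)
print(langevin(0.01) / 0.01)   # ~0.3333
# a = 10: about 90% of saturation, as stated in the text
print(langevin(10.0))          # ~0.900
# Large a: M/Ms approaches saturation
print(langevin(1000.0))        # ~0.999
```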
Please note that we have neglected all deviations from isotropy including quantum mechanical effects as well as crystal shape, lattice defects, and state of stress. These complicate things a little, but to first order the treatment followed here provides a good approximation. We can rewrite the above equation as:
To first order, paramagnetic susceptibility χp is positive, larger than diamagnetism and inversely proportional to temperature. This inverse T dependence (see Figure 3.5b) is known as Curie’s law of paramagnetism. The paramagnetic susceptibility of, for example, biotite is 790 x 10−9 m3 kg−1, or about three orders of magnitude larger than quartz (and of the opposite sign!).
We have considered the simplest case here in which χ can be treated as a scalar and is referred to as the bulk magnetic susceptibility χb. In detail, magnetic susceptibility can be quite complicated. The relationship between induced magnetization and applied field can be affected by crystal shape, lattice structure, dislocation density, state of stress, etc., which give rise to possible anisotropy of the susceptibility. Furthermore, there are only a finite number of electronic moments within a given volume. When these are fully aligned, the magnetization reaches saturation. Thus, magnetic susceptibility is both anisotropic and non-linear with applied field.
Some substances give rise to a magnetic field in the absence of an applied field. This magnetization is called remanent or spontaneous magnetization, also loosely known as ferromagnetism (sensu lato). Magnetic remanence is caused by strong interactions between neighboring spins that occur in certain crystals.
The so-called exchange energy is minimized when the spins are aligned parallel or anti-parallel depending on the details of the crystal structure. Exchange energy is a consequence of the Pauli exclusion principle (no two electrons can have the same set of quantum numbers). In the transition elements, the 3d orbital is particularly susceptible to exchange interactions because of its shape and the prevalence of unpaired spins, so remanence is characteristic of certain crystals containing transition elements with unfilled 3d orbitals.
In oxides, oxygen can form a bridge between neighboring cations which are otherwise too far apart for direct overlap of the 3d orbitals in a phenomenon known as superexchange. In Figure 3.6 the 2p electrons of the oxygen are shared with the neighboring 3d shells of the iron ions. Pauli’s exclusion principle means that the shared electrons must be antiparallel to each of the electrons in the 3d shells. The result is that the two cations are coupled. In the case shown in Figure 3.6 there is an Fe2+ ion coupled antiparallel to an Fe3+ ion. For two ions with the same charge, the coupling will be parallel. Exchange energies are huge, equivalent to the energy associated with the same moment in a field of the order of 1000 T. [The largest field available in the Scripps paleomagnetic laboratory is about 2.5 T, and that only fleetingly.]
As temperature increases, crystals expand and exchange becomes weaker. Above a temperature characteristic of each crystal type (known as the Curie temperature Tc), cooperative spin behavior disappears entirely and the material becomes paramagnetic.
While the phenomenon of ferromagnetism results from complicated interactions of neighboring spins, it is useful to think of the ferromagnetic moment as resulting from a quasi-paramagnetic response to a huge internal field. This imaginary field is termed the Weiss molecular field Hw. In Weiss theory, Hw is proportional to the magnetization of the substance, i.e.,
where β is the constant of proportionality. The total magnetic field that the substance experiences is:
where H is the external field. By analogy to paramagnetism, we can substitute a = μombHtot∕kT into the Langevin function:
For temperatures above the Curie temperature Tc (i.e. T − Tc > 0) there is by definition no internal field, hence βM is zero. Substituting Nmb∕v for Ms, and using the low-field approximation for ℒ(a), Equation 3.8 can be rearranged to get:

χf = M∕H = C∕(T − Tc)
Equation 3.9 is known as the Curie-Weiss law and governs ferromagnetic susceptibility above the Curie temperature (dashed line in Figure 3.7).
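The contrast between Curie’s law and the Curie-Weiss law can be sketched as below; C is an arbitrary constant here, the Curie temperature is magnetite’s (about 580∘C), and only the temperature dependence is being illustrated:

```python
# Curie's law (paramagnet) vs. the Curie-Weiss law (ferromagnet above Tc):
# chi_p = C/T versus chi_f = C/(T - Tc).
C = 1.0
TC = 853.0  # Curie temperature of magnetite, ~580 C, in kelvin

def chi_curie(T):
    return C / T

def chi_curie_weiss(T):
    return C / (T - TC)

# Just above Tc, the Curie-Weiss susceptibility is far larger than that of
# a simple paramagnet at the same temperature:
T = 860.0
print(chi_curie(T), chi_curie_weiss(T))
```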
Below the Curie temperature Hw >> H; we can neglect the external field H and get:
Substituting again for Ms and rearranging, we get:
where Tc is the Curie temperature and is given by:
We have treated ferromagnetism from a classical point of view and this is strictly incorrect because ferromagnetism results primarily from quantum mechanical phenomena. The primary difference between the classical derivation and the quantum mechanical one lies in the fact that in quantum mechanics, only certain angles of the magnetic moments are allowed, as opposed to all directions in Langevin theory. In the end, the predictions of magnetization as a function of temperature are different in detail. The end product of the quantum mechanical treatment (see Dunlop and Özdemir, 1997) is that the variation of saturation magnetization as a function of temperature can be reasonably well approximated (near the Curie Temperature, Tc) by a normalized power law variation:
where γ is 0.5 from simple molecular field theory and To is absolute zero (in kelvin). Dunlop and Özdemir (1997) cite a value of around 0.43 for γ, but the data sets cited by Dunlop and Özdemir (1997; e.g., Figure 3.5 on page 52) are actually best-fit with values for γ of about 0.36 – 0.39 (see Figure 3.8). These curves have been normalized by their inferred Curie temperatures, which are around 565∘C (data of B. Moskowitz, cited in Banerjee, 1991).
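This power law is easy to evaluate; in the sketch below we take γ = 0.38 (within the 0.36 – 0.39 range quoted above) and Tc = 838 K (≈ 565∘C). The parameter defaults are ours:

```python
# Normalized power-law approximation for saturation magnetization near Tc:
# Ms(T)/Ms(To) = ((Tc - T)/(Tc - To))**gamma, with To at absolute zero.
def ms_ratio(T, Tc=838.0, To=0.0, gamma=0.38):
    return ((Tc - T) / (Tc - To)) ** gamma

print(ms_ratio(0.0))     # 1.0 at absolute zero
print(ms_ratio(293.0))   # room temperature: still ~85% of Ms(0)
print(ms_ratio(837.0))   # just below Tc: magnetization nearly gone
```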
As we have seen, below the Curie temperature, certain crystals have a permanent (remanent) magnetization resulting from the alignment of unpaired electronic spins over a large area within the crystal. Spins may be either parallel or anti-parallel; the sense of spin alignment is controlled entirely by crystal structure. The energy term associated with this phenomenon is the exchange energy. There are three categories of spin alignment: ferromagnetism (sensu stricto), ferrimagnetism and antiferromagnetism (see Figure 3.9).
In ferromagnetism (sensu stricto, Figure 3.9a), the exchange energy is minimized when all the spins are parallel, as occurs in pure iron. When spins are perfectly antiparallel (antiferromagnetism, Figure 3.9b), there is no net magnetic moment, as occurs in ilmenite. Occasionally, the antiferromagnetic spins are not perfectly aligned in an antiparallel orientation, but are canted by a few degrees. This spin-canting (Figure 3.9c) gives rise to a weak net moment, as occurs in hematite, a common magnetic mineral (see Chapter 6). Also, antiferromagnetic materials can have a net moment if spins are not perfectly compensated owing to defects in the crystal structure, as occurs in fine-grained hematite. The uncompensated spins result in a so-called defect moment (Figure 3.9d). We note in passing that the temperature at which spins become disordered in antiferromagnetic substances is termed the Néel temperature. In ferrimagnetism, spins are also aligned antiparallel, but the magnitudes of the moments in each direction are unequal, resulting in a net moment (Figure 3.9e).
In figures like Figure 3.9, electronic spins are depicted as being simply aligned with some minimum energy direction (aligned with the field, or along some easy axis). Yet we already know about the paramagnetic effect of misalignment through random thermal fluctuations. We learned that an external magnetic field generates a torque on the electronic spins, and in isolation, a magnetic moment will respond to the torque in a manner similar in some respects to the way a spinning top responds to gravity: the magnetic moment will precess about the applied field direction, spiraling in to come to rest parallel to it (Figure 3.10a). Because of the strong exchange coupling in ferromagnetic phases, spins tend to be aligned parallel (or antiparallel) to one another and the spiraling is done in a coordinated fashion, with neighboring spins as parallel as possible to one another (Figure 3.10b). This phenomenon is known as a spin wave.
SUPPLEMENTAL READINGS: O’Reilly (1984), Chapter 3.1; Dunlop and Özdemir (1997), Chapter 2.1 to 2.7.
a) Given one Bohr magneton (mb) in the Earth’s field (40 μT), write a program using Python that calculates the magnetostatic interaction energy (-mbB cosθ) for angles 0→ 180∘. Make a plot of this with the matplotlib module in Python.
b) Calculate the thermal energy at room temperature (300K). How does this compare with the interaction energy?
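A minimal sketch of the requested program (constants are standard values; pass thetas and energies to matplotlib’s pyplot.plot to make the figure):

```python
import math

# Parts (a) and (b): magnetostatic interaction energy of one Bohr magneton
# in a 40 microtesla field, versus angle, compared with kT at 300 K.
MB = 9.27e-24    # Bohr magneton (A m^2)
B = 40e-6        # Earth's field (T)
K_B = 1.38e-23   # Boltzmann's constant (J/K)

thetas = range(0, 181)                                  # degrees
energies = [-MB * B * math.cos(math.radians(t)) for t in thetas]

E_thermal = K_B * 300.0
print(f"max |interaction energy| = {MB * B:.2e} J")     # ~3.7e-28 J
print(f"thermal energy at 300 K  = {E_thermal:.2e} J")  # ~4.1e-21 J
# Thermal energy exceeds the interaction energy by ~7 orders of magnitude.
```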
Fayalite (Fe2SiO4) is a paramagnetic solid with magnetic susceptibility χ = 4.4 x 10−4 (cgs units) at 0∘C (= 273K). A single crystal of fayalite has a volume of 2 cm3. This crystal is placed in a magnetic field, H = 10 oe at 0∘C. What is the resulting induced magnetic moment m of this crystal?
a) Do this problem first in cgs units. Then convert your answer to SI using the conversion factors in Table 1.1 in Chapter 1.
b) Do the problem again by first converting all the parameters into SI units. Check your answer by converting the SI answer that you get back to cgs. You should get the same answer (but you would be surprised how many people do this wrong).
If fayalite is placed in a magnetic field H= 100 oe at a temperature of 500∘C (= 773K), what is the resulting magnetization, M?
MnS is a paramagnetic solid. At 300K there are 4 x 1028 molecules of MnS per m3. Look up the number of unpaired spins for the cationic magnetic moment of Mn2+ in the text and find the paramagnetic susceptibility, χ, of MnS at 300K.
a) Read into a Pandas DataFrame the datafile Chapter_3/BMoskinBan91.txt provided. Make a plot of magnetization versus temperature. What is the Curie temperature of the material?
b) Using Equation 3.11 from the chapter, find the value for γ between 0.35 and 0.43 at intervals of 0.01 that fits the best. Plot the data as in Figure 3.8 in the chapter, i.e. Ms(T)∕Ms(To) against T∕Tc.
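One possible shape for the grid search in part (b); the data file is not reproduced here, so synthetic Ms(T) values with a known γ stand in for the temperature and magnetization columns that would be read from Chapter_3/BMoskinBan91.txt (e.g. with pandas.read_csv):

```python
# Grid-search fit of gamma in Ms(T)/Ms(To) = ((Tc - T)/(Tc - To))**gamma.
# Synthetic "measured" data with gamma = 0.38 stand in for the real file.
Tc, To = 838.0, 0.0
T = [8.3 * i for i in range(100)]                       # 0 ... 821.7 K
ms_obs = [((Tc - t) / (Tc - To)) ** 0.38 for t in T]

def misfit(g):
    # Sum of squared residuals between the model and the "data".
    return sum((((Tc - t) / (Tc - To)) ** g - m) ** 2
               for t, m in zip(T, ms_obs))

gammas = [0.35 + 0.01 * i for i in range(9)]            # 0.35 ... 0.43
best = min(gammas, key=misfit)
print(f"best-fit gamma = {best:.2f}")  # recovers 0.38
```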
We will start with the second part of the question: what fixes magnetizations in particular directions? A basic principle is that ferromagnetic particles have various contributions to the magnetic energy which control their magnetization. No matter how simple or complex the combination of energies may become, the grain will seek the configuration of magnetization which minimizes its total energy. The short answer to our question is that certain directions within magnetic crystals are at lower energy than others. To shift the magnetization from one “easy” direction to another requires energy. If the barrier is high enough, the particle will stay magnetized in the same direction for very long periods of time – say billions of years. In this chapter we will address the causes and some of the consequences of these energy barriers for the magnetization of rocks. Note that in this chapter we will be dealing primarily with energy densities (volume normalized energies), as opposed to energies, and will distinguish the two by the convention that energies are given with the symbol E and energy densities with ϵ.
In Chapter 6, we will discuss the behavior of common magnetic minerals, but to develop the general theory, it is easiest to focus on a single mineral. We choose here the most common one, magnetite. It has a simple, cubic structure and has been the subject of intensive study. However, we will occasionally introduce concepts for other magnetic minerals where appropriate.
The simplest permanently magnetized particles are quasi-uniformly magnetized. These so-called single domain (SD) particles have spins that act in concert, staying as parallel (or anti-parallel) as possible. As particles get larger, the external energy can be minimized by allowing neighboring spins to diverge somewhat from strict parallelism; these particles are referred to as pseudo-single domain or PSD. Eventually, the spins organize themselves into regions with quasi-uniform magnetization (magnetic domains) separated by domain walls and are called multi-domain (MD) particles. These more complicated spin structures are very difficult to model and most paleomagnetic theory is based on the single domain approximation. Therefore we begin with a discussion of the energies of uniformly magnetized (single-domain) particles.
We learned in Chapter 3 that some crystalline states are capable of ferromagnetic behavior because of quantum mechanical considerations. Electrons in neighboring orbitals in certain crystals “know” about each other’s spin states. In order to avoid sharing the same orbital with the same spin (hence having the same quantum numbers – not allowed by Pauli’s exclusion principle), electronic spins in such crystals act in a coordinated fashion. They will be either aligned parallel or antiparallel according to the details of the interaction. This exchange energy density (ϵe) is the source of spontaneous magnetization and is given for a pair of spins by:

ϵe = −2JeSi ⋅ Sj

where Je is the exchange integral and Si and Sj are the spin vectors of the neighboring electrons.
We define here a parameter that we will use later: the exchange constant A = JeS2∕a where a is the interatomic spacing. A = 1.33 x 10−11 Jm−1 for magnetite, a common magnetic mineral.
Recalling the discussion in Chapter 3, while s orbitals are spherical, the 3d electronic orbitals “poke” in certain directions. Hence spins in some directions within crystals will be easier to coordinate than in others. We can illustrate this using the example of magnetite, a common magnetic mineral (Figure 4.1a). Magnetite octahedra (Figure 4.1a), when viewed at the atomic level (Figure 4.1b), are composed of one ferrous (Fe2+) cation, two ferric (Fe3+) cations and four O2− anions. Each oxygen anion shares an electron with two neighboring cations in a covalent bond.
In Chapter 3 we mentioned that in some crystals, spins are aligned anti-parallel, yet there is still a net magnetization, a phenomenon we called ferrimagnetism. This can arise from the fact that not all cations have the same number of unpaired spins. Magnetite, with its ferrous (4 mb) and ferric (5 mb) states is a good example. There are three iron cations in a magnetite crystal giving a total of 14 mb to play with. Magnetite is very magnetic, but not that magnetic! From Figure 4.1b we see that the ferric ions all sit on the tetrahedral (A) lattice sites and there are equal numbers of ferrous and ferric ions sitting on the octahedral (B) lattice sites. The unpaired spins of the cations in the A and B lattice sites are aligned anti-parallel to one another because of superexchange (Chapter 3) so we have 9 mb on the B sites minus 5 mb on the A sites for a total of 4 mb per unit cell of magnetite.
We know from experience that there are energies associated with magnetic fields. Just as a mass has a potential energy when it is placed in the gravitational field of another mass, a magnetic moment has an energy when it is placed in a magnetic field. We have seen this energy briefly in Section 1.4 and Equation 3.4. This energy has many names (magnetic energy, magnetostatic energy, Zeeman energy, etc.). Here we will work with the volume normalized magnetostatic interaction energy density (ϵm). This energy density essentially represents the interaction between the magnetic lines of flux and the magnetic moments of the electronic spins. It is the energy that aligns magnetic compass needles with the ambient magnetic field. We find the volume normalized form (in units of Jm−3) by substituting |M| = |m|∕v (see Chapter 1) into Equation 3.4:
ϵm is at a minimum when the magnetization M is aligned with the field B. Single-domain particles have a quasi-uniform magnetization and the application of a magnetic field does not change the net magnetization, which remains at saturation (Ms). The direction of all the magnetic spins could swing coherently toward the applied field. Yet the magnetizations in many particles do not rotate freely toward the magnetic field (or we would not have paleomagnetism!). There is another contribution to the energy of the magnetic particle associated with the magnetic crystal itself. This energy depends on the direction of magnetization in the crystal – it is anisotropic – and is called anisotropy energy. Anisotropy energy creates barriers to free rotation of the magnetization within the magnetic crystal, which lead to energetically preferred directions for the magnetization within individual single-domain grains.
There are many causes of anisotropy energy. The most important ones derive from the details of crystal structure (magnetocrystalline anisotropy energy), the state of stress within the particle (magnetostriction), and the shape of the particle (shape anisotropy). We will consider these briefly in the following subsections.
For equant single-domain particles or particles with low saturation magnetizations, the crystal structure dominates the magnetic energy. In such cases, the so-called easy directions of magnetization are crystallographic directions along which magnetocrystalline energy is at a minimum. The energy surface shown in Figure 4.1c represents the magnetocrystalline anisotropy energy density, ϵa, for magnetite at room temperature. The highest energy bulges are in directions perpendicular to the cubic faces ([001], [010], [100]). The lowest energy dimples are along the body diagonals (⟨111⟩). Magnetite (above about 120K) has a cubic structure with direction cosines α1,α2,α3. These direction cosines are the cosines of the angles between a given direction and the crystallographic axes [100], [010], [001] – see Appendix A.3.5 for a review of direction cosines. For such a crystal the magnetocrystalline anisotropy energy density is given by:

ϵa = K1(α1²α2² + α2²α3² + α3²α1²) + K2α1²α2²α3²
where K1 and K2 are empirically determined magnetocrystalline anisotropy constants. In the case of (room temperature) magnetite, K1 is -1.35 x 104 Jm−3. Note that the units of the Ki are in Jm−3, so ϵa is in units of energy per unit volume (an energy density). If you work through the magnetocrystalline equation, you will find ϵa is zero parallel to the [100] axis, K1∕4 parallel to [110] and K1∕3 + K2∕27 parallel to the [111] direction (the body diagonal). So when K1 is negative, the [111] direction (body diagonal) has the minimum energy. This is the reason that there is a dimple in the energy surface along that direction in Figure 4.1c.
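The special values quoted above can be verified by evaluating the cubic anisotropy energy density directly. K1 is the magnetite value from the text; the K2 value here is illustrative only:

```python
import math

# Cubic magnetocrystalline anisotropy energy density:
# eps = K1*(a1^2 a2^2 + a2^2 a3^2 + a3^2 a1^2) + K2*(a1^2 a2^2 a3^2),
# evaluated along the special crystallographic directions in the text.
K1, K2 = -1.35e4, -0.44e4   # J/m^3; K2 is illustrative, not a quoted value

def eps_a(a1, a2, a3):
    return (K1 * (a1**2 * a2**2 + a2**2 * a3**2 + a3**2 * a1**2)
            + K2 * (a1**2 * a2**2 * a3**2))

s2, s3 = 1 / math.sqrt(2), 1 / math.sqrt(3)
print(eps_a(1, 0, 0))        # [100]: 0
print(eps_a(s2, s2, 0))      # [110]: K1/4
print(eps_a(s3, s3, s3))     # [111]: K1/3 + K2/27 (minimum when K1 < 0)
```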
As a consequence of the magnetocrystalline anisotropy energy, once the magnetization is aligned with an easy direction, work must be done to change it. In order to switch from one easy axis to another (e.g. from one direction along the body diagonal to the opposite), the magnetization has to traverse a path over an energy barrier which is the difference between the energy in the easy direction and that in the intervening hard direction. In the case of magnetite at room temperature, this energy barrier is ϵ[111] − ϵ[110], or to first order, K1∕3 − K1∕4 = K1∕12.
Because electronic interactions depend heavily on interatomic spacing, magnetocrystalline anisotropy constants are a strong function of temperature (see Figure 4.2). In magnetite, K1 changes sign at a temperature known as the isotropic point. At the isotropic point, there is no large magnetocrystalline anisotropy. The large energy barriers that act to keep the magnetizations parallel to the body diagonal are gone and the spins can wander more freely through the crystal. Below the isotropic point, the energy barriers rise again, but with a different topology in which the crystal axes are the energy minima and the body diagonals are the high energy states.
At room temperature, electrons hop freely between the ferrous and ferric ions on the B lattice sites, so there is no order. Below about 120 K, there is an ordered arrangement of the ferrous and ferric ions. Because of the difference in size between the two, the lattice of the unit cell becomes slightly distorted and becomes monoclinic instead of cubic. This transition occurs at what is known as the Verwey temperature (Tv). Although the isotropic point (measured magnetically) and the Verwey transition (measured electrically) are separated in temperature by about 15°, they are related phenomena (the ordering and electron hopping cause the change in K1).
The change in magnetocrystalline anisotropy at low temperature can have a profound effect on the magnetization. In Figure 4.3 we show a typical (de)magnetization curve for magnetite taken from the “Rock magnetic bestiary” web site maintained at the Institute for Rock Magnetism: http://irm.umn.edu/bestiary. There is a loss of magnetization at around 100 K. This loss is the basis for low-temperature demagnetization (LTD). However, some portion of the magnetization always remains after low temperature cycling (called the low temperature memory), so the general utility of LTD may be limited.
Cubic symmetry (as in the case of magnetite) is just one of many types of crystal symmetries. One other very important form is the uniaxial symmetry which can arise from crystal shape or structure. The energy density for uniaxial magnetic anisotropy is:

ϵa = Ku1 sin²θ + Ku2 sin⁴θ

where θ is the angle between the magnetization and the axis of symmetry.
Here the magnetocrystalline constants have been designated Ku1,Ku2 to distinguish them from K1,K2 used before. In this equation, when the largest uniaxial anisotropy constant, Ku1, is negative, the magnetization is constrained to lie perpendicular to the axis of symmetry. When Ku1 > 0, the magnetization lies parallel to it.
An example of a mineral dominated by uniaxial symmetry is hematite, a mineral with hexagonal crystal symmetry. The magnetization of hematite is quite complicated, as we shall learn in Chapters 6 and 7, but one source of magnetization is spin-canting (see Chapter 3) within the basal plane of the hexagonal crystal. Within the basal plane, the anisotropy constant is very low and the magnetization wanders fairly freely. However, the anisotropy energy away from the basal plane is strong, so the magnetization is constrained to lie within the basal plane.
Exchange energy depends strongly on the details of the physical interaction between orbitals in neighboring atoms, hence changing the positions of these atoms will affect that interaction. Put another way, straining a crystal will alter its magnetic behavior. Similarly, changes in the magnetization can change the shape of the crystal by altering the shapes of the orbitals. This is the phenomenon of magnetostriction. The magnetic energy density caused by the application of stress to a crystal can be approximated by:
There is one more important source of magnetic anisotropy: shape. To understand how crystal shape controls magnetic energy, we need to understand the concept of the internal demagnetizing field of a magnetized body. In Figure 4.4a we show the magnetic vectors within a ferromagnetic crystal. These produce a magnetic field external to the crystal that is proportional to the magnetic moment (see Chapter 1). This external field is identical to a field produced by a set of free poles distributed over the surface of the crystal (Figure 4.4b). The surface poles do not just produce the external field, they also produce an internal field shown in Figure 4.4c. The internal field is known as the demagnetizing field (Hd). Hd is proportional to the magnetization of the body and is sensitive to the shape. For the simple sphere in Figure 4.4a and the applied field condition shown in Figure 4.4d, the demagnetizing field is given by:

Hd = −NM
where N is a demagnetizing factor determined by the shape. In fact, the demagnetizing factor depends on the orientation of M within the crystal and therefore is a tensor (see Appendix A.3.5 for a review of tensors). The more general equation is Hd = N ⋅ M where Hd and M are vectors and N is a 3 x 3 tensor. For now, we will simplify things by considering the isotropic case of a sphere, in which N reduces to a single scalar value N.
For a sphere, the surface poles are distributed over the surface such that there are none at the “equator” and most at the “pole” (see Figure 4.4d). Potential field theory shows that the external field of a uniformly magnetized body is identical to that of a centered dipole moment of magnitude m = vM (where v is volume). At the equator of the sphere as elsewhere, Hd = −NM. But the external field at the equator is equal to the demagnetizing field just inside the body because the field is continuous across the body. We can find the equatorial (tangential) demagnetizing field by substituting the equatorial colatitude θ = 90∘ into Hθ in Equation 1.8 (Chapter 1), so:
Substituting and solving for Hd, we get Hd = −(1∕3)M; hence N = 1∕3.
Different directions within a non-spherical crystal will have different distributions of free poles (see Figures 4.4e,f). In fact, the surface density of free poles is given by σm = M ⋅ n̂, where n̂ is the outward normal to the surface. Because the surface pole density depends on the direction of magnetization, so too will N. In the case of a prolate ellipsoid magnetized parallel to the elongation axis a (Figure 4.4e), the free poles are farther apart than across the grain; hence, intuitively, the demagnetizing field, which depends on 1∕r2, must be less than in the case of a sphere. Thus, Na < 1∕3. Similarly, if the ellipsoid is magnetized along b (Figure 4.4e), the demagnetizing field is stronger, or Nb > 1∕3.
Getting back to the magnetostatic energy density, ϵm = −M⋅B, remember that B includes both the external field Be = μoHe and the internal demagnetizing field Bd = −μoN ⋅ M. Therefore, the magnetostatic energy density from both the external and internal fields is given by:
The two terms in Equation 4.4 are the by now familiar magnetostatic energy density ϵm and the magnetostatic self energy density, or demagnetizing energy density, ϵd. ϵd can be estimated by “building” a magnetic particle, considering the potential energy gained by each incremental volume dv as it is brought in (−μoMdv ⋅ Hd), and integrating. The factor 1∕2 appears in order to avoid counting each volume element twice, and the v disappears because all the energies we have been discussing are energy densities – the energy per unit volume.
For the case of a uniformly magnetized sphere, we get back to the relation Hd = −NM and ϵd simplifies to:
In the more general case of a prolate ellipsoid, M can be represented by the two components parallel to the a and b axes (see Figure 4.4f), with unit vectors â, b̂ parallel to them. So, M = M cosθ â + M sinθ b̂. Each component of M has an associated demagnetizing field Hd = −NaM cosθ â − NbM sinθ b̂, where Na, Nb are the eigenvalues of the tensor N (the values of the demagnetizing tensor along the principal axes a and b). In this case, the demagnetizing energy can be written as:
In an ellipsoid with three unequal axes a,b,c, Na + Nb + Nc = 1 (in SI; in cgs units the sum is 4π). For a long needle-like particle, Na ≃ 0 and Nb = Nc ≃ 1∕2. A useful approximation for nearly spherical particles is Na = (1∕3)[1 − (2∕5)(2 − b∕a − c∕a)] (Stacey and Banerjee, 1974). For other spheroids, see Nagata (1961, p. 70), and for the general case, see Dunlop and Özdemir (1997). In the absence of an external field, the magnetization will be parallel to the long axis (θ = 0), and the magnetostatic energy density (also known as the ‘self’ energy) is given by:
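The nearly spherical approximation is easy to check numerically. The Python sketch below (the 1.3:1 elongation is an illustrative choice, not a value tied to this paragraph) verifies that a sphere gives N = 1∕3 and that elongation pushes Na below 1∕3 while the transverse factors rise above it:

```python
# Sketch: demagnetizing factors for a nearly spherical ellipsoid,
# using the Stacey and Banerjee (1974) approximation quoted in the text.

def demag_factor_a(a, b, c):
    """Na along the long axis a for a nearly spherical ellipsoid
    with semi-axes a >= b >= c."""
    return (1.0 / 3.0) * (1.0 - (2.0 / 5.0) * (2.0 - b / a - c / a))

# For a sphere (a = b = c) the factor reduces to 1/3.
Na_sphere = demag_factor_a(1.0, 1.0, 1.0)

# Slightly prolate grain (a = 1.3, b = c = 1): Na < 1/3, and the two
# transverse factors share the remainder since Na + Nb + Nc = 1 (SI).
Na = demag_factor_a(1.3, 1.0, 1.0)
Nb = Nc = (1.0 - Na) / 2.0

print(Na_sphere)   # 1/3
print(Na, Nb)      # Na < 1/3 < Nb
```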
Note that the demagnetizing energy in Equation 4.6 has a uniaxial form, directionally dependent only on θ, with the constant of uniaxial anisotropy Ku = (1∕2)ΔNμoM2. ΔN is the difference between the largest and smallest values of the demagnetizing tensor, Nc − Na.
For a prolate ellipsoid Nc = Nb, and choosing for example a∕c = 1.5, we find that Nc − Na ≃ 0.16. The magnetization of magnetite is 480 kAm−1, so Ku ≃ 2.7 x 104 Jm−3. This is somewhat larger than the absolute value of K1 for magnetocrystalline anisotropy in magnetite (K1 = -1.35 x 104 Jm−3), so the magnetization of even slightly elongate grains will be dominated by uniaxial anisotropy controlled by shape. Minerals with low saturation magnetizations (like hematite) will not be prone to shape-dominated magnetic anisotropy, however.
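As a rough numerical check (using the rounded inputs ΔN ≈ 0.16 and M = 480 kAm−1 quoted above; with these rounded values Ku comes out near 2.3 x 104 Jm−3, the same order as the quoted number), shape anisotropy indeed exceeds |K1|:

```python
import math

# Rounded inputs quoted in the text (assumptions for this check)
mu0 = 4 * math.pi * 1e-7    # T m/A
M = 480e3                   # A/m, magnetization of magnetite
DN = 0.16                   # Nc - Na for the example ellipsoid

Ku = 0.5 * DN * mu0 * M**2  # shape anisotropy constant, J/m^3
K1 = 1.35e4                 # |K1| for magnetite, J/m^3

print(Ku)        # a few times 10^4 J/m^3
print(Ku > K1)   # shape anisotropy dominates: True
```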
Paleomagnetists worry about how long a magnetization can remain fixed within a particle and we will begin to discuss this issue later in the chapter. It is worth pointing out here that any discussion of magnetic stability will involve magnetic anisotropy energy because this controls the energy required to change a magnetic moment from one easy axis to another. One way to accomplish this change is to apply a magnetic field sufficiently large that its magnetic energy exceeds the anisotropy energy. The magnetic field capable of flipping the magnetization of an individual uniformly magnetized particle (at saturation, or Ms) over the magnetic anisotropy energy barrier is the microscopic coercivity Hk. For uniaxial anisotropy (K = Ku) and for cubic magnetocrystalline anisotropy (K = K1), microscopic coercivity is given by:
respectively (see Dunlop and Özdemir, 1997 for a more complete derivation). For elongate particles dominated by shape anisotropy, Hk reduces to ΔNM. [Note that the units for coercivity as derived here are in Am−1, although they are often measured using instruments calibrated in tesla. Technically, because the field doing the flipping is inside the magnetic particle and B (measured in tesla) depends on the magnetization M as well as the field H (Equation 1.4), coercivity should be written as μoHk if the units are quoted in tesla. Microscopic coercivity is another parameter with many names: flipping field, switching field, intrinsic coercivity and also more loosely, the coercive field and coercivity. We will come back to the topic of coercivity in Chapter 5. ]
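These expressions are simple to evaluate. The sketch below uses Ku = 1.4 x 104 Jm−3 for the uniaxial case (the value used again in Chapter 5) and the infinite-needle limit ΔN = 1∕2 for the shape-dominated case; both numerical choices are assumptions for illustration:

```python
import math

mu0 = 4 * math.pi * 1e-7
Ms = 480e3     # A/m, magnetite
Ku = 1.4e4     # J/m^3, uniaxial anisotropy constant (assumed, as in Ch. 5)

# Uniaxial case: Hk = 2*Ku/(mu0*Ms); quoted in tesla as mu0*Hk
Hk = 2 * Ku / (mu0 * Ms)   # A/m
print(mu0 * Hk)            # ~0.058 T (58 mT)

# Shape-dominated elongate grain: Hk = DN*M; infinite needle has
# DN = Nb - Na = 1/2 - 0 (limit quoted in the text)
DN = 0.5
print(mu0 * DN * Ms)       # ~0.3 T
```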
So far we have been discussing hypothetical magnetic particles that are uniformly magnetized. Particles with strong magnetizations (like magnetite) have self energies that quickly become quite large because of the dependence on the square of the magnetization. We have been learning about several mechanisms that tend to align magnetic spins. In fact, in very small particles of magnetite (< 40 nm), the spins are essentially lined up. The particle is uniformly magnetized, and we call it single domain (SD). In larger particles (∼80 nm), the self energy exceeds the exchange and magnetocrystalline energies, and crystals have distinctly non-uniform states of magnetization.
There are many strategies possible for magnetic particles to reduce self energy. Numerical models (called micromagnetic models) can find internal magnetization configurations that minimize the energies discussed in the preceding sections. Micromagnetic simulations for magnetite particles (e.g. Schabes and Bertram, 1988) allow us to peer into the state of magnetization inside magnetic particles. These simulations give a picture of increasing complexity from so-called flower to vortex (Figure 4.5) remanent states. These particles share many properties of the uniformly magnetized single domain particles and are called pseudo-single domain (PSD) particles.
As particles grow larger (>∼200 nm), they break into multiple magnetic domains, separated by narrow zones of rapidly changing spin directions called domain walls. Magnetic domains can take many forms. We illustrate a few in Figure 4.6. The uniform case (single domain) is shown in Figure 4.6a. The external field is very large because the free poles are far apart (at opposite ends of the particle). When the particle organizes itself into two domains (Figure 4.6b), the external field is reduced by about a factor of two. In the case of four lamellar domains (Figure 4.6c), the external field is quite small. The introduction of closure domains as in Figure 4.6d reduces the external field to nothing.
As you might already suspect, domain walls are not “free”, energetically speaking. If, as in Figure 4.7a, the spins simply switch from one orientation to the other abruptly, the exchange energy cost would be very high. One way to get around this is to spread the change over several hundred atoms, as sketched in Figure 4.7b. The wall width δ is then wider, and the exchange energy price is much less. However, there are now spins in unfavorable directions from a magnetocrystalline point of view (they are in “hard” directions). Exchange energy therefore favors wider domain walls, while magnetocrystalline anisotropy favors thin walls. With some work (see, e.g., Dunlop and Özdemir, 1997, pp. 117-118), it is possible to come up with the following analytical expressions for wall width (δw) and wall energy density (ϵw):
where A is the exchange constant (see Section 4.1.1) and K is the magnetic anisotropy constant (e.g., Ku or K1). Note that ϵw is the energy density per unit wall area, not per volume. Plugging in values for magnetite given previously, we get δw = 90 nm and ϵw = 3 x 10−3 Jm−2.
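A sketch of this calculation, assuming the standard forms δw = π(A∕K)1∕2 and ϵw = 2π(AK)1∕2 and an exchange constant A = 1.33 x 10−11 Jm−1 for magnetite (both assumptions; see Dunlop and Özdemir, 1997, for the derivation), reproduces values close to those quoted:

```python
import math

A = 1.33e-11   # exchange constant for magnetite, J/m (assumed value)
K = 1.35e4     # |K1| for magnetite, J/m^3

# Standard analytical forms for a 180-degree wall (assumed here):
dw = math.pi * math.sqrt(A / K)      # wall width, m
ew = 2 * math.pi * math.sqrt(A * K)  # wall energy per unit area, J/m^2

print(dw)   # ~1e-7 m, close to the ~90 nm quoted
print(ew)   # ~3e-3 J/m^2
```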
In Figure 4.8 we plot the self energy (Equation 4.12) and the wall energy (ϵw from Equation 4.9) for spheres of magnetite. We see that the wall energy in particles with diameters of some 50 nm is less than the self energy, yet the walls would be about twice as wide as that. So the smallest “wall” is really more like the vortex state, and it is only for particles larger than a few tenths of a micron that true domains separated by discrete walls can form. Interestingly, this is precisely what is predicted from micromagnetic modelling (e.g., Figure 4.5).
How can we test the theoretical predictions of domain theory? Do domains really exist? Are they the size and shape we expect? Are there as many as we would expect? In order to address these questions we require a way of imaging magnetic domains. Bitter (1931) devised a way for doing just that. Magnetic domain walls are regions with large stray fields (as opposed to domains in which the spins are usually parallel to the sides of the crystals to minimize stray fields). In the Bitter technique magnetic colloid material is drawn to the regions of high field gradients on highly polished sections allowing the domain walls to be observed (see Figure 4.9a).
There are by now other ways of imaging magnetic domains. We will not review them all here, but will just highlight the ways that are more commonly used in rock and paleomagnetism. The magneto-optical Kerr effect or MOKE uses the interaction between polarized light and the surface magnetic field of the target. The light interacts with the magnetic field of the sample which causes a small change in the light’s polarization and ellipticity. The changes are detected by reflecting the light into nearly-crossed polarizers. The longitudinal Kerr effect can show the alignment of magnetic moments in the surface plane of the sample. Domains with different magnetization directions show up as lighter or darker regions in the MOKE image (see Figure 4.9b.)
Another common method for imaging magnetic domains employs a technique known as magnetic force microscopy. Magnetic force microscopy (MFM) uses a scanning probe microscope that maps out the vertical component of the magnetic fields produced by a highly polished section. The measurements are made with a cantilevered magnetic tip that responds to the magnetic field of the sample. In practice, the measurements are made in two passes. The first establishes the topography of the sample (Figure 4.9c). Then in the second pass, the tip is raised slightly above the surface and by subtracting the topographic only signal the attraction of the magnetic surface can be mapped (Figure 4.9d). Figure 4.9e shows an interpretation of the magnetic directions of different magnetic domains.
We have gone some way toward answering the questions posed at the beginning of the chapter. We see now that anisotropy energy, with contributions from crystal structure, shape, and stress, inhibits changes in the magnetic direction, thereby offering a possible mechanism whereby a given magnetization could be preserved for posterity. We also asked what allows the magnetization to come into equilibrium with the applied magnetic field in the first place; this question requires a little more work to answer. The key is to find some mechanism which allows the moments to “jump over” magnetic anisotropy energy barriers. One such mechanism is thermal energy ET , which was given in Chapter 3 as:
We know from statistical mechanics that the probability P of finding a grain with a given thermal energy sufficient to overcome some anisotropy energy Ea and change from one easy axis to another is P = exp(−Ea∕ET ). Depending on the temperature, such grains may be quite rare, and we may have to wait some time t for a particle to work itself up to jumping over the energy barrier.
Imagine a block of material containing a random assemblage of magnetic particles that are for simplicity uniformly magnetized and dominated by uniaxial anisotropy. Suppose that this block has some initial magnetization Mo and is placed in an environment with no ambient magnetic field. Anisotropy energy will tend to keep each tiny magnetic moment in its original direction, and the magnetization will not change over time. At some temperature, certain grains will have sufficient energy to overcome the anisotropy energy and flip their moments to the other easy axis. As the energy surface is spherical, with no dimples or protuberances, there is no preferred direction, and over time, the magnetic moments will become random. Therefore, the magnetization as a function of time in this simple scenario will decay to zero. The equation governing this decay is:
where t is time and τ is an empirical constant called the relaxation time. Relaxation time is the time required for the remanence to decay to 1∕e of Mo. This equation is the essence of what is called Néel theory (see, e.g., Néel, 1955). The value of τ depends on the competition between magnetic anisotropy energy and thermal energy. It is a measure of the probability that a grain will have sufficient thermal energy to overcome the anisotropy energy and switch its moment. Therefore in zero external field:
where C is a frequency factor with a value of something like 1010 s−1. The anisotropy energy is given by the dominant anisotropy parameter K (either Ku,K1, or λ) times the grain volume v.
Thus, the relaxation time increases with anisotropy constant and volume, and decreases with temperature. Because the dependence is exponential, τ varies rapidly with small changes in v and T. To see how this works, we can take Ku for slightly elongate cuboids of magnetite (length to width ratio of 1.3 to 1) and evaluate relaxation time as a function of particle width (see Figure 4.10). There is a sharp transition between grains with virtually no stability (τ is on the order of seconds) and grains with stabilities of billions of years.
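The sharpness of this transition is easy to demonstrate with a short Python sketch, assuming Ku = 1.4 x 104 Jm−3 for the slightly elongate cuboids and C = 1010 s−1 as in the text (both values are assumptions for illustration):

```python
import math

kB = 1.381e-23   # Boltzmann's constant, J/K
C = 1e10         # frequency factor, 1/s (value from the text)
Ku = 1.4e4       # J/m^3, slightly elongate magnetite (assumption)
T = 300.0        # K, room temperature

def tau(width):
    """Relaxation time (s) for a cuboid of the given width (m) with a
    length-to-width ratio of 1.3, using 1/tau = C exp(-Ku*v/(kB*T))."""
    v = 1.3 * width**3   # grain volume
    return (1.0 / C) * math.exp(Ku * v / (kB * T))

# A 10 nm change in width spans the range from laboratory-unstable
# grains to grains stable for far longer than the age of the Earth.
for w_nm in (15, 20, 25):
    print(w_nm, tau(w_nm * 1e-9))
```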
Grains with τ ≃ 102 − 103 seconds have sufficient thermal energy to overcome the anisotropy energy frequently and are unstable on a laboratory time scale. In zero field, these grain moments will tend to rapidly become random, and in an applied field, also tend to align rapidly with the field. The net magnetization is related to the field by a Langevin function (see Section 3.2.2 in Chapter 3). Therefore, this behavior is quite similar to paramagnetism, hence these grains are called superparamagnetic (SP). Such grains can be distinguished from paramagnets, however, because the field required to saturate the moments is typically much less than a tesla, whereas that for paramagnets can exceed hundreds of tesla.
We are now in a position to pull together all the threads we have considered in this chapter and make a plot of what sort of magnetic particles behave as superparamagnets, which should be single domain, and which should be multi-domain according to our simple theories. We can estimate the superparamagnetic to single domain threshold for magnetite as a function of particle shape by finding the length (2a) that gives a relaxation time of 100 seconds as a function of width to length ratio (b∕a) for parallelepipeds of magnetite (heavy blue line in Figure 4.11). To do this, we follow the logic of Evans and McElhinny (1969) and Butler and Banerjee (1975). In this Evans diagram, we estimated relaxation time using Equation 4.11, plugging in values of K as either the magnetocrystalline effective anisotropy constant (K1∕12) or the shape anisotropy constant ((1∕2)ΔNμoM2), whichever was less. We also show the curve at which relaxation time is equal to 1 Gyr, reinforcing the point that very small changes in crystal size and shape make profound differences in relaxation time. The figure also predicts the boundary between the single domain field and the two domain field, where the energy of a domain wall is less than the self energy of a uniformly magnetized particle. This can be done by evaluating the wall energy with Equation 4.9 for a wall along the length of a parallelepiped with area 4ab, and comparing it to the self energy ((1∕2)μoNaM2v) for a given length and width to length ratio. When the wall energy is less than the self energy, we are in the two domain field.
Figure 4.11 suggests that there is virtually no SD stability field for equant magnetite; particles are either SP or MD (multi-domain). As the width to length ratio decreases (the particle gets longer), the stability field for SD magnetite expands. Of course micromagnetic modelling shows that there are several transitional states between uniform magnetization (SD) and MD, i.e., the flower and vortex remanent states (see Fabian et al., 1996), but Figure 4.11 has enormous predictive power, and the version of Butler and Banerjee (1975) (which is slightly different in detail) continues to be used extensively. It is worth pointing out, however, that the size at which domain walls appear in magnetite is poorly constrained because it depends critically on the exact shape of the particle, its state of stress and even its history of exposure to past fields. Estimates in the literature range from as small as 20 nm to much larger (up to 100 nm) depending on how the estimates are made. Nonetheless, it is probably true that truly single domain magnetite is quite rare in nature, yet more complicated states are difficult to treat theoretically. Therefore most paleomagnetic studies rely on predictions made for single domain particles.
SUPPLEMENTAL READING: Dunlop and Özdemir (1997), Chapters 2.8 and 5.
Problem 1

Assume that the magnetization of magnetite is about 4.8 x 105 Am−1. Using values for other parameters from the text, write a Python program to calculate the following:
a) Self energy (or magnetostatic energy) for a sphere 1, 10 and 100 μm in diameter. [Hint: see Equation 4.12 below for the ‘self’ energy density. Also, remember the difference between energy and energy density!]
b) Magnetostatic (shape) anisotropy energy for an ellipsoid whose major semi-axis is 1 μm and whose minor semi-axes are each 0.25 μm. You may use the “nearly spherical” approximation in the text.
c) The critical radius of a sphere at which wall energy equals self energy.
Problem 2

Calculate the grain diameter for magnetite spheres with relaxation times (τ) of 10−1, 10, 102, 103, 105, 109, and 1015 seconds. Use values for Boltzmann’s constant, C (the frequency factor) and |K1| at room temperature (300 K).
Problem 3 [From Jeff Gee]
a) Consider a highly elongate rod (needle-shaped grain) of magnetite. Explain why the demagnetizing factor along the long axis of the rod is about zero while that across the long axis is about one half.
b) The file Chapter_4/prolate.txt gives the values of demagnetizing factors for a prolate ellipsoid (with axes a > b = c). For an elongate rod of magnetite with the range of aspect ratios (AR = c:b) provided in the table, plot the magnetostatic self energy density in the absence of an external field. Use this plot to estimate the aspect ratio at which shape anisotropy will equal that of magnetocrystalline anisotropy (use a value of K1 at room temperature (300 K) of -1.43 x 104 J/m3).
c) What is the maximum microscopic coercivity (Hk) for such an elongate grain of magnetite (assume an infinitely long grain)? Coercivities are more commonly reported in units of T so provide this corresponding value as well.
The ease with which particles can be coerced into changing their magnetizations in response to external fields can tell us much about the overall stability of the particles, and perhaps also something about their ability to carry a magnetic remanence over the long haul. Long-term stability (embodied in the relaxation time) and the response of magnetic particles to external magnetic fields are therefore linked through the anisotropy energy constant K (see Chapter 4), which dictates the magnetic response of particles to changes in the external field. This chapter will focus on the response of magnetic particles to changing external magnetic fields.
Magnetic remanence is the magnetization remaining in the absence of an external magnetic field. Imagine a particle with a single “easy” axis – a so-called “uniaxial” particle with magnetic anisotropy constant Ku. The magnetic energy density (energy per unit volume) of such a particle whose magnetic moment makes an angle θ to the easy axis direction (Figure 5.1a) can be expressed as:
As the moment swings around at angle θ to the easy axis, the anisotropy energy density ϵa will change as sketched in Figure 5.1b. The energy minima occur when the moment is aligned with the easy axis (an axis means either direction along the axis, so we pick one direction as being 0 and the other as 180∘). In the absence of a magnetic field, the moment will lie along one of these two directions. [In reality, thermal energy will perturb this direction somewhat, depending on the balance of anisotropy to thermal energy, but for the present discussion, we are assuming that thermal energy can be neglected.]
When an external field is applied at an angle ϕ to the easy axis (and an angle ϕ − θ with the magnetic moment; see Figure 5.1a), the magnetostatic interaction energy density ϵm is given by the dot product of the magnetization and the applied field (Equation 4.1 in Chapter 4), or:
The two energy densities (ϵa and ϵm) are shown as the thin solid and dashed lines in Figure 5.1c for an applied field of 30 mT aligned with an angle of 45∘ to the easy axis. There is a competition between the anisotropy energy (tending to keep the magnetization parallel to the easy axis) and the interaction energy (tending to line the magnetization up with the external magnetic field). Assuming that the magnetization is at saturation, we get the total energy density of the particle to be:
The total energy density ϵt is shown as the heavy solid line in Figure 5.1c.
The magnetic moment of a uniaxial single domain grain will find the angle θ that is associated with the minimum total energy density (ϵmin; see Figure 5.1b,c). For low external fields, θ will be closer to the easy axis and for higher external fields (e.g., 30 mT; Figure 5.1c), θ will be closer to the applied field direction (ϕ).
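This competition can be explored numerically. The sketch below assumes the total energy density has the form ϵt = Ku sin2θ − MsB cos(ϕ − θ) (our reading of Equations 5.1-5.3; the magnetite values are assumptions) and finds the minimizing θ by brute-force search:

```python
import math

Ku = 1.4e4   # J/m^3, uniaxial anisotropy constant (assumed, from the text)
Ms = 4.8e5   # A/m, saturation magnetization of magnetite

def total_energy(theta, phi, B):
    # anisotropy term plus field-interaction term; this form of
    # Equation 5.1 is our assumption based on the surrounding text
    return Ku * math.sin(theta)**2 - Ms * B * math.cos(phi - theta)

def theta_min(phi, B, n=10000):
    # brute-force search for the energy minimum over 0 <= theta <= pi
    thetas = [math.pi * i / n for i in range(n + 1)]
    return min(thetas, key=lambda t: total_energy(t, phi, B))

phi = math.radians(45)
print(math.degrees(theta_min(phi, 0.005)))  # low field: near the easy axis
print(math.degrees(theta_min(phi, 0.030)))  # 30 mT: pulled toward phi
```

As the field increases, the minimizing angle migrates from near the easy axis (θ = 0) toward the applied field direction (ϕ = 45∘), as described above.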
When a magnetic field that is large enough to overcome the anisotropy energy is applied in a direction opposite to the magnetization vector, the moment will jump over the energy barrier and stay in the opposite direction when the field is switched off. The field necessary to accomplish this feat is called the flipping field (μoHf) (also sometimes called the “switching field”). [Note the change to the use of H for internal fields where M cannot be considered zero.] We introduced this parameter in Chapter 4 (see Equation 4.8) as the microscopic coercivity. Stoner and Wohlfarth (1948) showed that the flipping field can be found from the condition that dϵt∕dθ = 0 and d2ϵt∕dθ2 = 0. We will call this the “flipping condition”. The necessary equations can be found by differentiating Equation 5.1:
Solving these two equations for B and substituting μoH for B, we get after some trigonometric trickery:
where t = tan1∕3ϕ (the cube root of tanϕ). In this equation, ϕ is the angle between the applied field and the easy axis direction opposite to m.
Now we can derive the so-called “microscopic coercivity” (Hk) introduced in Section 4.1.6 in Chapter 4. Microscopic coercivity is the maximum flipping field for a particle. When the magnetic anisotropy of a particle is dominated by the uniaxial anisotropy constant Ku and ϕ is zero (antiparallel to the easy direction nearest the moment), μoHk = 2Ku∕Ms. Using the values appropriate for magnetite (Ku = 1.4 x 104 Jm−3 and Ms = 480 kAm−1), we get μoHk = 58 mT. To see why this would indeed result in a flipped moment, we plot the behavior of Equations 5.1 - 5.3 in Figure 5.2. The minimum in total energy ϵt occurs at an angle of θ = 180∘ (Figure 5.2a), and the first and second derivatives satisfy the flipping condition by having a common zero crossing (θ = 0 in Figure 5.2b). There is no other applied field value for which this is true (see, e.g., the case of a 30 mT field in Figure 5.2c,d).
The flipping condition depends not only on the magnitude of the applied field but also on the direction it makes with the easy axis (see μoHf versus ϕ in Figure 5.3). When the field is applied parallel to the easy axis (ϕ = 0) and anti-parallel to m, μoHf is 58 mT, as we found before. μoHf drops steadily as the angle between the field and the easy axis increases, until an angle of 45∘, when μoHf starts to increase again. According to Equation 5.4, μoHf is undefined when ϕ = 90∘; so when the field is applied at right angles to the easy axis, there is no field sufficient to flip the moment.
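The curve in Figure 5.3 can be reproduced from the Stoner-Wohlfarth astroid, which we take to be μoHf = μoHk∕(cos2∕3ϕ + sin2∕3ϕ)3∕2, an equivalent form of Equation 5.4 (the magnetite values below are assumptions carried over from above). Note that at exactly ϕ = 90∘ no finite field flips the moment, as the text points out:

```python
import math

Ku = 1.4e4   # J/m^3 (assumed uniaxial constant, as in the text)
Ms = 4.8e5   # A/m, magnetite

def flipping_field(phi):
    """Stoner-Wohlfarth flipping field (tesla) for a field applied at
    angle phi (radians, 0 <= phi < pi/2) to the easy axis; astroid form."""
    hk = 2 * Ku / Ms   # mu0*Hk in tesla
    return hk / (math.cos(phi)**(2 / 3) + math.sin(phi)**(2 / 3))**1.5

print(flipping_field(0.0))               # ~0.058 T (58 mT)
print(flipping_field(math.radians(45)))  # minimum: exactly half of mu0*Hk
print(flipping_field(math.radians(22)))  # similar to the 70-degree value
print(flipping_field(math.radians(70)))
```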
In this section we will develop the theory for predicting the response of substances to the application of external fields, in experiments that generate hysteresis loops. We will define a number of parameters which are useful in rock and paleomagnetism. For computational details in estimating these parameters from hysteresis data, see Appendix C.1.
Let us begin by considering what happens to single particles when subjected to applied fields in the cycle known as the hysteresis loop. From the last section, we know that when a single domain, uniaxial particle is subjected to an increasing magnetic field the magnetization is gradually drawn into the direction of the applied field. If the flipping condition is not met, then the magnetization will return to the original direction when the magnetic field is removed. If the flipping condition is met, then the magnetization undergoes an irreversible change and will be in the opposite direction when the magnetic field is removed.
Imagine a single domain particle with uniaxial anisotropy. Because the particle is single domain, the magnetization is at saturation and, in the absence of an applied field, is constrained to lie along the easy axis. Now suppose we apply a magnetic field in the opposite direction (see track #1 in Figure 5.4a). When B reaches μoHf in magnitude, the magnetization flips to the opposite direction (track #2 in Figure 5.4) and will not change further regardless of how high the field goes. The field is then decreased to zero and then increased along track #3 in Figure 5.4 until μoHf is reached again. The magnetization then flips back to the original direction (track #4 in Figure 5.4a).
Applying fields at arbitrary angles to the easy axis results in loops of various shapes (see Figure 5.4b). As ϕ approaches 90∘, the loops become thinner. Remember that the flipping fields for ϕ = 22∘ and ϕ = 70∘ are similar (see Figure 5.3) and are lower than that when ϕ = 0∘, but the flipping field for ϕ = 90∘ is infinite, so that “loop” is closed and completely reversible.
Before we go on, it is useful to consider for a moment how hysteresis measurements are made in practice. Measurements of magnetic moment m as a function of applied field B are made on a variety of instruments, such as a vibrating sample magnetometer (VSM) or alternating gradient force magnetometer (AGFM). In the latter, a specimen is placed on a thin stalk between pole pieces of a large magnet. There is a probe mounted behind the specimen that measures the applied magnetic field. There are small coils on the pole pieces that modulate the gradient of the applied magnetic field (hence alternating gradient force). The specimen vibrates in response to changing magnetic fields and the amplitude of the vibration is proportional to the moment in the axis of the applied field direction. The vibration of the specimen stalk is measured and calibrated in terms of magnetic moment. The magnetometer is only sensitive to the induced component of m parallel to the applied field Bo, which is m|| = mcosϕ (because the off axis terms are squared and very small, hence can be neglected.) In the hysteresis experiment, therefore, the moment parallel to the field m|| is measured as a function of applied field B.
In rocks with an assemblage of randomly oriented particles with uniaxial anisotropy, we would measure the sum of all the millions of tiny individual loops. A specimen from such a rock would yield a loop similar to that shown in Figure 5.5a. If the field is first applied to a demagnetized specimen, the initial slope is the (low field) magnetic susceptibility (χlf) first introduced in Chapter 1. From the treatment in Section 5.1 it is possible to derive the equation χlf = μoMs2∕(3Ku) for this initial (ferromagnetic) susceptibility (for more, see O’Reilly, 1984).
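Plugging in representative magnetite values (assumptions; Ku here is the uniaxial constant used earlier in the chapter) gives a ferromagnetic susceptibility of order 10:

```python
import math

mu0 = 4 * math.pi * 1e-7
Ms = 4.8e5   # A/m, magnetite
Ku = 1.4e4   # J/m^3 (assumed uniaxial anisotropy constant)

# Initial ferromagnetic susceptibility of a random uniaxial assemblage
chi_lf = mu0 * Ms**2 / (3 * Ku)
print(chi_lf)   # ~7 (dimensionless, SI)
```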
If the field is increased beyond the flipping fields of some of the magnetic grains and then returned to zero, the net remanence is called an isothermal remanent magnetization (IRM). If the field is increased to +Bmax, all the magnetizations are drawn into the field direction, and the net magnetization is equal to the sum of all the individual magnetizations and is the saturation magnetization Ms. When the field is reduced to zero, the moments relax back to their individual easy axes, many of which are at a high angle to the direction of the saturating field and cancel each other out. A loop that does not reach the saturating field (red in Figure 5.5a) is called a minor hysteresis loop, while one that does is called the outer loop.
The net remanence after saturation is termed the saturation remanent magnetization Mr (and sometimes the saturation isothermal remanence sIRM). For a random assemblage of single domain uniaxial particles, Mr∕Ms = 0.5. The field necessary to reduce the net moment to zero is defined as the coercive field (μoHc) (or coercivity).
The coercivity of remanence μoHcr is defined as the magnetic field required to irreversibly flip half the magnetic moments (so the net remanence after application of a field equal to −μoHcr to a saturation remanence is 0). The coercivity of remanence is always greater than or equal to the coercivity, and the ratio Hcr∕Hc for our random assemblage of uniaxial SD particles is 1.09 (Wohlfarth, 1958). Here we introduce two ways of estimating the coercivity of remanence, illustrated in Figure 5.5. Suppose that, after taking the field up to some saturating field +Bmax, one turned the field off (the descending curve), increased the field in the opposite direction to the point labeled μoH′cr, and then switched the field off again; the magnetization would follow the dashed curve up to the origin. For single domain grains, the dashed curve would be parallel to the lower (ascending) curve. So, if one only measured the outer loop, one could estimate the coercivity of remanence by simply tracing a curve parallel to the lower curve (dashed line) from the origin to the point of intersection with the upper curve (circled in Figure 5.5a). This estimate is only valid for single domain grains, hence the prime in μoH′cr.
An alternative means of estimating coercivity of remanence is to use a so-called ΔM curve (Jackson et al., 1990) which is obtained by subtracting the ascending loop from the descending loop (see Figure 5.5b). When all the moments are flipped into the new field, the ascending and descending loops join together and ΔM is 0. ΔM is at 50% of its initial value at the field at which half the moments are flipped (the definition of coercivity of remanence); this field is here termed μoHcr.
Figure 5.5a is the loop created in the idealized case in which only uniaxial ferromagnetic particles participated in the hysteresis measurements; in fact, the curve is entirely theoretical. In “real” specimens there can be paramagnetic, diamagnetic AND ferromagnetic particles, and the loop may well look like that shown in Figure 5.6. The initial slope of a hysteresis experiment starting from a demagnetized state, in which the field is ramped from zero up to higher values, is the low field magnetic susceptibility χlf (see Figure 5.6). If the field is then turned off, the magnetization will return again to zero. But once the field increases past the lowest flipping field, the remanence will no longer be zero but some isothermal remanence. Once all particle moments have flipped and saturation magnetization has been achieved, the slope relating magnetization and applied field reflects only the non-ferromagnetic (paramagnetic and/or diamagnetic) susceptibility, here called the high field susceptibility χhf. In order to estimate the saturation magnetization and the saturation remanence, we must first subtract the high field slope. Doing so gives us the blue dashed line in Figure 5.6, from which we may read the various hysteresis parameters illustrated in Figure 5.5b.
In the case of equant grains of magnetite for which magnetocrystalline anisotropy dominates, there are four easy axes, instead of two as in the uniaxial case (see Chapter 4). The maximum angle ϕ between an easy axis and an applied field direction is 55∘. Hence there is no individual loop that goes through the origin (see Figure 5.7). A random assemblage of particles with cubic anisotropy will therefore have a much higher saturation remanence. In fact, the theoretical ratio of Mr∕Ms for such an assemblage is 0.87, as opposed to 0.5 for the uniaxial case (Joffe and Heuberger, 1974).
In superparamagnetic (SP) particles, the total magnetic energy Et = ϵtv (where v is volume) is balanced by thermal energy kT. This behavior can be modeled using statistical mechanics in a manner similar to that derived for paramagnetic grains in Section 3.2.2 in Chapter 3 and summarized in Appendix A.2.2. In fact,
where γ = MsBv∕kT and N is the number of particles of volume v, is a reasonable approximation. The end result, Equation 5.5, is the familiar Langevin function from our discussion of paramagnetic behavior (see Chapter 3); hence the term “superparamagnetic” for such particles.
The contribution of SP particles for which the Langevin function is valid with given Ms and d is shown in Figure 5.8a. The field at which the population reaches 90% saturation, B90, occurs at γ ∼ 10. Assuming particles of magnetite (Ms = 480 kAm−1) and room temperature (T = 300 K), B90 can be evaluated as a function of d (see Figure 5.8b). Because of its inverse cubic dependence on d, B90 rises sharply with decreasing d and is hundreds of tesla for particles a few nanometers in size, approaching paramagnetic values. B90 is a quick guide to the SP slope (the SP susceptibility χsp) contributing to the hysteresis response and was used by Tauxe et al. (1996) as a means of explaining distorted loops sometimes observed for populations of SD/SP mixtures. B90 (and χsp) is very sensitive to particle size, with very steep slopes for particles at the SP/SD threshold. The exact threshold size is still rather controversial, but Tauxe et al. (1996) argue that it is ∼ 20 nm.
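The inverse-cubic dependence can be made concrete with a small sketch evaluating B90 = 10kT∕(Msv) for spherical magnetite particles; the 90%-saturation criterion γ ∼ 10 is taken from the text, while the function name and sphere assumption are illustrative:

```python
import numpy as np

KB = 1.381e-23        # Boltzmann constant (J/K)
MS = 4.8e5            # Ms of magnetite (A/m)
T = 300.0             # room temperature (K)

def b90(d):
    """Field (tesla) at which an SP population of magnetite spheres of
    diameter d (in meters) reaches 90% saturation, using gamma ~ 10."""
    v = np.pi * d**3 / 6.0            # particle volume of a sphere
    return 10.0 * KB * T / (MS * v)   # B90 goes as 1/d^3

for d_nm in (5, 10, 20):
    print(d_nm, "nm:", b90(d_nm * 1e-9), "T")   # B90 falls rapidly with size
```

Halving the diameter raises B90 by a factor of eight, which is why loops containing fine SP particles can be strongly distorted.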
For low magnetic fields, the Langevin function can be approximated as ∼ γ∕3. So we have:
Rearranging Equation 4.11 in Chapter 4 to solve for the volume at which a uniaxial grain passes through the superparamagnetic threshold, we find:
Comparing this expression with that derived for ferromagnetic susceptibility in Section 5.2.1, we find that χsp is a factor of ln(Cτ) ≃ 27 larger than the equivalent single domain particle.
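The factor ln(Cτ) ≃ 27 follows from typical values of the frequency factor C and observation time τ; the numbers below (C ∼ 10^10 s−1, τ ∼ 100 s) are assumed typical values (cf. Chapter 4), not quantities fixed by the text:

```python
import numpy as np

C = 1e10     # frequency factor in 1/s (assumed typical value; see Chapter 4)
tau = 100.0  # characteristic observation time in seconds
print(np.log(C * tau))   # ln(1e12) = 27.6, i.e. the factor of ~27 in the text
```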
Moving domain walls around is much easier than flipping the magnetization of an entire particle coherently. The reason for this is the same as the reason that it is easier to move a rug by lifting up a small wrinkle and pushing that through the rug, than to drag the whole rug by the same amount. Because of the greater ease of changing magnetic moments in multidomain (MD) grains, they have lower coercive fields, and saturation remanence is also much lower than for uniformly magnetized particles (see the typical MD hysteresis loop in Figure 5.9a).
The key to understanding multi-domain hysteresis is the reduction in multi-domain magnetic susceptibility χmd from “true” magnetic susceptibility (χi) because of self-demagnetization. The true susceptibility would be that obtained by measuring the magnetic response of a particle to the internal field Hi (applied field minus the demagnetizing field −NM – see Section 4.1.5; see Dunlop 2002a). Recalling that the demagnetizing factor is N, the so-called screening factor fs is (1 + Nχi)−1 and χmd = fsχi. If we assume that χmd is linear for fields less than the coercivity, then by definition χmd = Mr∕Hc (see Figure 5.9b). From this, we get:
By a similar argument, the coercivity is suppressed by the screening factor relative to the coercivity of remanence (Hcr), so:
Putting all this together leads us to the remarkable relationship noted by Day et al. (1977; see also Dunlop 2002a):
When χiHc∕Ms is constant, Equation 5.8 is a hyperbola. For a single mineralogy, we can expect Ms to be constant, but Hc depends on grain size and the state of stress, which are unlikely to be constant for any natural population of magnetic grains. Dunlop (2002a) argues that if the main control on susceptibility and coercivity is domain wall motion through a terrain of variable wall energies, then χi and Hc would be inversely related, and gives a tentative theoretical value for χiHc in magnetite of about 45 kAm−1. This, combined with the value of Ms for magnetite of 480 kAm−1, gives a value for χiHc∕Ms ∼ 0.1. When anchored by the theoretical maximum for the uniaxial single domain ratio of Mr∕Ms = 0.5, we get the curve shown in Figure 5.9c. The major control on coercivity is grain size, so the trend from the SD limit down toward low Mr∕Ms ratios is one of increasing grain size.
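Combining the screening relations above (Mr = fsχiHc and Hcr = Hc∕fs) gives (Mr∕Ms)(Hcr∕Hc) = χiHc∕Ms, so the SD-to-MD trend can be sketched as a hyperbola capped at the uniaxial SD maximum. The function name and the hard cap at 0.5 are illustrative assumptions:

```python
import numpy as np

def day_curve(hcr_hc, chi_hc_over_ms=0.1, sd_limit=0.5):
    """Hyperbolic SD-to-MD trend on the Day diagram implied by
    (Mr/Ms)*(Hcr/Hc) = chi_i*Hc/Ms (~0.1 for magnetite; Dunlop 2002a),
    anchored at the uniaxial SD maximum Mr/Ms = 0.5."""
    return np.minimum(sd_limit, chi_hc_over_ms / hcr_hc)

# a few points along the curve: SD-like ratios on the left, MD on the right
for r in (0.2, 1.09, 2.0, 5.0):
    print(r, float(day_curve(r)))
```

At Hcr∕Hc = 5 the curve gives Mr∕Ms = 0.02, consistent with the MD corner of the Day diagram.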
There are several possible causes of variability in wall energy within a magnetic grain, for example, voids, lattice dislocations, stress, etc. The effect of voids is perhaps the easiest to visualize, so we will consider voids as an example of why wall energy varies as a function of position within the grain. We show a particle with lamellar domain structure and several voids in Figure 5.10. When the void occurs within a uniformly magnetized domain (left of the figure), the void sets up a demagnetizing field as a result of the free poles on the surface of the void. There is, therefore, a self-energy associated with the void. When the void is traversed by a wall, the free pole area is reduced, reducing the demagnetizing field and the associated self-energy. Therefore, the energy of the void is reduced by having a wall bisect it. Furthermore, the energy of the wall is also reduced, because the area of the wall in which magnetization vectors are tormented by exchange and magnetocrystalline energies is reduced. The wall gets a “free” spot if it bisects a void. The wall energy Ew therefore is lower as a result of the void.
In Figure 5.11, we show a sketch of a hypothetical transect of Ew across a particle. There are four LEMs labelled a-d. Domain walls will distribute themselves throughout the grain in order to minimize the net magnetization of the grain and also to try to take advantage of LEMs in wall energy.
Domain walls move in response to external magnetic fields (see Figure 5.11b-g). Starting in the demagnetized state (Figure 5.11b), we apply a magnetic field that increases to saturation (Figure 5.11c). As the field increases, the domain walls move in sudden jerks as each successive local wall energy high is overcome. This process, known as Barkhausen jumps, leads to the stair-step-like increases in magnetization (shown in the inset of Figure 5.11g). At saturation, all the walls have been flushed out of the crystal and it is uniformly magnetized. When the field decreases again, to say +3 mT (Figure 5.11d), domain walls begin to nucleate, but because the energy of nucleation is larger than the energy of denucleation, the grain is not as effective in cancelling out the net magnetization, hence there is a net saturation remanence (Figure 5.11e). The walls migrate around as a magnetic field is applied in the opposite direction (Figure 5.11f) until there is no net magnetization. The difference in nucleation and denucleation energies was called upon by Halgedahl and Fuller (1983) to explain the high stability observed in some large magnetic grains.
Day et al. (1977) popularized the use of diagrams like that shown in Figure 5.9c which are known as Day diagrams. They placed quasi-theoretical bounds on the plot whereby points with Mr∕Ms ratios above 0.5 were labelled single domain (SD), and points falling in the box bounded by 0.5 > Mr∕Ms > 0.05 and 1.5 < Hcr∕Hc < 5 were labelled pseudo-single domain (PSD). Points with Mr∕Ms below 0.05 were labelled multi-domain (MD). This paper has been cited over 800 times in the literature and the Day plot still serves as the principal way that rock and paleomagnetists determine domain state and grain size.
The problem with the Day diagram is that virtually all paleomagnetically useful specimens yield hysteresis ratios that fall within the PSD box. In the early 90s, paleomagnetists began to realize that many things besides the trend from SD to MD behavior control where points fall on the Day diagram. Pick and Tauxe (1994) pointed out that mixtures of SP and SD grains would have reduced Mr∕Ms ratios and enhanced Hcr∕Hc ratios. Tauxe et al. (1996) modelled distributions of SP/SD particles and showed that the SP-SD trends always fall above those observed from MD particles (modelled in Figure 5.9c).
Dunlop (2002a) argued that because Mr for SP grains is zero, the suppression of the ratio Mr∕Ms is directly proportional to the volume fraction of the SP particles. Moreover, coercivity of remanence remains unchanged, as it is entirely due to the non-SP fraction. Deriving the relationship of coercivity, however, is not so simple. It depends on the superparamagnetic susceptibility (χsp), which in turn depends on the size of the particle and also the applied field (see Section 5.2.4). In his simplified approach, Dunlop could only use a single (small) grain size, whereas in natural samples, there will always be a distribution of grain sizes. It is also important to remember that volume goes as the cube of the radius, and for a mixture to display any SP suppression of Mr∕Ms, almost all of the particles must be SP. It is impossible that these would all be of a single radius (say 10 or 15 nm); there must be a distribution of sizes. Moreover, Dunlop (2002a) neglected the complication in SP behavior as the particles reach the SD threshold size, whereas it is expected that many (if not most) natural samples containing both SP and SD grain sizes will have a large volume fraction of the largest SP sizes, making their neglect problematic.
Hysteresis ratios of mixtures of SD and MD particles will also plot in the “PSD” box. Dunlop (2002a) derived the theoretical behavior of such mixtures on the Day diagram. The key equations are 1) Equation 9 from Dunlop (2002a), which governs the behavior of the ratio Mr∕Ms as a function of the volume fraction of single domain material (fSD) and multi-domain material (fMD):
2) Equation 10 from Dunlop (2002a) which governs the behavior of coercivity:
and 3) Equation 11 from Dunlop (2002a) which governs the behavior of coercivity of remanence in SD/MD mixtures:
where χSD and χMD are the susceptibilities of the SD and MD fractions respectively and (χr)SD and (χr)MD are the Mr vs Hcr slopes of the SD and MD remanences respectively. What we need to calculate the SD/MD mixing curve are values for the various parameters for single domain and multi domain end-members. These were measured empirically for the MV1H bacterial magnetosomes (see Chapter 6) and commercial magnetite (041183 of Wright Company) by Dunlop and Carter-Stiglitz (2006) and shown in Table ??. Using the linear mixing model of Dunlop (2002a), we plot the theoretical mixing curve predicted for these empirically constrained end-members as the heavy red line in Figure 5.9c.
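The mixing calculation can be sketched as follows. The weighting forms below are reconstructions of Dunlop's Equations 9-11 (volume-fraction weighting for Mr∕Ms; susceptibility-weighted means for Hc and Hcr), and the end-member numbers are placeholders, not the measured values from the table:

```python
# Hypothetical end-member parameters (placeholders only; the empirically
# constrained values of Dunlop and Carter-Stiglitz, 2006 are not quoted here)
SD = dict(mrms=0.50, chi=5.0, chir=1.0, hc=30.0, hcr=35.0)
MD = dict(mrms=0.02, chi=4.0, chir=0.1, hc=3.0, hcr=12.0)

def mix(f_sd):
    """Linear SD/MD mixing in the spirit of Dunlop (2002a, Eqs. 9-11);
    the exact weighting forms here are reconstructed, not quoted."""
    f_md = 1.0 - f_sd
    mrms = f_sd * SD['mrms'] + f_md * MD['mrms']                        # cf. Eq. 9
    hc = ((f_sd * SD['chi'] * SD['hc'] + f_md * MD['chi'] * MD['hc'])
          / (f_sd * SD['chi'] + f_md * MD['chi']))                      # cf. Eq. 10
    hcr = ((f_sd * SD['chir'] * SD['hcr'] + f_md * MD['chir'] * MD['hcr'])
           / (f_sd * SD['chir'] + f_md * MD['chir']))                   # cf. Eq. 11
    return mrms, hcr / hc

for f in (1.0, 0.75, 0.5, 0.25, 0.0):
    print(f, mix(f))   # mixing curve runs from the SD corner to the MD corner
```

Note that Mr∕Ms mixes linearly in volume fraction, whereas the coercivities do not, which is why SD/MD mixing curves on the Day diagram are strongly curved.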
SD/MD | Mr∕Ms | χ (Am−1T−1) | χr (MAm−1T−1) | μoHc (mT) | μoHcr (mT)
If a population of SD particles is so closely packed as to influence one another, there will be an effect of particle interaction. This will also tend to suppress the Mr∕Ms ratio, drawing the hysteresis ratios down into the PSD box. Finally, the PSD box could be populated by pseudo-single domain grains themselves. Here we will dwell for a moment on the meaning of the term “pseudo-single domain”, which has evolved from the original definition posed by Stacey (1961; see discussion in Tauxe et al. 2002). In an attempt to explain trends in TRM acquisition, Stacey envisioned that irregular shapes caused unequal domain sizes, which would give rise to a net moment that was less than the single domain value, but considerably higher than the very low efficiency expected for large MD grains. The modern interpretation of PSD behavior is complicated micromagnetic structures that form between classic SD (uniformly magnetized grains) and MD (domain walls) such as the flower or vortex remanent states (see, e.g., Figure 4.5 in Chapter 4). Taking all these factors into account means that interpretation of the Day diagram is far from unique. The simple calculations of Dunlop (2002a) are likely to be inappropriate for almost all natural samples.
Hysteresis loops can yield a tremendous amount of information yet much of this is lost by simply estimating the set of parameters Mr,Ms,Hcr,Hc,χi,χhf, etc. Mayergoyz (1986) developed a method using what are known as First Order Reversal Curves or FORCs to represent hysteresis data. The most recent way of dealing with FORCs is that of Harrison and Feinberg (2008) which is illustrated in Figure 5.12. In the FORC experiment, a specimen is subjected to a saturating field, as in most hysteresis experiments. The field is lowered to some field μoHa, then increased again through some value μoHb to saturation (see Figure 5.12a). The magnetization curve between μoHa and μoHb is a “FORC”. A series of FORCs (see Figure 5.12b) can be generated to the desired resolution.
To transform FORC data into some useful form, Harrison and Feinberg (2008) use a locally-weighted regression smoothing technique (LOESS). For a given measurement point P, LOESS fits a second-order polynomial function of the form

M(Ha,Hb) = a1 + a2Ha + a3Ha² + a4Hb + a5Hb² + a6HaHb
to the measured magnetization surface in a specified region (for example the circle shown in Figure 5.12b) where the ai are fitted coefficients. The LOESS technique takes a user-defined number of the nearest neighbors (see inset to Figure 5.12b) for an arbitrarily shaped region over which the data are smoothed. The coefficient −a6(Ha,Hb) is the FORC density at the point. A FORC diagram is the contour plot of the FORC densities, rotated such that μoHc = μo(Hb − Ha)∕2 and μoHu = μo(Ha + Hb)∕2. Please note that because Ha < Hb, data are only possible for positive Hc.
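As a concrete sketch, the second-order surface can be fit by unweighted least squares over a patch of measurements and −a6 read off as the FORC density. Real LOESS applies distance weights to the nearest neighbors; that weighting is omitted here, and the synthetic surface is purely illustrative:

```python
import numpy as np

def forc_density(ha, hb, m):
    """FORC density at the centre of a local patch: fit
    M = a1 + a2*Ha + a3*Ha^2 + a4*Hb + a5*Hb^2 + a6*Ha*Hb
    by least squares and return -a6 (unweighted simplification of the
    LOESS fit of Harrison and Feinberg, 2008)."""
    A = np.column_stack([np.ones_like(ha), ha, ha**2, hb, hb**2, ha * hb])
    coef, *_ = np.linalg.lstsq(A, m, rcond=None)
    return -coef[5]

# synthetic noise-free patch with a known mixed term: M = 2 + Ha - 3*Ha*Hb
rng = np.random.default_rng(0)
ha = rng.uniform(-1, 1, 50)
hb = rng.uniform(-1, 1, 50)
m = 2.0 + ha - 3.0 * ha * hb
print(forc_density(ha, hb, m))   # recovers -(-3) = 3 for this surface
```

In a full FORC calculation this fit is repeated at every grid point (Ha, Hb) to build the density surface that is then contoured.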
Imagine we travel down the descending magnetization curve (dashed line in Figure 5.12a) to a particular field μoHa less than the smallest flipping field in the assemblage. If the particles are single domain, the behavior is reversible and the first FORC will travel back up the descending curve. It is only when |μoHa| exceeds the flipping field of some of the particles that the FORC will trace a new curve on the inside of the hysteresis loop. In the simple single domain, non-interacting, uniaxial magnetite case, the FORC density in the quadrants where Ha and Hb are of the same sign must be zero. Indeed, FORC densities will only be non-zero for the range of flipping fields because these are the bounds of the flipping field distribution. So the diagram in Figure 5.12c is nearly that of an ideal uniaxial SD distribution.
Consider now the case in which a specimen has magnetic grains with non-uniform magnetizations such as vortex structures or domain walls. Moving walls and vortices is much easier than flipping the moment of an entire grain coherently. In fact, they begin to move in small jumps (from LEM to LEM) as soon as the applied field changes. If a structure nucleates while the field is decreasing and the field is then ramped back up, the magnetization curve will not be reversible, even though the field never changed sign or approached the flipping field for coherent rotation. The resulting FORC for such behavior would have much of the “action” in the region where Ha is positive. When transformed to Hu and Hc, the diagram will have the high densities for small Hc but over a range of ±Hu. The example shown in Figure 5.13 is of a specimen that has been characterized as “pseudo-single domain”. The FORC diagram in Figure 5.13b has some of the FORC densities concentrated along the Hc axis characteristic of single domain specimens (e.g., Figure 5.12c), but there is also concentration along the Hu axis characteristic of PSD and MD specimens.
In many cases the most interesting thing one learns from FORC diagrams is the degree to which there is irreversible behavior when the field is reduced to zero and then ramped back up to saturation (see Figure 5.14). Such irreversible behavior in what Yu and Tauxe (2005) call the “Zero FORC” or ZFORC can arise from particle interactions, domain wall jumps, or from the formation and destruction of vortex structures in the magnetic grains.
Fabian (2003) defined a parameter called “transient hysteresis” which is the area between the ascending and descending loops of a ZFORC (shaded area in Figure 5.14). This is defined as:
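The area between the ZFORC branches can be sketched numerically. The exponential “transient” term added to the descending branch below is purely illustrative, and the integration from zero field to Bmax is an assumed reading of Fabian's definition:

```python
import numpy as np

def transient_hysteresis(B, M_desc, M_asc):
    """Transient hysteresis (after Fabian, 2003): the area between the
    descending and ascending branches of a ZFORC, from zero field to Bmax."""
    dM = M_desc - M_asc
    return float(np.sum(0.5 * (dM[1:] + dM[:-1]) * np.diff(B)))  # trapezoid rule

# hypothetical ZFORC branches (tesla); the shapes are illustrative only
B = np.linspace(0.0, 0.3, 301)
M_desc = np.tanh(B / 0.05) + 0.1 * np.exp(-B / 0.05)  # extra low-field transient
M_asc = np.tanh(B / 0.05)
print(transient_hysteresis(B, M_desc, M_asc))         # positive area -> irreversible
```

For perfectly reversible behavior the two branches coincide and the transient hysteresis is zero, so the parameter isolates the irreversible (wall/vortex) component.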
SUPPLEMENTAL READING: Dunlop and Özdemir (1997), chapters 5 and 11; O’Reilly (1984), pp 69-87; Dunlop (2002a,b)
For a grain with uniaxial anisotropy in an external field, the direction of magnetization in this grain will be controlled entirely by the uniaxial anisotropy energy density ϵa and the magnetic interaction energy ϵm. The total energy can be written:
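In the usual Stoner–Wohlfarth form (a reconstruction consistent with Chapter 4, with the angle conventions assumed as follows: θ is the angle of the magnetization from the easy axis and φ is the angle of the applied field B from the easy axis), this total energy density is:

```latex
\epsilon_t = \epsilon_a + \epsilon_m = K_u \sin^2\theta \;-\; M_s B \cos(\phi - \theta)
```

Minimizing ϵt with respect to θ for a given φ and B yields the equilibrium direction of magnetization.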
Problem 2 [From Jeff Gee]
In this problem, we will begin to use some real data. The data files used with this book are part of the PmagPy distribution, which you should have already downloaded and installed. [See Preface for instructions.]
The file hysteresis.txt in the Chapter_5 directory contains data for a single hysteresis loop. Note that the units are as measured: H (Oe), moment (emu), and it is fine to leave them in these units.
a) Read the data into a Pandas DataFrame. Determine the high field slope at |H| > 4000 Oe. Typically one calculates separate slopes for the +H data and -H data and averages these. A general least squares polynomial fit (numpy.polyfit) should do the trick.
b) Use the slope you determined to plot both the original hysteresis loop and the slope-corrected loop (i.e. removing the high field paramagnetic slope).
c) What is the ratio Mr∕Ms (saturation remanence/saturation magnetization) for this sample? The coercivity of remanence (Hcr) for this sample was estimated at 264 Oe. Based on the Mr∕Ms and Hcr∕Hc ratios, is this sample more likely to contain single domain or multidomain grains?
d) This small sample has a mass of 10.6 mg. Assuming the magnetic material is magnetite, estimate the mass fraction of magnetite (92 Am2/kg; note 1 emu/gm is equivalent to 1 Am2/kg).
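The workflow of parts (a)-(c) can be sketched with synthetic data. The real file's columns are assumed to be H (Oe) and moment (emu), and the tanh-plus-slope model below is purely illustrative (it is not the data in hysteresis.txt):

```python
import numpy as np

def high_field_slope(H, M, cutoff=4000.0):
    """Average the +H and -H high-field slopes (|H| > cutoff, in Oe),
    each estimated with a degree-1 numpy.polyfit."""
    s_pos = np.polyfit(H[H > cutoff], M[H > cutoff], 1)[0]
    s_neg = np.polyfit(H[H < -cutoff], M[H < -cutoff], 1)[0]
    return 0.5 * (s_pos + s_neg)

def slope_correct(H, M):
    """Remove the high-field (paramagnetic) slope from a loop."""
    return M - high_field_slope(H, M) * H

# synthetic stand-in for hysteresis.txt: ferromagnetic tanh term plus
# a paramagnetic slope of 2e-8 emu/Oe (both values are hypothetical)
H = np.linspace(-5000.0, 5000.0, 1001)
M = 1e-3 * np.tanh(H / 300.0) + 2e-8 * H
print(high_field_slope(H, M))    # recovers the 2e-8 emu/Oe slope
Mc = slope_correct(H, M)         # corrected loop saturates at ~1e-3 emu
```

With the corrected loop in hand, Ms is the high-field plateau of Mc and Mr is Mc interpolated at H = 0, from which the Mr∕Ms ratio of part (c) follows.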
An essential part of every paleomagnetic study is a discussion of what is carrying the magnetic remanence and how the rocks got magnetized. For this, we need some knowledge of what the important natural magnetic phases are, how to identify them, how they formed, and what their magnetic behavior is. In this chapter, we will cover a brief description of geologically important magnetic phases. Useful magnetic characteristics of important minerals can be found in Table 6.1 at the end of this chapter.
Iron is by far the most abundant transition element in the solar system, so most paleomagnetic studies depend on magnetic iron-bearing minerals: the iron-nickels (which are particularly important for extra-terrestrial magnetic studies), the iron-oxides such as magnetite, maghemite and hematite, the iron-oxyhydroxides such as goethite and ferrihydrite, and the iron-sulfides such as greigite and pyrrhotite. We are concerned here with the latter three groups, as iron-nickel is very rare in terrestrial paleomagnetic studies.
The minerals we will be discussing are mostly solid solutions which the American Heritage dictionary defines as:
A homogeneous crystalline structure in which one or more types of atoms or molecules may be partly substituted for the original atoms and molecules without changing the structure.
In iron oxides, titanium commonly substitutes for iron in the crystal structure. Because the titanium ion Ti4+ has no unpaired spins (see Chapter 3) and is a different size, the magnetic properties of titano-magnetite are different from magnetite with no titanium.
Two solid solution series are particularly important in paleomagnetism: the ulvöspinel-magnetite and ilmenite-hematite series. Both titanomagnetites and hemoilmenites crystallize at about 1300∘C. Above about 600∘C, there is complete solid solution between magnetite and ulvöspinel and above about 800∘C between hematite and ilmenite. This means that all compositions are “allowed” in the crystal structure at the crystallization temperature. As the temperature decreases, the thermodynamic stability of the crystals changes. If a mineral has a given composition, say 60% titanium substitution (green dot in Figure 6.1a), when the temperature cools to intersect the red line, that composition is no longer thermodynamically stable and the two phases to either side are the equilibrium compositions. By 400∘C the two equilibrium phases are ∼0.25 and ∼0.9 Ti substitution. To achieve the separation, the cations diffuse through the crystal leaving titanium richer and titanium poorer bands called lamellae (see Figure 6.2). Exsolution is inhibited if the crystals cool rapidly so there are many metastable crystals with non-equilibrium values of titanium substitution in nature.
Exsolution is important in paleomagnetism for two reasons. First, the different compositions have very different magnetic properties. Second, the lamellae effectively reduce the magnetic crystal size which we already know has a profound influence on the magnetic stability of the mineral. An example of this is shown in Figure 6.2b in which the larger crystal is several microns in width, too large to have single domain-like magnetization, yet the smaller magnetite lamellae are indeed small enough and carry a strong stable magnetization (Feinberg et al. 2005).
Compositions of minerals are frequently plotted on ternary diagrams like the one shown in Figure 6.3. [For help in reading ternary diagrams, please see the Appendix B.1.4] The apices of the ternary diagram are Fe2+ on the lower left, Fe3+ on the lower right and Ti4+ on the top. The oxides with these species are FeO (wüstite), Fe2O3 (hematite or maghemite depending on structure) and TiO2 (rutile). Every point on the triangle represents a cation mixture or solution that adds up to one cation (hence the fractional formulae).
Each of the solid arrows in Figure 6.3 (labelled titanomagnetite and hemoilmenite) represent increasing substitution of titanium into the crystal lattices of magnetite and hematite respectively. The amount of Ti substitution in titanomagnetites is denoted by “x”, while substitution in the hemoilmenites is denoted by “y”. Values for x and y range from 0 (magnetite or hematite) to 1 (ulvöspinel or ilmenite).
In earlier chapters on rock magnetism, we learned a few things about magnetite. As mentioned in Chapter 4, magnetite (Fe3O4) has an inverse spinel structure (AB2O4). The oxygen atoms form a face-centered cubic lattice into which cations fit in either octahedral or tetrahedral symmetry. For each unit cell there are four tetrahedral sites (A) and eight octahedral sites (B). To maintain charge balance with the four oxygen ions (O2−), there are two Fe3+ ions and one Fe2+ ion. Fe3+ has five unpaired spins, while Fe2+ has four. As discussed in Chapter 3, each unpaired spin contributes a moment of one Bohr magneton (mb). The divalent iron ions all reside in the octahedral lattice sites, whereas the trivalent iron ions are split evenly between octahedral and tetrahedral sites: Fe3+|Fe3+Fe2+|O4. The A and B lattice sites are coupled with antiparallel spins and magnetite is ferrimagnetic. Therefore, the net moment of magnetite is (9-5=4) mb per molecule (at 0 K).
Titanomagnetites can occur as primary minerals in igneous rocks. Magnetite, as well as various members of the hemoilmenite series, can also form as a result of high temperature oxidation. In sediments, magnetite often occurs as a detrital component. It can also be produced by bacteria or authigenically during diagenesis.
Substitution of Ti4+, which has no unpaired spins (see Chapter 3), has a profound effect on the magnetic properties of the resulting titanomagnetite. Ti4+ substitutes for a trivalent iron ion. In order to maintain charge balance, another trivalent iron ion turns into a divalent iron ion. The end members of the solid solution series are:
x = 0: Fe3O4 (magnetite) | x = 1: Fe2TiO4 (ulvöspinel)
Ulvöspinel is antiferromagnetic because the A and B lattice sites have the same net moment. When x is between 0 and 1, the mineral is called a titanomagnetite. If x is 0.6, for example, the mineral is called TM60 (green dot in Figure 6.3).
The profound effect of titanium substitution on the intrinsic properties of titanomagnetite is illustrated in Figure 6.4. Because Ti4+ has no unpaired spins, the saturation magnetization decreases with increasing x (Figure 6.4a). The cell dimensions increase with increasing x (Figure 6.4b). As a result of the increased cell dimension, there is a decrease in Curie Temperature (Figure 6.4c). There is also a slight increase in coercivity (not shown).
The large Ms of magnetite (see Table 6.1) means that for deviations from equant grains as small as 10%, the magnetic anisotropy energy becomes dominated by shape. Nonetheless, aspects of the magnetocrystalline anisotropy provide useful diagnostic tests. The magnetocrystalline anisotropy constants are a strong function of temperature. On warming to ∼-100∘C from near absolute zero, changes in these constants can lead to an abrupt loss of magnetization, which is known loosely as the Verwey transition (see Chapter 4). Identification of the Verwey transition suggests a remanence that is dominated by magnetocrystalline anisotropy. As we shall see, the temperature at which it occurs is sensitive to oxidation and the transition can be completely suppressed by maghemitization (see Dunlop and Özdemir, 1997).
It should be noted that natural titanomagnetites often contain impurities (usually Al, Mg, Cr). These impurities also affect the magnetic properties. Substitution of 0.1 Al3+ into the unit cell of titanomagnetite results in a 25% reduction in Ms and a reduction of the Curie temperature by some 50∘C. Substitution of Mg2+ into TM60 also results in a lower saturation magnetization with a reduction of some 15%.
Hematite has a corundum structure (see Figure 6.5). It is rhombohedral with a pseudocleavage (perpendicular to the c axis) and tends to break into flakes. It is antiferromagnetic, with a weak parasitic ferromagnetism resulting from either spin-canting or defect ferromagnetism (see Chapter 3). Because the magnetization is a spin-canted antiferromagnetism, the temperature at which this magnetization disappears is called the Néel temperature instead of the Curie temperature, which applies sensu stricto only to ferromagnetic minerals. The Néel temperature for hematite is approximately 685∘C.
Above about -10∘C (the Morin transition), the magnetization is constrained by aspects of the crystal structure to lie perpendicular to the c axis or within the basal plane. Below the Morin transition, spin-canting all but disappears and the magnetization is parallel to the c axis. This effect could be used to demagnetize the grains dominated by spin-canting: it does not affect those dominated by defect moments. Most hematites formed at low-temperatures have magnetizations dominated by defect moments, so the remanence of many rocks will not display a Morin transition.
Hematite occurs widely in oxidized sediments and dominates the magnetic properties of red beds. It occurs as a high temperature oxidation product in certain igneous rocks. Depending on grain size, among other things, it is either black (specularite) or red (pigmentary). Diagnostic properties of hematite are listed in Table 6.1.
The substitution of Ti into the lattice structure of αFe2O3 has an even more profound influence on magnetic properties than for magnetite. For y = 0 the magnetization is spin-canted antiferromagnetic, but when y = 0.45, the magnetization becomes ferrimagnetic (see Figure 6.6a). For small amounts of substitution, the Ti and Fe cations are distributed equally among the cation layers. For y > 0.45, however, the Ti cations preferentially occupy alternate cation layers. Remembering that the Ti4+ ions have no net moment, we can imagine that antiparallel coupling between the two sub-lattices results in ferrimagnetic behavior, as opposed to the equal and opposite style of anti-ferromagnetism.
Titanohematite particles with intermediate values of y have interesting properties from a paleomagnetic point of view. There is a solid solution at high temperatures, but as the temperatures drop the crystals exsolve into titanium rich and poor lamellae (see Figure 6.2d). Figure 6.6 shows the variation in saturation magnetization and Néel temperature with Ti substitution. For certain initial liquid compositions, the exsolution lamellae could have Ti-rich bands alternating with Ti-poor bands. If the Ti-rich bands have higher magnetizations, yet lower Curie temperatures than the Ti-poor bands, the Ti-poor bands will become magnetized first. When the Curie temperature of the Ti-rich bands is reached, they will become magnetized in the presence of the demagnetizing field of the Ti-poor bands, hence they will acquire a remanence that is antiparallel to the applied field. Because these bands have higher magnetizations, the net NRM will also be anti-parallel to the applied field and the rock will be self-reversed. This is fortunately very rare in nature.
Many minerals form under one set of equilibrium conditions (say within a cooling lava flow) and are later subjected to a different set of conditions (sea-floor alteration or surface weathering). They will tend to alter in order to come into equilibrium with the new set of conditions. The new conditions are often more oxidizing than the original conditions and compositions tend to move along the dashed lines in Figure 6.3. The degree of oxidation is represented by the parameter z.
While the solid solution between magnetite and ulvöspinel exists in principle, intergrowths of these two minerals are actually quite rare in nature because the titanomagnetites interact with oxygen in the melt to form intergrowths of low Ti magnetite with ilmenite. This form of oxidation is known as deuteric oxidation.
Low temperature oxidation will tend to transform a single phase spinel (titanomagnetite) into a new single phase spinel (titanomaghemite) by diffusion of Fe2+ from the lattice structure of the (titano)magnetite to the surface where it is converted to Fe3+; titanomaghemite is a “cation-deficient” inverse spinel. The inset to Figure 6.7c shows a magnetite crystal in the process of becoming maghemite. The conversion of the Fe2+ ion means a loss in volume which results in characteristic cracking of the surface. There is also a loss in magnetization, a shrinkage of cell size and, along with the tightening unit cell, an increase in Curie Temperature. These trends are shown for TM60 in Figure 6.7. Maghemitization results in a much reduced Verwey transition (see Figure 6.8).
The (titano)maghemite structure is metastable and can invert to form the isochemical, but more stable structure of (titano)hematite, or it can be reduced to form magnetite. The two forms of Fe2O3 are distinguished by the symbols γ for maghemite and α for hematite. Inversion of natural maghemite is usually complete by about 350∘C, but it can survive until much higher temperatures (for more details, see Dunlop and Özdemir, 1997). Also, it is common that the outer rim of the magnetite will be oxidized to maghemite, while the inner core remains magnetite.
Of the many iron oxyhydroxides that occur in any abundance in nature, goethite (αFeOOH; Figure 6.9a,b) is the most common magnetic phase. It is antiferromagnetic with what is most likely a defect magnetization. It occurs widely as a weathering product of iron-bearing minerals and as a direct precipitate from iron-bearing solutions. It is metastable under many conditions and dehydrates to hematite with time or elevated temperature. Dehydration is usually complete by about 325∘C. It is characterized by a very high coercivity but a low Néel temperature of about 100–150∘C. Diagnostic properties of goethite are listed in Table 6.1.
There are two iron-sulfides that are important to paleomagnetism: greigite (Fe3S4; Figure 6.9c,d) and pyrrhotite (Fe7S8-Fe11S12; Figure 6.9e,f). These are ferrimagnetic and occur in reducing environments. They both tend to oxidize to various iron oxides leaving paramagnetic pyrite as the sulfide component.
The Curie temperature of monoclinic pyrrhotite (Fe7S8) is about 325∘C (see Figure 6.10b; Table 6.1). Monoclinic pyrrhotite undergoes a transition at ∼ 35 K, so low temperature measurements can be diagnostic for this phase (see Figure 6.10a). Hexagonal pyrrhotite undergoes a structural transition from an imperfect antiferromagnet to a ferrimagnet with much higher saturation magnetization at about 200∘C. During a thermomagnetic experiment, the expansion of the crystal results in a large peak in magnetization just below the Curie Temperature (see Figure 6.10c). Mixtures of monoclinic and hexagonal pyrrhotite result in the behavior sketched in Figure 6.10d. The maximum unblocking temperature of greigite is approximately 330∘C. Other diagnostic properties of greigite and pyrrhotite are listed in Table 6.1.
The composition and relative proportions of FeTi oxides crystallizing from a silicate melt depend on a number of factors, including the bulk chemistry of the melt, the oxygen fugacity and the cooling rate. The final assemblage may be altered after cooling. FeTi oxides are generally more abundant in mafic volcanic rocks (e.g., basalts) than in silicic lavas (e.g., rhyolites). FeTi oxides can be among the first liquidus phases (∼ 1000∘C) in silicic melts, but in mafic lavas they generally are among the last phases to form (∼1050∘C), often with plagioclase and pyroxene.
Although there is considerable variability, the Ti (ulvöspinel) content of the titanomagnetite crystallizing from a melt generally is lower in more silicic melts (see solid black lines in Figure 6.11). Titanomagnetites in tholeiitic lavas generally have 0.5 < x < 0.8 with an initial composition near TM60 (x=0.6) characteristic for much of the oceanic crust. The range of rhombohedral phases (dashed red lines) crystallizing from silicate melts is more limited, 0.05 < y < 0.3 for most lavas.
The final magnetic mineral assemblage in a rock is often strongly influenced by the cooling rate and oxygen fugacity during initial crystallization. As a first approximation, we distinguish slowly cooled rocks (which may undergo solid state exsolution and/or deuteric oxidation) from those in which the oxide minerals were rapidly quenched. As mentioned before, FeTi oxides in slowly cooled igneous rocks can exhibit exsolution lamellae with bands of low- and high-titanium magnetites if the oxygen fugacity remains low (non-oxidizing). This reaction is very slow, however, so its effects are rarely seen in nature.
The typical case in slowly cooled rocks is that the system becomes more oxidizing with increasing differentiation during cooling and crystallization. For example, both the dissociation of magmatic water and the crystallization of silicate phases rich in Fe will act to increase the oxidation state. This will drive compositions to higher z values (see Figure 6.3). The final assemblage typically consists of ilmenite lamellae and a nearly pure magnetite host because adding O2 drives the reaction 6Fe2TiO4 + O2 ⇌ 6FeTiO3 + 2Fe3O4 to the right. This process is known as oxyexsolution. Under even more oxidizing conditions, these phases may ultimately be replaced by their more oxidized counterparts (e.g., hematite, pseudobrookite).
Weathering at ambient surface conditions or mild hydrothermal alteration may lead to the development of cation deficient (titano)maghemites. This can either occur by addition of oxygen to the spinel phase with a corresponding oxidation of the Fe2+ to Fe3+ to maintain charge balance, or by the removal of some of the octahedral iron from the crystal structure.
Igneous (and metamorphic) rocks are the ultimate source for the components of sedimentary rocks, but biological and low-temperature diagenetic agents work to modify these components and have a significant effect on magnetic mineralogy in sediments. As a result there is a virtual rainbow of magnetic mineralogies found in sediments. (Titano)magnetite coming into the sedimentary environment from an igneous source may experience a change in pH and redox conditions that make it no longer the stable phase, hence it may alter. Also, although the geochemistry of seawater is generally oxidizing with respect to the stability field of magnetite, pronounced changes in the redox state of sediments often occur with increasing depth as a function of the breakdown of organic carbon. Such changes may result in locally strongly reducing environments where magnetite may be dissolved and authigenic sulfides produced. Indeed, changes down sediment cores in the ferrimagnetic mineral content and porewater geochemistry suggest that this process is active in some (most?) marine sedimentary sequences. For example, dissolution of magnetite and/or production of non-magnetic sulfides may be responsible for the oft-seen decrease in various bulk magnetic parameters (e.g., magnetic susceptibility, IRM, ARM, etc.) with depth.
Some of the more spectacular magnetic minerals found in sediments are biogenic magnetites produced by magnetotactic bacteria (see recent review by Kopp and Kirschvink, 2008 and Figure 6.12). The sizes and shapes of bacterial magnetite, when plotted on the Evans diagram from Chapter 4, suggest that magnetotactic bacteria form magnetite in the single domain grain size range – otherwise extremely rare in nature. It appears that bacterial magnetites are common in sediments, but their role in contributing to the natural remanence is still poorly understood.
Table 6.1: Physical and magnetic properties of common magnetic minerals (property | source).

Magnetite (Fe3O4)
| Density = 5197 kg m−3 | Dunlop and Özdemir |
| Curie temperature = 580∘C | Dunlop and Özdemir |
| Saturation magnetization = 92 Am2kg−1 | O’Reilly |
| Anisotropy constant = −2.6 Jkg−1 | Dunlop and Özdemir |
| Volume susceptibility = ∼ 1 SI | O’Reilly |
| Typical coercivities are 10’s of mT | O’Reilly |
| Verwey transition: 110–120 K | Özdemir and Dunlop |
| Cell edge = 0.8396 nm | Dunlop and Özdemir |

Maghemite (γFe2O3)
| Density = 5074 kg m−3 | Dunlop and Özdemir |
| Curie temperature = 590–675∘C | Dunlop and Özdemir |
| Saturation magnetization = 74 Am2kg−1 | Dunlop and Özdemir |
| Anisotropy constant = 0.92 Jkg−1 | Dunlop and Özdemir |
| Verwey transition: suppressed | Dunlop and Özdemir |
| Breaks down to αFe2O3: between 250 and 750∘C | Dunlop and Özdemir |

TM60 (titanomagnetite, x = 0.6)
| Density = 4939 kg m−3 | Dunlop and Özdemir |
| Curie temperature = 150∘C | Dunlop and Özdemir |
| Saturation magnetization = 24 Am2kg−1 | Dunlop and Özdemir |
| Anisotropy constant = 0.41 Jkg−1 | Dunlop and Özdemir |
| Coercivity ∼ 8 mT | Dunlop and Özdemir |
| Verwey transition: suppressed | Dunlop and Özdemir |
| Cell edge = 0.8482 nm | Dunlop and Özdemir |

Hematite (αFe2O3)
| Density = 5271 kg m−3 | Dunlop and Özdemir |
| Néel temperature = 675∘C | O’Reilly |
| Saturation magnetization = 0.4 Am2kg−1 | O’Reilly |
| Anisotropy constant = 228 Jkg−1 | Dunlop and Özdemir |
| Volume susceptibility = ∼ 1.3 × 10−3 SI | O’Reilly |
| Coercivities vary widely and can be 10’s of teslas | Banerjee |
| Morin transition: ∼ 250–260 K (for grains > 0.2 μm) | O’Reilly |

Goethite (αFeOOH)
| Density = 4264 kg m−3 | Dunlop and Özdemir |
| Néel temperature = 70–125∘C | O’Reilly |
| Saturation magnetization = 10−3–1 Am2kg−1 | O’Reilly |
| Anisotropy constant = 0.25–2 Jkg−1 | Dekkers |
| Volume susceptibility = ∼ 1 × 10−3 SI | Dekkers [1989a] |
| Coercivities can be 10’s of teslas | |
| Breaks down to hematite: 250–400∘C | |

Pyrrhotite (Fe7S8)
| Density = 4662 kg m−3 | Dunlop and Özdemir |
| Curie temperature (monoclinic) = ∼ 325∘C | Dekkers |
| Curie temperature (hexagonal) = ∼ 270∘C | Dekkers |
| Saturation magnetization = 0.4–∼ 20 Am2kg−1 | Worm et al. |
| Volume susceptibility = ∼ 1 × 10−3–1 SI | Collinson; O’Reilly |
| Anisotropy constant = 20 Jkg−1 | O’Reilly |
| Coercivities vary widely and can be 100’s of mT | O’Reilly |
| Monoclinic pyrrhotite has a transition at ∼ 34 K | Dekkers et al.; Rochette et al. |
| Hexagonal pyrrhotite: transition near 200∘C | |
| Breaks down to magnetite: ∼ 500∘C | Dunlop and Özdemir |

Greigite (Fe3S4)
| Density = 4079 kg m−3 | Dunlop and Özdemir |
| Maximum unblocking temperature = ∼ 330∘C | Roberts |
| Saturation magnetization = ∼ 25 Am2kg−1 | Spender et al. |
| Anisotropy constant = −0.25 Jkg−1 | Dunlop and Özdemir |
| Coercivity 60 to > 100 mT | Roberts |
| Has high Mr∕χ ratios: ∼ 70 × 103 Am−1 | Snowball and Thompson |
| Breaks down to magnetite: ∼ 270–350∘C | Roberts |
SUPPLEMENTAL READINGS: Dunlop and Özdemir (1997), Chapter 3; Kopp and Kirschvink (2008).
You measured Curie Temperature curves for two samples A and B as shown in Figure 6.13. Based on your knowledge of Curie Temperatures, what is the likely magnetic mineralogy for each sample?
The data in demag.dat in the Chapter_6 data directory (see Preface for instructions) are thermal demagnetization data for a specimen that had a 2 T field exposed along x, a 0.4 T field exposed along y and a 0.12 T field exposed along z. The sample was then heated to a particular temperature step (∘C) and cooled in zero magnetic field, allowing all grains that become superparamagnetic at temperatures lower than the treatment temperature to become randomized. After each treatment step, the magnetic vector was measured. The column headings are: Treatment temperature (C), Intensity, Declination, Inclination.
a) Write a python program to read the data in and convert the declination, inclination and intensity to cartesian components.
b) Modify your program to normalize the intensities to that measured at 20∘C.
c) Extend the program to plot the x and y components as a function of temperature.
d) Based on your understanding of coercivity and Curie temperatures, what is carrying the x and y components?
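Parts (a)–(c) can be sketched along the following lines; this is only one possible approach, the file layout is as described in the problem, and the function names here are arbitrary rather than part of any required API.

```python
# A minimal sketch for parts (a)-(c), assuming demag.dat contains four
# whitespace-separated columns: treatment temperature (C), intensity,
# declination, inclination (as described in the problem statement).
import numpy as np

def dir2cart(dec, inc, intensity=1.0):
    """(a) Convert declination, inclination (degrees) and intensity
    to cartesian components x (north), y (east), z (down)."""
    d, i = np.radians(dec), np.radians(inc)
    return np.array([intensity * np.cos(i) * np.cos(d),
                     intensity * np.cos(i) * np.sin(d),
                     intensity * np.sin(i)])

def plot_demag(path="demag.dat"):
    import matplotlib.pyplot as plt  # only needed for part (c)
    T, intensity, dec, inc = np.loadtxt(path, unpack=True)
    x, y, z = dir2cart(dec, inc, intensity)
    norm = intensity[np.argmin(np.abs(T - 20.0))]  # (b) normalize to 20 C step
    plt.plot(T, x / norm, label="x")               # (c) x and y versus T
    plt.plot(T, y / norm, label="y")
    plt.xlabel("Temperature (C)")
    plt.ylabel("Normalized component")
    plt.legend()
    plt.show()
```

Because `dir2cart` is vectorized, the same function handles a single direction or whole columns read from the file.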
Ferromagnetic minerals in two rock samples are known to be FeTi oxides and are found to have the properties described below. Using this information and looking up the properties of FeTi oxides described in the text, identify the ferromagnetic minerals. For titanomagnetite or titanohematite, approximate the compositional parameter x.
a) Strong-field thermomagnetic analysis indicates a dominant Curie temperature Tc = 420∘C. Subjecting the specimen to increasingly larger fields to measure successive isothermal remanences (see Chapter 5) reveals that the coercivity is less than 300 mT. What is this ferromagnetic mineral?
b) Strong-field thermomagnetic analysis (used for measuring the Curie temperature) shows the behavior in Figure 6.14a with Curie temperature Tc = 200∘C. In addition, electron microprobe data indicate the abundances of FeO, Fe2O3, and TiO2 shown in Figure 6.14b. Unfortunately, electron microprobe data are not very effective in determining the Fe2O3:FeO ratio (placement from left to right in the TiO2-FeO-Fe2O3 ternary diagram). Accordingly, there is much uncertainty in the Fe2O3:FeO ratio indicated by the microprobe data. But microprobe data are effective in determining the TiO2:(Fe2O3 + FeO) ratio (placement from bottom to top in the TiO2-FeO-Fe2O3 ternary diagram). With this information, identify the ferromagnetic mineral.
The key to the acquisition of magnetic remanence is magnetic anisotropy energy, the dependence of magnetic energy on the direction of magnetization within the crystal (see Chapter 4). It is magnetic anisotropy energy that controls the probability of magnetic grains changing their moments from one easy direction to another. Without it, the magnetic moments of individual grains would swing freely and could not retain a “memory” of the ancient field direction.
Anisotropy energy controls relaxation time, a concept briefly introduced in Chapter 4 where we defined it as a time constant for decay of the magnetization of an assemblage of magnetic grains when placed in a null field. Equation 4.10 predicted exponential decay with relaxation time τ being the time it takes for the initial magnetization to decay to 1∕e of its initial value. Relaxation time reflects the probability of magnetic moments jumping over the anisotropy energy barrier between easy axes. Therefore, to preserve a record of an ancient geomagnetic field, there must be a way that the relaxation time changes from short (such that the magnetization is in equilibrium with the ambient geomagnetic field) to long (such that the magnetization is “frozen”, or blocked, for geologically significant periods of time).
Before we begin a more detailed look at the processes governing remanence acquisition, it is helpful to review briefly what is meant by “equilibrium” in physics and chemistry. Eager students are encouraged to read the background material recommended in the “BACKGROUND” list at the beginning of the chapter. In the following, we will go through the bare bones of statistical mechanics necessary to understand natural remanence.
We live in a world that is in constant motion down to the atomic level. The state of things is constantly changing but, looking at the big picture, things often seem to stay the same. Imagine for a moment a grassy field full of sheep and a fence running down the middle. The sheep can jump over the fence at will to get flowers on the other side, and occasionally they do so. Over time, because the two sides of the fence are pretty much the same, the same number of sheep jump over in both directions, so if you were to count sheep on either side, the numbers would stay about the same.
Now think about what would happen if it were raining on one side of the fence. The sheep would jump more quickly back over the fence from the rainy side to the sunny side than the other way around. You might find that over time, there were more sheep on the sunny side than on the rainy side (see Figure 7.1). If you are still awake after all this sheep counting, you have begun to understand the concept of dynamic equilibrium.
Returning to magnetism, a magnetic grain in the absence of a magnetic field will tend to be magnetized along one of several possible “easy” directions (see Chapter 4). For the purpose of this discussion, let us consider the case of uniaxial anisotropy, in which there are only two easy directions in each magnetic grain. In order to “jump over the fence” (the anisotropy energy) and get from one easy axis to the other, a magnetic particle must have thermal energy in excess of the anisotropy energy. According to the Boltzmann distribution law, the probability of a given particle having an energy E is proportional to e−E∕kT, where kT is the thermal energy (see Chapter 4). Therefore, it may be that at a certain time, a particular magnetic grain has enough thermal energy for the electronic spins to overcome the energy barrier and flip the sense of magnetization from one easy axis to the other.
If we had a collection of magnetized particles with some initial statistical alignment of moments giving a net remanence Mo, (more sheep on one side than the other), the random “fence jumping” by magnetic moments from one easy axis to another over time will eventually lead to the case where there is no preference and the net moment will have decayed to zero (although the individual grain moments remain at saturation). This approach to equilibrium magnetization (Me) is the theoretical underpinning of Equation 4.10 (plotted in Figure 7.2a) and is the essence of what is known as Néel Theory.
The theoretical basis for how ancient magnetic fields might be preserved was established over fifty years ago with the work of Nobel prize winner Louis Néel (1949, 1955). In the introduction to this chapter, we suggested that the mechanism which controls the approach to magnetic equilibrium is relaxation time. In the sheep analogy this would be the frequency of fence jumping. We defined relaxation time by Equation 4.11 in Chapter 4, sometimes called the Néel equation, which relates τ to volume v, the anisotropy constant (K) and absolute temperature (T).
Relaxation time is controlled by the competition between anisotropy energy Kv and thermal energy, so it will be constant at a given temperature for constant Kv. Curves of equal relaxation time (iso-τs) can therefore be drawn in v − K space. Figure 7.2b shows the family of curves with τs ranging from ∼100 seconds to the age of the Earth. The inset to Figure 7.2b illustrates the effect of temperature on the iso-τs, which move up and to the right with increasing temperature. This behavior gives us a clue as to how a rise in temperature could change a “blocked” remanence at 0∘C (273K) (one that is stable for long periods of time) to an unblocked one. In fact, Figure 7.2b (and the inset) suggests two other ways of manipulating the approach to equilibrium besides temperature: by changing the time span of observation and by changing grain volume. Each of these mechanisms (temperature, time, and volume) represents a different mode of remanence acquisition (thermal, viscous, and chemical remanence, respectively). Naturally acquired remanences are generally referred to as natural remanent magnetizations or NRMs. In this chapter we will introduce these and other forms of NRM and how they are acquired. We will also introduce useful unnatural remanences where appropriate.
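The extreme sensitivity of relaxation time to the ratio of blocking energy to thermal energy can be illustrated numerically. The sketch below uses the simple form τ = (1∕C) exp(Kv∕kT), with an assumed frequency factor C of order 10^10 s−1 and purely illustrative values of Kv∕kT; the point is only how steeply τ climbs.

```python
# Illustrative sketch: sensitivity of relaxation time tau to the ratio of
# blocking energy Kv to thermal energy kT, using tau = (1/C) exp(Kv/kT).
# The frequency factor C = 1e10 1/s is an assumed order of magnitude.
import numpy as np

C = 1e10  # frequency factor (1/s), assumed

def relaxation_time(ratio):
    """tau in seconds for a given Kv/kT ratio."""
    return np.exp(ratio) / C

for ratio in (20, 25, 30, 40, 60):
    tau = relaxation_time(ratio)
    print(f"Kv/kT = {ratio:2d}: tau = {tau:.3g} s (~{tau/3.15e7:.3g} yr)")
```

A modest change in Kv∕kT (for example from 25 to 60, achievable by cooling or by a small increase in grain volume) takes τ from seconds to hundreds of millions of years, which is why blocking is such a sharp phenomenon.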
In the “sheep in the rain” scenario, jumping over the fence into the sun would occur more frequently than jumping into the rain. It is also true that the energy barrier for magnetic particles to flip into the direction of the applied field H is lower than the barrier for flipping the other way, so relaxation time must also be a function of the applied field. This tendency is reflected in the more general form of the Néel equation:

τ = (1∕C) exp[(Kv∕kT)(1 − H∕Hc)²].     (7.1)
In this chapter we are concerned mainly with magnetic remanences acquired in the presence of the Earth’s magnetic field, which is tiny compared to the coercivity of the minerals in question and so we can neglect the effect of H on τ in the next few sections.
In Equation 7.1, the product Kv is an energy barrier to the rotation of m and we will call it the blocking energy. High blocking energies will promote more stable magnetizations. We learned in Chapter 4 that K for uniaxial shape anisotropy, Ku, is related to the coercivity Hc (the field required to flip the magnetization) by:

Hc = 2Ku∕(μoMs),
where Ms is the saturation magnetization. Substituting for Ku in Equation 4.11 from Chapter 4 we get:

τ = (1∕C) exp(μoMsHcv∕2kT),     (7.2)
where Ms is itself a strong function of temperature (see, e.g., Figure 3.8 in Chapter 3). We can see from Equation 7.2 that relaxation time is a function of magnetization as well as of volume, coercivity and temperature, properties that we will return to later in this chapter and throughout the book.
It is instructive to plot distributions of grains on the v − K diagrams as shown in Figure 7.3b. By definition, superparamagnetic grains are those grains whose remanence relaxes quickly. A convenient critical relaxation time for purposes of laboratory experiments may be taken as ∼100 s. Effective paleomagnetic recorders, however, must have relaxation times on the order of geological time, so it might be more appropriate to choose a τ equal to the age of the Earth (4.5 Gyr) as the relevant relaxation time for geological time scales.
We will now consider various mechanisms by which rocks can become magnetized. The first mechanism, viscous remanent magnetization, is simply a consequence of Equation 4.11 in Chapter 4 and Figure 7.2a. Later, we will explore the role of temperature and grain volume in blocking of thermal and chemical remanences. We will finish this chapter with other remanences which are either rare or non-existent in nature but are nonetheless useful in paleomagnetism.
Placing a magnetic particle at an angle θ to an external magnetic field results in a magnetostatic energy Em of −m ⋅ B = −mB cosθ, which is at a minimum when the moment is aligned with the field (see Chapters 1 and 5). Given an arbitrary θ, the difference in Em between the two easy directions is given by:

ΔEm = 2mB cosθ.     (7.3)
Because of the energy of the applied field Em, the energy necessary to flip the moment from a direction with a high angle to the external field to the other direction with a lower angle is less than the energy necessary to flip the other way around. Therefore, a given particle will tend to spend more time with its moment at a favorable angle to the applied field than in the other direction. Moreover, the Boltzmann distribution law tells us that the longer we wait, the more likely it is for a given magnetic grain to have the energy to overcome the barrier and flip its moment. That is why over time the net magnetization of assemblages of magnetic particles will tend to grow (or decay) to some equilibrium magnetization Me.
We can visualize what happens in Figure 7.3b. Let us place an assemblage of magnetic grains with some initial magnetization Mo in a magnetic field. At a given time span of observation (τ), particles with that relaxation time are likely to have sufficient energy to overcome the energy barriers. In a given assemblage of blocking energies (shown as the contours), some grains will be tending toward equilibrium with the external field (those to the left and below the blocking energy line) while some will tend to remain fixed (those to the right of the line). As the time span of observation increases, the critical blocking energy line migrates up and to the right (moving from 100 s, to 1 Myr, and so on) and whatever initial magnetic state the population was in will be progressively re-magnetized in the external field.
In Figure 7.4 we consider a few different scenarios for Mo and the applied field. First, the already familiar case when a specimen with a net magnetization (Mo) is placed in zero external field; the magnetization will decay to zero as in Figure 7.4a. Conversely, if a specimen with zero initial remanence is put into a magnetic field, the magnetization M(t) will grow to Me by the complement of the decay equation:

M(t) = Me(1 − e−t∕τ),
as shown in Figure 7.4b. The magnetization that is acquired in this isochemical, isothermal fashion is termed viscous remanent magnetization or VRM and the equilibrium magnetization Me is a function of the external field B.
In the general case, in which the initial magnetization of a specimen is non-zero and the equilibrium magnetization is of arbitrary orientation with respect to the initial remanence, the equation can be written as:

M(t) = Me + (Mo − Me)e−t∕τ,
which grows (or decays) exponentially from Mo → Me as t →∞. The rate is not only controlled by τ, but also by the degree to which the magnetization is out of equilibrium (see Figure 7.4c).
Some temporally short data sets appear to follow the relation M(t) ∝ log(t) and Néel (1949, 1955) suggested that VRM = S log t. Such a relationship suggests infinite remanence as t →∞, so cannot be true over a long period of time. S log t behavior can generally only be observed over a restricted time interval and closely spaced, long-term observations do not show linear log(t)-behavior, but are all curved in log(t) space. When under-sampled, these time series can appear segmented, leading to interpretations of several quasi-linear features (multiple values of S), when in fact the time series are not linear at all.
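That an exponential approach to Me can masquerade as S log t behavior over a restricted window is easy to check numerically. In the sketch below (Mo, Me and τ are arbitrary illustrative values), a straight line fitted to M versus log10 t over a short observation window extrapolates to an impossible M > Me, whereas the true curve saturates:

```python
# Sketch: a saturating exponential M(t) = Me + (Mo - Me) exp(-t/tau)
# looks roughly linear in log10(t) over a short window, but extrapolating
# that "S log t" fit predicts magnetizations exceeding the equilibrium
# value Me, which is impossible. Mo, Me, tau are arbitrary values.
import numpy as np

Mo, Me, tau = 0.0, 1.0, 100.0  # arbitrary units

def M(t):
    # exponential approach to equilibrium
    return Me + (Mo - Me) * np.exp(-t / tau)

# fit VRM = a + S*log10(t) over a short window, t ~ 10-50 s
t_short = np.logspace(1.0, 1.7, 50)
S, a = np.polyfit(np.log10(t_short), M(t_short), 1)

# extrapolate the log-linear fit to t = 1e6 s and compare with the truth
print("S log t prediction:", a + S * 6.0)  # exceeds Me
print("true M(1e6 s):     ", M(1e6))       # saturates at Me
```

This is the numerical counterpart of the statement in the text: the apparent S is an artifact of under-sampling a curved log(t) trend.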
VRM is a function of time and of the relationship between the remanence vector and the applied field. When the relaxation time is short (say a few hundred seconds), the magnetization is essentially in equilibrium with the applied magnetic field and hence is superparamagnetic. Because relaxation time is also a strong function of temperature, VRM will grow more rapidly at higher temperature. As noted in Chapter 4, there is a very sharply defined range of temperatures over which τ increases from geologically short to geologically long time scales. In the next section, we consider the magnetization acquired by manipulating relaxation time by changing temperature: thermal remanent magnetization (TRM).
The v −K diagram shown in Figure 7.5 illustrates how TRM can be blocked. In Figure 7.5a we have a population of magnetic grains with varying volumes and anisotropies. Raising temperature works in two ways on these grains. First, the relaxation time depends on thermal energy, so higher temperatures will result in shorter relaxation times. Second, anisotropy energy depends on the square of magnetization (Chapter 4). Elevated temperature reduces magnetization, so the anisotropy energy will be depressed relative to lower temperatures. In the diagram, this means that not only do the relaxation time curves move with changing temperature, but the anisotropy energies of the population of grains change as well. This means that a population of grains that are superparamagnetic at high temperature (Figure 7.5a) could be “blocked” as cooling causes the grains to “walk” through the superparamagnetic threshold into a region of magnetic stability (Figure 7.5b).
The key to Néel theory is that very small changes in conditions (temperature, volume, anisotropy energy) can result in enormous changes in relaxation time. In order to work out how relaxation time varies with temperature, we need to know how saturation magnetization varies with temperature. We found in Chapter 3 that calculating Ms(T) exactly is a rather messy process. If we take a reasonable value for γ in Equation 3.11 from the data in Figure 3.8 in Chapter 3 of γ ≃ 0.38 and Ms = 480 kAm−1 (from Chapter 6), we can calculate the variation of relaxation time as a function of temperature for ellipsoidal grains of various widths using Equation 7.2 (see Figure 7.6). At room temperature, a 25 nm ellipsoid of magnetite (length to width ratio of 1.3:1) would have a relaxation time of billions of years, while at 300∘C, the grain would be superparamagnetic.
The sharpness of the relationship between relaxation time and temperature allows us to define a temperature above which a grain is superparamagnetic and able to come into magnetic equilibrium with an applied field and below which it is effectively blocked. The temperature at which τ is equal to a few hundred seconds is defined as the blocking temperature or Tb. At or above the blocking temperature, but below the Curie Temperature, a grain will be superparamagnetic. Cooling below Tb increases the relaxation time sharply, so the magnetization is effectively blocked and the rock acquires a thermal remanent magnetization or TRM.
Now let us put some of these concepts into practice. Consider a lava flow which has just been extruded (Figure 7.7a). Upon meeting the chilly air (or water), molten lava solidifies quickly into rock. While the rock is above the Curie Temperature, there is no remanent magnetization; thermal energy dominates the system and the system behaves as a paramagnet. As the rock cools through the Curie Temperature of its magnetic phase, exchange energy becomes more important and the magnetic minerals become ferromagnetic. The magnetization, however, is free to track the prevailing magnetic field because anisotropy energy is still less important than the magnetostatic energy. The magnetic grains are superparamagnetic and the magnetization is in magnetic equilibrium with the ambient field.
The magnetic moments in the lava flow tend to flop from one easy direction to another, with a slight statistical bias toward the direction with the minimum angle to the applied field (Figure 7.7c). Thus, the equilibrium magnetization of superparamagnetic grains is not fully aligned, but only slightly aligned, and the degree of alignment is a linear function of the applied field for low fields like the Earth’s. The magnetization approaches saturation at higher fields (from ∼ 0.2 T to several tesla, depending on the details of the source of anisotropy energy).
Recalling the energy difference between the two easy axes of a magnetic particle in the presence of a magnetic field (Equation 7.3), we can estimate the fraction of saturation for an equilibrium magnetization at a given temperature. Applying the Boltzmann distribution law to the theory of thermal remanence, we take ΔE from Equation 7.3 to be 2mB cosθ, and the two states to be the two directions along the easy axis, one maximally parallel to and the other antiparallel to the applied field B. The total number of particles N equals the sum of those aligned maximally parallel n+ and those aligned maximally antiparallel n−. So from the Boltzmann distribution we have:

n+∕n− = e2mBcosθ∕kT.
The magnetization of such a population with the moments fully aligned is at saturation, Ms. The strength of magnetization at a given temperature, M(T), is proportional to the net moment, n+ − n−. So it follows that:

M(T)∕Ms = (n+ − n−)∕N = tanh(mB cosθ∕kT).
Now imagine that the process of cooling in the lava continues. The thermal energy will continue to decrease until the magnetic anisotropy energy becomes important enough to “freeze in” the magnetic moment wherever it happens to be. Thus, as the particles cool through their “blocking” temperatures (Tb), the moments become fixed with respect to further changes in field, and to get the final magnetization for randomly oriented grains, we integrate over θ:

M∕Ms = ∫ tanh(moB cosθ∕kTb) cosθ sinθ dθ (θ from 0 to π∕2),
where mo is the grain moment at the blocking temperature.
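The orientation average can be evaluated numerically as a sketch. The grain moment used below (a 30 nm magnetite cube with a room-temperature Ms of ∼480 kAm−1) is purely illustrative; near the blocking temperature Ms, and hence m, is substantially lower, so the numbers indicate orders of magnitude only.

```python
# Numerical sketch: equilibrium thermal remanence of randomly oriented
# uniaxial SD grains, as a fraction of saturation. Each orientation theta
# contributes tanh(m B cos(theta)/kT) resolved along the field, weighted
# by sin(theta). The 30 nm magnetite cube moment is illustrative only.
import numpy as np

k = 1.381e-23  # Boltzmann constant (J/K)

def trm_fraction(m, B, T, n=200000):
    """Integral of tanh(mB cos(theta)/kT) cos(theta) sin(theta) over
    theta = 0..pi/2, evaluated by the midpoint rule."""
    th = (np.arange(n) + 0.5) * (np.pi / 2) / n
    f = np.tanh(m * B * np.cos(th) / (k * T)) * np.cos(th) * np.sin(th)
    return f.sum() * (np.pi / 2) / n

m = 4.8e5 * (30e-9) ** 3              # illustrative grain moment (Am^2)
weak = trm_fraction(m, 40e-6, 800)    # Earth-like field near blocking
strong = trm_fraction(m, 5.0, 800)    # strong laboratory field
print(weak, strong)
```

For an Earth-strength field the fractional alignment is only a percent or so (and linear in B), while in a strong field the integral approaches 0.5, the saturation remanence of a randomly oriented uniaxial assemblage.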
We show the theoretical behavior of TRM as a function of applied field for different assemblages of particles in Figure 7.8a. This plot was constructed assuming ellipsoidal particles whose saturation magnetization varied according to Equation 3.11 from Chapter 3 with γ = 0.38. For small, equant particles, TRM is approximately linear with applied field for values of B as small as the Earth’s (∼ 20-65 μT). However, the more elongate and the larger the particle, the more non-linearly the theoretically predicted TRM behaves. This non-linear behavior has been experimentally verified by Selkin et al. (2007) for geologically important materials (see Figure 7.8b).
The exact distribution of blocking temperatures depends on the distribution of grain sizes and shapes in the rock and is routinely determined in paleomagnetic studies. By heating a rock in zero field to some temperature T, grains that are superparamagnetic at that temperature become randomized, a process used in so-called thermal demagnetization, which will be discussed further in Chapter 9. Thermal demagnetization allows us to determine the portion of TRM that is blocked within successive blocking temperature intervals. A typical example is shown in Figure 7.9. The total TRM can be broken into portions acquired in distinct temperature intervals. The portion of TRM blocked in any particular blocking temperature window is referred to as partial TRM, often abbreviated pTRM. Each pTRM is a vector quantity, and for single domain remanences, the total TRM is the vector sum of the pTRMs contributed by all blocking temperature windows:

MTRM = Σi pTRM(Tbi).
According to Néel theory for single domains, individual pTRMs depend only on the magnetic field during cooling through their respective blocking temperature intervals and are not affected by magnetic fields applied during cooling through lower temperature intervals. This is the law of additivity of pTRM. Another useful feature of pTRMs in single domain grains is that their blocking temperatures are the same as the temperature at which the remanence is unblocked, the so-called unblocking temperature (Tub). This is the law of reciprocity. While it may seem intuitively obvious that Tb would be the same as Tub, it is actually only true for single domain grains and fails spectacularly for multi-domain grains and even grains whose remanences are in the vortex state.
As an example of the laws of additivity and reciprocity of pTRM, again consider our lava flow. It originally cooled to produce a TRM that is the vector sum of all pTRMs with Tb distributed from Tc to room temperature. If the magnetic field was constant during the original cooling, all pTRMs would be in the same direction. Now consider that this rock is subsequently reheated for even a short time to a temperature, Tr, intermediate between room temperature and the Curie temperature and then cooled in a different magnetizing field. All pTRMs with Tub < Tr will record the new magnetic field direction. However, neglecting time-temperature effects to be considered later, the pTRMs with Tub > Tr will retain the TRM record of the original magnetizing field. This ability to strip away components of magnetization held by grains with low unblocking temperatures while leaving the higher Tub grains unaffected is a fundamental element of the thermal demagnetization technique to be discussed in later chapters.
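The bookkeeping in this example can be sketched in a few lines; the blocking-temperature windows, field directions and reheating temperature below are invented for illustration, and a window that straddles the reheating temperature is treated, as a simplification, as surviving intact.

```python
# Sketch of pTRM additivity/reciprocity for an ideal SD assemblage.
# Each blocking-temperature window carries a pTRM set by the field during
# cooling through that window; reheating to Tr in a new field resets only
# windows whose upper bound lies below Tr (Tub = Tb for SD grains).
# Window boundaries and field directions are invented for illustration.
import numpy as np

windows = [(20, 200), (200, 400), (400, 580)]   # Tb windows in deg C
original = np.array([0.0, 0.0, 1.0])            # original field direction
overprint = np.array([1.0, 0.0, 0.0])           # later, different field

def total_trm(ptrms):
    # law of additivity: total TRM is the vector sum of the pTRMs
    return np.sum(ptrms, axis=0)

# original cooling: every window blocks in the original field direction
ptrms = np.array([original for _ in windows], dtype=float)

# reheat to Tr = 300 C, cool in the overprint field: only windows fully
# below Tr are remagnetized (the 200-400 window straddles Tr and is kept)
Tr = 300.0
for i, (lo, hi) in enumerate(windows):
    if hi <= Tr:
        ptrms[i] = overprint

print(total_trm(ptrms))  # a vector mixture of both field directions
```

Thermal demagnetization above Tr would strip the overprinted window and leave the surviving original-direction pTRMs, which is exactly the logic exploited in Chapter 9.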
Perhaps the most severe simplification in the above model of TRM acquisition is that it considers only single-domain grains. Given the restricted range of grain size and shape distributions for stable SD grains of magnetite or titanomagnetite (see Chapter 4), at most a small percentage of grains in a typical igneous rock are truly SD. The question then arises as to whether larger grains can acquire TRM.
Figure 7.10 shows the particle size dependence of TRM acquired by magnetite in a magnetizing field of 100 μT. Note that it is a log-log plot and that the efficiency of TRM acquisition is very low in the grain-size range from 1 μm to about 10 μm. However, grains in the 1-2 μm range do acquire TRM that can be stable against time decay and against demagnetization by later magnetic fields. This observation is the source of the term pseudo-single domain (PSD; see also Chapter 5), which characterizes the behavior of grains that are too large to be truly single domain, yet exhibit stability unexpected for grains with domain walls (MD grains). The physics of PSD grains is much more complicated than for SD grains and is not fully understood (see Section 5.3 for a brief discussion).
For grains larger than a few microns, the acquisition of TRM is very inefficient. In addition, TRM in these larger grains can be quite unstable; they are prone to acquire viscous magnetization. SD and PSD grains are the effective carriers of TRM, while larger MD grains are likely to carry a component of magnetization acquired long after original cooling.
Rapidly cooled volcanic rocks generally have grain-size distributions with a major portion of the distribution within the SD and PSD ranges. Also, deuteric oxidation of volcanic rocks can produce intergrowth grains with effective magnetic grain sizes smaller than those of the magnetic grains that crystallized from the igneous melt. Thus, volcanic rocks are commonly observed to possess fairly strong and stable TRM. A typical intensity of TRM in a basalt flow is 1 Am−1. Because grain size depends in part on the cooling rate of the igneous body, rapidly cooled extrusive rocks are frequently preferable to slowly cooled intrusive rocks. However, exsolution processes can break what would have been unsuitable MD magnetic grains into ideal strips of SD-like particles (see Chapter 6), so there is no universal rule as to which rocks will behave in the ideal single domain manner.
Equation 7.2 shows that blocking energy depends on volume. This means that relaxation time could change from very short to very long by increasing the size of the grain (see Figure 7.11). Chemical changes that form ferromagnetic minerals below their blocking temperatures, which then grow in a magnetizing field, result in acquisition of a chemical remanent magnetization or CRM. Chemical reactions involving ferromagnetic minerals include: a) alteration of a pre-existing mineral (possibly also ferromagnetic) to a ferromagnetic mineral (alteration chemical remanence, or aCRM) or b) precipitation of a ferromagnetic mineral from solution. This section outlines a model of CRM acquisition that explains the basic attributes of this type of grain-growth CRM (gCRM).
Magnetic mineralogy can change after a rock is formed in response to changing chemical environments. Red beds (see Figure 7.12a), a dominant sedimentary facies in earlier times, are red because pigmentary hematite grew at some point after deposition. Hematite is a magnetic phase and the magnetic remanence it carries when grown at low temperatures is an example of gCRM.
Magnetite is an example of a magnetic phase which is generally out of chemical equilibrium in many environments on the Earth’s surface. It tends to oxidize to another magnetic phase (maghemite) during weathering. As it changes state, the iron oxide may change its magnetic moment, acquiring an aCRM.
The relationship of the newborn CRM to the ambient magnetic field can be complicated. It may be largely controlled by the prior magnetic phase whence it came, it may be strongly influenced by the external magnetic field, or it may be some combination of these factors. We will begin with the simplest form of CRM: the gCRM.
Inspection of Equation 7.2 for relaxation time reveals that it is a strong function of grain volume. A similar theoretical framework can be built for remanence acquired by grains growing in a magnetic field as for those cooling in a magnetic field. As a starting point for our treatment, consider a non-magnetic porous matrix, say a sandstone. As ground water percolates through the sandstone, it begins to precipitate tiny grains of a magnetic mineral (Figure 7.12c). Each crystal is completely isolated from its neighbors. For very small grains, the thermal energy dominates the system and they are superparamagnetic. When volume becomes sufficient for magnetic anisotropy energy to overcome the thermal energy, the grain moment is blocked and can remain out of equilibrium with the magnetic field for geologically significant time periods. Keeping temperature constant, there is a critical blocking volume vb below which a grain maintains equilibrium with the applied field and above which it does not. We can find this blocking volume by solving for v in the Néel equation:
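Writing the relaxation time of Equation 7.2 in the Néel form τ = (1∕C) exp(Kuv∕kT), where Ku is the anisotropy energy density and C the frequency factor, and solving for v at fixed temperature and fixed τ gives the blocking volume:

```latex
v_b = \frac{kT \ln(C\tau)}{K_u}
```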
The magnetization acquired during grain growth is controlled by the alignment of grain moments at the time that they grow through the blocking volume. Based on these principles, CRM should behave very similarly to TRM.
There have been a few experiments carried out with an eye to testing the grain growth CRM model and although the theory predicts the zeroth order results quite well (that a simple CRM parallels the field and is proportional to it in intensity), the details are not well explained, primarily because the magnetic field affects the growth of magnetic crystals and the results are not exactly analogous to TRM conditions (see e.g. Stokking and Tauxe, 1990a.) Moreover, gCRMs acquired in changing fields can be much more complicated than a simple single generation, single field gCRM (Stokking and Tauxe, 1990b).
Alteration CRM can also be much more complicated than simple gCRM in a single field. Suffice it to say that the reliability of CRM for recording the external field must be verified (as with any magnetic remanence) with geological field tests and other techniques as described in future chapters.
Sediments become magnetized in quite a different manner from igneous bodies. Detrital grains are already magnetized, unlike igneous rocks, which crystallize above their Curie temperatures. Magnetic particles that can rotate freely will turn into the direction of the applied field just as compass needles do. The net magnetization of such particles, if locked in place, can result in a depositional remanent magnetization (DRM). Sediments are also subject to post-depositional modification through the action of organisms, compaction, diagenesis and the acquisition of VRM, all of which will affect the magnetization. This magnetization is usually called post-depositional remanent magnetization or pDRM. In the following, we will consider the syn-depositional processes of physical alignment of magnetic particles in viscous fluids (giving rise to the primary DRM).
The theoretical and experimental foundation for DRM is less complete than for TRM. Placing a magnetic moment m in an applied field B results in a torque Γ on the particle Γ = m×B = mB sinθ, where θ is the angle between the moment and the magnetic field vector. In a fluid like water, the torque is opposed by the viscous drag and inertia so the equation of motion governing the approach to alignment is:
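Written out as a torque balance (a standard form consistent with the definitions that follow), the equation of motion is:

```latex
I\,\frac{d^{2}\theta}{dt^{2}} + \lambda\,\frac{d\theta}{dt} + mB\sin\theta = 0
```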
where λ is the viscosity coefficient opposing the motion of the particle through the fluid and I is the moment of inertia. Neglecting the inertial term (which is orders of magnitude less important than the other terms) we have:
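Dropping the inertial term reduces the balance to λ dθ∕dt = −mB sin θ, whose solution takes the standard form:

```latex
\tan\frac{\theta}{2} = \tan\frac{\theta_{0}}{2}\, e^{-mBt/\lambda}
```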
where θo is the initial angle between m and B (Nagata, 1961). Setting λ = 8πr3η, where r is the particle radius and η the viscosity of water (∼ 10−3 kg m−1s−1), the time constant ϒ of Equation 7.9 over which an initial θo reduces to 1∕e of its value would be:
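Substituting m = vM = (4∕3)πr3M, the grain radius cancels:

```latex
\Upsilon = \frac{\lambda}{mB} = \frac{8\pi r^{3}\eta}{\frac{4}{3}\pi r^{3} M B} = \frac{6\eta}{MB}
```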
where M is the volume normalized magnetization.
Plugging in reasonable values for η,M and B and assuming isolated magnetic particles yields a time constant that is extremely short (microseconds). The simple theory of unconstrained rotation of magnetic particles in water as developed by Nagata (1961) predicts that sediments with isolated magnetic particles should have magnetic moments that are fully aligned and insensitive to changes in magnetic field strength; DRM should be at saturation. Yet even from the earliest days of laboratory redeposition experiments (e.g., Johnson et al., 1948; see Figure 7.13a) we have known that depositional remanence (DRM) can have a strong field dependence and that DRMs are generally far less than saturation remanences (∼0.1%). Much of the research on DRM has focussed on explaining the strong field dependence observed for laboratory redepositional DRM.
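To see how short the predicted time constant is, ϒ = 6η∕(MB) can be evaluated directly. The sketch below uses round illustrative numbers and variable names of our own choosing:

```python
# Time constant of rotational alignment, Upsilon = 6*eta / (M*B),
# following Nagata (1961).  Inputs are illustrative round numbers.

ETA_WATER = 1e-3   # viscosity of water (kg m^-1 s^-1)

def alignment_time_constant(M, B, eta=ETA_WATER):
    """Time constant (s) for a particle with volume-normalized
    magnetization M (A/m) rotating freely in a field B (T)."""
    return 6.0 * eta / (M * B)

# magnetite (Ms ~ 480 kA/m) in a 50 uT field: a few hundred microseconds
print(alignment_time_constant(480e3, 50e-6))
# hematite (Ms ~ 2.2 kA/m) in the same field: still well under a second
print(alignment_time_constant(2.2e3, 50e-6))
```

The magnetite value comes out near 2.5 × 10−4 s, and even the hematite value is only a few hundredths of a second, consistent with the discussion of time constants in the text.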
The observation that DRM is usually orders of magnitude less than saturation and that it appears to be sensitive to changing geomagnetic field strengths implies that the time constant of alignment is much longer than predicted by Equation 7.10. Either there is a disruption of alignment by some mechanism, or we have underestimated ϒ somehow.
Collinson (1965) invoked Brownian motion to disrupt alignment. Reasonable parameter assumptions suggest that particles smaller than about 100 nm could be affected by Brownian motion suggesting a possible role in DRM of isolated magnetite grains free to rotate in water. The problem with this suggestion is that such small particles take an extremely long time to settle. Also, in almost all natural waters, magnetite particles will adhere to clay particles making isolated magnetic particles in nature unlikely (see, e.g., Katari et al., 2000).
To increase ϒ, one can either assume a larger viscosity than that of pure water or decrease the magnetization, for example by using values for M much lower than the saturation magnetizations of common magnetic minerals (e.g., Collinson, 1965) or by padding the magnetic particles with non-magnetic “fluff” through the process of flocculation (Shcherbakov and Shcherbakova, 1983). Using the viscosity in the sediment itself in Equation 7.10 fails to explain laboratory remanences that are demonstrably “fixed” after settling; the viscosity of the mud appears to be too high to allow post-depositional re-alignment, yet these sediments exhibit field dependence (e.g., Tauxe et al., 2006). Alternatively, one could increase ϒ by assuming a reduced value for M. However, even using the magnetization of hematite, which is two orders of magnitude lower than that of magnetite, results in values for ϒ that are still less than a second.
In saline environments, sedimentary particles tend to flocculate. For magnetic particles embedded in a non-magnetic matrix, the magnetic field must turn the entire particle and the net magnetization of the floc must be used in Equation 7.10.
The tendency to flocculate increases with increasing salinity. There are therefore two completely different systems when discussing DRM: ones in which magnetic particles remain essentially isolated or embedded in very small flocs (e.g., in freshwater lakes; see Figure 7.14a) and ones in which flocculation plays a role (e.g., marine environments; see Figure 7.14b). For the case of magnetite in freshwater, Brownian motion may reduce DRM efficiency and give rise to the dependence on B. In saline waters, however, the most important control on DRM is the size of the flocs in which the magnetic particles are embedded. In the following we briefly explore these two very different environments.
In freshwater we expect relatively unflocculated particles whose magnetic moments are presumably at saturation remanence. Even in fresh water, the magnetic particles are likely to be attached to clays through van der Waals attraction, but the clays themselves have no great mutual attraction. It is possible, therefore, that magnetic particles could be subject to Brownian motion. Here we outline the theory to investigate the behavior of DRM that would be expected from a Brownian motion mechanism (henceforth a Brownian remanent magnetization or BRM).
To estimate the size of particles affected by Brownian motion, Collinson (1965) used the equation:
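Equating the magnetic energy of a small angular deflection ϕ with the thermal energy kT (a reconstruction of the balance, good to a factor of order unity) gives:

```latex
\phi = \left(\frac{kT}{mB}\right)^{1/2}
```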
where ϕ is the Brownian deflection about the applied field direction (in radians), k is Boltzmann’s constant (1.38 x 10−23JK−1) and T is the temperature in kelvin. The effect of viscous drag on particles may also be important when the magnetic moments of the particles are low (see Coffey et al., 1996 for a complete derivation), for which we have:
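For rotational Brownian diffusion with viscous drag, the mean-square deflection grows linearly with the observation time; with rotational diffusion coefficient Dr = kT∕(8πηr3) and ⟨ϕ2⟩ = 2Drδ (a reconstruction under these standard assumptions), this gives:

```latex
\phi^{2} = \frac{kT}{4\pi\eta r^{3}}\,\delta
```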
where δ is the time span of observation (say, 1 second). According to this relationship, weakly magnetized particles smaller than about a micron will be strongly affected by Brownian motion. Particles that have a substantial magnetic moment, however, will be partially stabilized (according to Equation 7.11) and might remain unaffected by Brownian motion down to smaller particle sizes (e.g., 0.1 μm). In the case of isolated particles of magnetite, therefore, we should use Equation 7.11, and BRM should follow the Langevin equation for paramagnetic gases, i.e.:
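In the notation used here, the Langevin prediction is:

```latex
\frac{\mathrm{BRM}}{\mathrm{sIRM}} = \coth\!\left(\frac{mB}{kT}\right) - \frac{kT}{mB}
```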
Here the quantity sIRM is a saturation isothermal remanence (Mr in Chapter 5) and is the moment acquired when all the magnetic particles are aligned to the maximum extent possible. To get an idea of how BRMs would behave, we first find m from M(r) [here we use the results from micromagnetic modeling (see Chapter 4)]. Then, we evaluate Equation 7.12 as a function of B for a given particle size (see Figure 7.15a). We can also assume any distribution of particle sizes (e.g., that shown as the inset to Figure 7.15b) and predict BRM/sIRM for the distribution (blue line in Figure 7.15b). It is interesting to note that BRMs are almost never linear with the applied field unless the particle sizes are very small.
BRMs are fixed when the particles are no longer free to move. The fixing of this magnetization presumably occurs during consolidation, at a depth (known as the lock-in depth) where the porosity of the sediment reduces to the point that the particles are pinned (see Figure 7.14a). Below that, the magnetization may be further affected by compaction (e.g., Deamer and Kodama, 1990) and diagenesis (e.g., Roberts, 1995).
Equation 7.9 predicts that a magnetic moment m making an initial angle θo with the applied field B will make an angle θ with the field after time t. From this, we can make a simple numerical model to predict the DRM for an initially randomly oriented assemblage of magnetic moments, after time t [or the equivalent settling length l using some settling law (e.g., Gibbs 1985; see Katari and Bloxham 2001)]. In Figure 7.16a and b, we show the DRM curves predicted by Tauxe et al. (2006) for simple flocs with a single magnetite grain in each as a function of magnetic field and radius.
In general, the magnetic flocs are either nearly aligned with the magnetic field, or nearly random with only a narrow band of floc sizes in between the two states for a given value of B. Increasing B increases the size for which particles can rotate into the field, giving rise to the dependence of DRM intensity on applied field strength. Taking a given particle size and evaluating DRM as a function of the applied field (Figure 7.16b) predicts the opposite behavior for DRM than the Brownian motion approach (Figure 7.15) in that the larger the floc size, the weaker the DRM and also the more linear with respect to the applied field. Brownian motion, therefore, predicts low DRM efficiency for the smallest particles increasing to near saturation values for particles around 0.1 μm while composite floc theory predicts decreased DRM efficiency for larger floc sizes.
The flocculation model of DRM makes specific predictions which can in principle be tested if the model parameters can be estimated or controlled. Tauxe et al. (2006) tested the flocculation hypothesis by dispersing natural sediments in settling tubes to which varying amounts of NaCl had been introduced. Prior to dispersal, each specimen of mud was given a saturation remanence. They measured DRM as a function of salinity (and therefore floc size) and the applied field (see Figure 7.17). In general their results suggested the following: 1) the higher the salinity, the lower the net moment and the faster the particles settled, 2) the higher the applied field, the higher the net moment, although a saturation DRM appeared to be nearly achieved in the 1 ppt NaCl set of tubes by 30 μT (Figure 7.17), 3) the relationship of DRM to B was far from linear with applied field in all cases, and 4) the saturation DRM was less than the saturation IRM so the simplest idea of one floc/one magnetic particle failed to explain the data.
In nature, flocs are formed by coalescing of “fundamental flocs” into composite flocs. Each fundamental floc would have tiny magnetic particles adhering to it and would have the sIRM imparted prior to settling. As the composite flocs grow by chance encounters with other flocs, the net moment of the composite floc will be the vector sum of the moments of the fundamental flocs. Tauxe et al. (2006) used the composite floc hypothesis to model experimental DRMs (see examples in Figure 7.17); model predictions were in excellent agreement with the redeposition data.
It appears that by combining the effects of Brownian motion for non-flocculating environments and a composite floc model for flocculating environments, we are on the verge of a quantitative physical theory that can account for the acquisition of depositional remanence near the sediment/water interface. The DRM will be fixed when no further physical rotation of the magnetic particles in response to the geomagnetic field is possible. The depth at which moments are pinned is called the lock-in depth. In the “standard model” of depositional remanence (DRM) acquisition (see, e.g., Verosub, 1977), detrital remanence is acquired by locking in different grains over a range of depths. This phased lock-in leads both to significant smoothing and to an offset between the sediment/water interface and the fixing of the DRM. However, many practitioners of paleomagnetism still adhere to this concept of DRM, which stems from early laboratory redeposition experiments carried out under non-flocculating conditions. As summarized by Tauxe et al. (2006), the evidence for substantial smoothing and a deep (>10 cm) lock-in remains weak.
Physical rotation of particles in response to compaction can also change the magnetic remanence. As sediments lose water and consolidate, compaction can have a strong effect on DRM intensity (e.g., Anson and Kodama, 1987). Consolidation is a continuous process starting from the sediment water interface when sedimentary particles first gel (see, e.g., Figure 7.14b) and continuing until the sediment is completely compacted, perhaps as deep as hundreds of meters. The effect on magnetic remanence depends on volume loss during compaction which depends largely on clay content, so clay rich sediments will have the largest effect.
Other processes that do not involve post-depositional physical rotation of magnetic particles, including “viscous” (in the sense of magnetic viscosity) remagnetization and diagenetic alteration resulting in a chemical remanence, may also modify the DRM. All of these processes influence the intensity of remanence and hamper our efforts to decipher the original geomagnetic signal.
Some sedimentary remanences show a remanence vector that is generally shallower than the applied field, a phenomenon known as inclination error. We show the results of a typical laboratory redeposition experiment (Tauxe and Kent, 1984) in Figure 7.18. The tangent of the observed inclination is usually some fraction (∼ 0.4-0.6) of the tangent of the applied field (King 1955). Thus, inclination error is at a maximum at 45∘ and is negligible at high and low inclinations. Tauxe and Kent (1984) also demonstrated a strong link between DRM efficiency and inclination error. Sediments exhibiting inclination error have the strongest remanences in horizontal fields and the weakest in vertical fields.
Interestingly, many natural sediments (e.g., deep sea or slowly deposited lake sediments) display no inclination error. The worst culprits appear to be sediments whose NRM is carried by detrital hematite, a flaky particle with a small saturation remanence.
Examination of Equations 7.1 and 7.2 reveals an interesting dependence of relaxation time on the coercivity of magnetic particles. We can coax the magnetization of otherwise firmly entrenched particles to follow an applied field if that field is larger than the coercivity. Exposing a particle to a large magnetic field will allow magnetic particles whose coercivities are below that field to flip their magnetic moments to a direction at a more favorable angle to the applied field, resulting in a gain in magnetic remanence in that direction. This type of magnetic remanence is called an isothermal remanent magnetization or IRM (see Chapters 4 and 5).
IRM is unfortunately a naturally occurring remanence. When lightning strikes in the neighborhood, rocks can become either partially or entirely remagnetized (see Figure 7.19). These magnetizations often mask the primary magnetization (TRM or DRM), but can sometimes be removed.
IRMs can also be useful. Their magnitude is sensitive to magnetic mineralogy, concentration and grain size, and the properties of IRMs are used for a variety of purposes, some of which we will discuss in Chapters 8 and 10. In anticipation of those chapters, we will briefly introduce some of the properties of laboratory acquired IRMs.
In Figure 7.20 we illustrate the behavior of an initially demagnetized specimen as it is subjected to increasing impulse fields. The maximum IRM achieved is known as sIRM (saturation IRM) or Mr (and sometimes Mrs). After saturation, the specimen can be turned around and subjected to increasingly large back-fields. The back-field sufficient to remagnetize half of the moments (resulting in a net remanence of zero) is the coercivity of remanence (Hcr or μoHcr depending on the magnetic units). Alternatively, we could use the magnetic field required to impart an IRM that is half the intensity of the saturation remanence (Hcr′′′). We call this the H1∕2 method.
By now we have encountered four different methods for estimating the coercivity of remanence (see Table C.1). Each of these requires a monogenetic population of grains and will give meaningless numbers if there are several different minerals or grain size populations in the specimen. The “ascending loop intercept method” also assumes uniaxial single domain particles, so differences between, for example, the Hcr estimate and the Hcr′ estimate could provide clues about departures from that assumption.
Sometimes rocks are exposed to elevated temperatures for long periods of time (for example during deep burial). The grains with relaxation times (at the elevated temperature) shorter than the exposure time may have acquired a so-called thermo-viscous remanent magnetization (TVRM). To erase this remanence, the rock must be heated in the laboratory (in zero field) hot enough and long enough. We cannot wait for geologically meaningful periods of time, so we must estimate what the effective blocking temperature of the TVRM component will be on laboratory time scales. To do this, we follow the logic of Pullaiah et al. (1975). If we hold Hc, Ms and v constant in Equation 7.2, we can calculate the relationship of τ to temperature by:
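Equating the blocking condition for two time-temperature pairs (τ1, T1) and (τ2, T2), with the blocking energy proportional to Hc(T)Ms(T)v, leads to the Pullaiah et al. (1975) relation in its standard form:

```latex
\frac{T_{1}\,\ln(C\tau_{1})}{H_{c}(T_{1})\,M_{s}(T_{1})} = \frac{T_{2}\,\ln(C\tau_{2})}{H_{c}(T_{2})\,M_{s}(T_{2})}
```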
For uniaxial anisotropy, Hc(T) ≃ ΔNMs for magnetite, so Hc varies linearly with Ms. Exploiting this property, we can simplify Equation 7.13 to:
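Because Hc varies linearly with Ms for shape anisotropy, one factor of Ms replaces Hc on each side, giving:

```latex
\frac{T_{1}\,\ln(C\tau_{1})}{M_{s}^{2}(T_{1})} = \frac{T_{2}\,\ln(C\tau_{2})}{M_{s}^{2}(T_{2})}
```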
Now all we need is the variation of saturation magnetization with temperature. As previously noted, this is not perfectly known. However, using the approximate relationship for Ms(T) from Chapter 3 (γ=0.38 in Equation 3.11 and assuming Tc = 580∘C as in Chapter 6), we can draw the plot shown in Figure 7.21a for τ versus Tb. This plot is different in detail from that of Pullaiah et al. (1975) because of the difference in assumed Ms(T) behavior.
The theoretical treatment for hematite is different than for magnetite because the dominant source of anisotropy is either a defect moment or magnetocrystalline anisotropy, and the relationship of coercivity with temperature is different than for shape anisotropy. In fact, this relationship for hematite is very poorly constrained. Pullaiah et al. (1975) assumed Hc(T) ∝ Ms3(T), from which they derived:
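With Hc(T) ∝ Ms3(T), the same blocking condition takes the form:

```latex
\frac{T_{1}\,\ln(C\tau_{1})}{M_{s}^{4}(T_{1})} = \frac{T_{2}\,\ln(C\tau_{2})}{M_{s}^{4}(T_{2})}
```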
Using experimental values of blocking temperature for hematite, they calculated nomograms for hematite similar to that shown in Figure 7.21b.
Curves like those shown in Figure 7.21 allow us to predict what the blocking temperature of a viscous magnetization acquired over many years will be under laboratory conditions (relaxation times of hundreds of seconds). There are many assumptions built into the plot shown in Figure 7.21 and some discussion in the literature (see Dunlop and Özdemir, 1997 for a good summary). Because of the sensitivity to the Ms(T) behavior and the even more poorly constrained (at least for hematite) Hc(T) behavior, these plots should be used with caution.
A rock collected from a geological formation has a magnetic remanence which may have been acquired by a variety of mechanisms, some of which we have described. The remanence of this rock is called simply a natural remanent magnetization (NRM), in order to avoid a genetic connotation in the absence of other compelling evidence. The NRM is often a combination of several components, each with its own history. The NRM must be picked apart and the various components carefully analyzed before an origin can be ascribed. The procedures for doing this are described in later chapters.
Another way to magnetize rocks (although not in nature) is to subject a sample to an alternating field (see Figure 7.22). Particles whose coercivity is lower than the peak oscillating field will flip and flop along with the field. These entrained moments will become stuck as the peak field gradually decays below the coercivities of individual grains. Assuming that there is a range of coercivities in the sample, the low stability grains will be stuck half along one direction of the alternating field and half along the other direction; the net contribution to the remanence will be zero. This is the principle of so-called alternating field (AF) demagnetization which we will discuss in later chapters.
If there is a small DC bias field superposed on the alternating field, then there will be a statistical preference in the remagnetized grains for the direction of the bias field, analogous to TRM acquired during cooling. This net magnetization is termed the anhysteretic remanent magnetization or ARM. By analogy to partial thermal remanence, one can impart a partial anhysteretic remanence (pARM) by only turning on the DC field for part of the AF cycle (solid blue line in Figure 7.22). Also, by normalizing the magnetization (volume normalized with units of Am−1) by the DC field (also converted to Am−1), one has the dimensionless parameter known as ARM susceptibility (χARM). This parameter assumes that ARM is linearly related to the inducing field so that χARM is independent of the applied field. This is of course only true for small DC fields and may not be true for the fields used in most laboratories (50-100 μT).
A related remanence known as the gyromagnetic remanent magnetization or GRM is a somewhat mysterious remanence that is acquired by stationary specimens in moving fields or by rotating specimens in either steady or moving fields. It is most frequently observed as a component of magnetization acquired during alternating field demagnetization that is perpendicular to the last axis of demagnetization. It was originally thought to arise from the gyroscopic response of SD moments to the torque of an applied field which, in anisotropic distributions of SD moments, results in a net moment perpendicular to the applied field (Stephenson, 1981). But truly uniaxial single domain particles will have no net remanence if demagnetized along all three axes, no matter how anisotropic the distribution of easy axes is. More recently, Potter and Stephenson (2005) hypothesized that small deviations from the uniaxial constraint for small acicular magnetic particles could explain the behavior. They performed experiments on elongate particles of maghemite (1 μm in length and 0.22 μm in diameter) and confirmed that the non-ideal (not strictly uniaxial) behavior could explain the GRM. They referred to these particles as being single domain, and while they may not have had domain walls, it is likely that such large particles were in fact in the size range that exhibits vortex remanent states (see Chapter 4). It is therefore likely that anisotropic distributions of vortex state particles are the cause of GRM.
SUPPLEMENTAL READINGS: Dunlop and Özdemir (1997), Chapters 8, 10, 11, 13.
SD grains of hematite (αFe2O3) are precipitating from solution at a temperature of 280K. The coercivity is μoHc= 1 T. Use what you need from Table 6.1 from Chapter 6 and find the diameter of a spherical hematite particle with a relaxation time of 100 seconds.
In the text, you were given a brief discussion of the time required for a magnetic grain to become substantially aligned with the magnetic field in a viscous fluid. For water at room temperature, η is approximately 10−3 kg m−1 s−1. Calculate the time constant of alignment for saturation values of magnetization for both magnetite and hematite in water. [HINT: use values listed in Table 6.1 from Chapter 6.]
Sometimes rocks are exposed to elevated temperatures for long periods of time (for example during deep burial). The grains with relaxation times (at the elevated temperature) shorter than the exposure time will have acquired a so-called thermo-viscous remanence. In order to demagnetize this remanence on laboratory time scales of, say, 100 seconds, we need to know the blocking temperature on laboratory time scales.
a) Use the curves in Figure 7.21a to determine the laboratory blocking temperature of a VRM acquired since the last reversal (0.78 Ma) by a rock remaining at 20∘ C for magnetite. Do the same for a rock buried for 30 Ma to a depth at temperature 250∘C.
b) Hydrothermal activity elevates the temperature of a red sandstone to 225∘C for a time interval of 1000 yr and results in formation of thermoviscous remanent magnetization (TVRM). If hematite is the exclusive ferromagnetic mineral in this red sandstone, approximately what temperature of thermal demagnetization is required to unblock (remove) this TVRM? The time at maximum temperature during thermal demagnetization is approximately 30 min.
Relaxation time is controlled by saturation magnetization, coercivity, volume and temperature. Write a program that will draw curves for a given relaxation time for coercivity (on the X axis) versus grain volume (on the y axis). Plot out curves for 100 sec, 1 Myr and 1 Gyr for magnetite and for hematite. Use coercivities from 1 mT to 100 mT.
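One way to start this exercise (a sketch, not the official solution; the function names and the frequency factor C = 1010 s−1 are our own illustrative choices) is to solve the Néel equation τ = (1∕C) exp(Kv∕kT) for the grain volume at fixed relaxation time, with anisotropy energy density K = μoHcMs∕2 as in Equation 7.2:

```python
import numpy as np

# For a fixed relaxation time tau, solve tau = (1/C)*exp(K*v/(k_B*T))
# for grain volume v as a function of coercivity, with K = mu0*Hc*Ms/2.
# Constants below are illustrative values; Ms is for magnetite.

KB = 1.381e-23   # Boltzmann constant (J/K)
C = 1e10         # frequency factor (s^-1), an assumed round number

def blocking_volume(tau, mu0_hc, ms, temp=300.0):
    """Volume (m^3) at which the relaxation time equals tau, given
    coercivity mu0*Hc (tesla) and saturation magnetization ms (A/m)."""
    K = mu0_hc * ms / 2.0                    # anisotropy energy density (J/m^3)
    return KB * temp * np.log(C * tau) / K

# curves for magnetite (Ms ~ 480 kA/m), coercivities from 1 mT to 100 mT
hc = np.logspace(-3, -1, 50)                 # mu0*Hc in tesla
for tau, label in [(100.0, "100 s"), (3.15e13, "1 Myr"), (3.15e16, "1 Gyr")]:
    v = blocking_volume(tau, hc, 480e3)
    d = (6.0 * v / np.pi) ** (1.0 / 3.0) * 1e9   # equivalent spherical diameter, nm
    print(f"tau = {label}: d from {d.max():.1f} nm (1 mT) to {d.min():.1f} nm (100 mT)")
```

Plotting v against μoHc for each τ (and repeating with Ms for hematite) then gives the requested family of curves; note that longer relaxation times correspond to larger blocking volumes at a given coercivity.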
There is a lively field within rock magnetism that exploits the dependence of rock magnetic parameters on concentration, grain size and mineralogy for gleaning information about past (and present) environments. Examples of applied rock magnetism (environmental magnetism) run from detection of industrial pollution to characterizing changes across major climatic events. In this chapter we will review the basic tool-kit used by environmental magnetists and illustrate various applications with examples.
Applied rock magnetism relies on imaging techniques and magnetic measurements. Images come from optical microscopes, magnetic force microscopes, scanning electron and transmission electron microscopes using magnetic separates, polished sections or thin sections. Magnetic measurements include magnetic susceptibility, magnetic remanence and hysteresis, all as a function of temperature. All of these measurements can also be done as a function of orientation, but orientation is not usually important in environmental applications; anisotropy of rock magnetic measurements will be the topic of a later chapter. A list of the most frequently used parameters is included in Table 8.1.
Images of magnetic phases are used to shed light on the origin of the magnetic phases. Scanning electron microscope images of igneous (Figure 8.1a), detrital or aeolian (Figure 8.1b), authigenic (Figure 8.1c), biogenic (Chapter 6), anthropogenic (Figure 8.1d) and cosmic (Figure 8.1e) sources all have distinctive ear-marks, so actually looking at the particles in question can provide invaluable information.
Table 8.1: Frequently used rock magnetic parameters.

| Parameter | Symbol | Units | Section(s) |
|---|---|---|---|
| median destructive temperature | MDT | ∘C or K | 8.2 |
| Curie (Néel) temperature | Tc | ∘C or K | 3.3, 8.2 |
| Hopkinson effect | Th | ∘C or K | 8.2 |
| Verwey transition | Tv | ∘C or K | 4.1.3, 6.1.1 |
| Morin transition | Tm | ∘C or K | 6.1.2 |
| pyrrhotite transition | Tp | ∘C or K | 6.2 |
| low field (initial) susceptibility | χlf | | 5.2.2 |
| high field susceptibility | χhf | | 5.2.2, 8.5 |
| saturation remanence | Mr or sIRM | | 5.2.1, 7.7, C.1 |
| partial anhysteretic remanence | pARM | | 7.10 |
| coercivity | Hc or μoHc | Am−1 or T | 4.1.3, 5.2.1, C.1 |
| coercivity of remanence | Hcr or μoHcr | Am−1 or T | 5.2.1, 7, C.1 |
| HIRM | Mr − IRMx | | 8.7 |
| median destructive field | MDF | Am−1 or T | 8.2 |
| δ − δ | δFC∕δZFC | dimensionless | 8.8.4 |
In Table 8.1 we list several critical temperatures useful for characterizing the magnetic mineralogy of specimens. The Curie (and Néel) temperatures, above which spontaneous magnetization ceases, the Verwey and Morin transitions in magnetite and hematite respectively, and the pyrrhotite transition, at which the magnetic anisotropy energies change character with an observable effect on the magnetization, were all encountered in previous chapters. However, several critical temperatures are new or require additional clarification. The so-called Hopkinson effect listed in Table 8.1 is discussed in Section 8.3.2 under magnetic susceptibility measurements. The median destructive temperature (MDT) is simply the temperature at which 50% of the NRM is destroyed when a specimen is heated to that temperature and cooled in zero field. It is a measure of stability, only rarely used, and mentioned here for completeness. [An analogous parameter for stability against alternating fields is the median destructive field (MDF), the alternating field required to reduce a remanence to 50% of its initial value.]
Although we defined the Curie temperature in Chapter 3, we did not really describe how the measurements were made or how the temperature can be estimated. The principles are illustrated in Figure 8.2. A specimen is placed near the pole pieces of a strong electromagnet. The field gradient will pull a magnetic specimen in. A pick-up coil counteracts this force with a restoring force of equal magnitude. The current required to keep the specimen stationary is proportional to the magnetization. A thermocouple monitors the temperature as the specimen heats in a water cooled oven. Both the output of the pickup coil and the thermocouple can be put into a computer to make a graph of saturation magnetization versus temperature an example of which is shown as the solid line in Figure 8.3a.
Estimating the Curie temperature is not as simple as it seems at first glance. Grommé et al. (1969) used the intersection point of the two tangents to the thermomagnetic curve that bound the Curie temperature, as shown in the inset to Figure 8.3a. The intersecting tangents method is straightforward to do by hand, but is rather subjective and is difficult to automate. Moskowitz (1981) applied a method based on statistical physics for extrapolating the ferromagnetic behavior expected from experimental data through the Curie temperature to determine the point at which the ferromagnetic contribution reaches zero.
A third method for estimating Curie temperatures from thermomagnetic data, the differential method of Tauxe (1998), seeks the maximum curvature in the thermomagnetic curve. This method is shown in Figure 8.3b,c. First, we calculate the derivative (dM∕dT) of the data in Figure 8.3a (see Figure 8.3b). Then, these data are differentiated once again to produce d2M∕dT2 (Figure 8.3c). The maximum in the second derivative occurs at the point of maximum curvature in the thermomagnetic curve and is a reasonable estimate of the Curie temperature.
The principal drawback of the differential method of Curie temperature estimation is that noise in the data is greatly amplified by differentiation, which makes identification of the Curie temperature difficult. This drawback can often be overcome by smoothing the data, either by calculating running means over three or more points, or by applying a filter, either by Fourier methods or in the temperature domain.
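A minimal numpy sketch of the differential method follows: smooth with a running mean, differentiate twice, and take the temperature of maximum d2M∕dT2. The synthetic thermomagnetic curve, its noise level, and the smoothing window are arbitrary choices for illustration:

```python
import numpy as np

def curie_differential(T, M, window=5):
    """Differential Curie temperature estimate (after Tauxe, 1998):
    running-mean smoothing, two numerical derivatives, then the
    temperature at the maximum of d2M/dT2."""
    kern = np.ones(window) / window
    Ms = np.convolve(M, kern, mode="same")                 # smooth M(T)
    dM = np.convolve(np.gradient(Ms, T), kern, mode="same")  # smooth dM/dT
    d2M = np.gradient(dM, T)
    i = slice(2 * window, len(T) - 2 * window)   # skip smoothing edge effects
    return T[i][np.argmax(d2M[i])]

# synthetic noisy thermomagnetic curve with a Curie point at 580 C
rng = np.random.default_rng(0)
Tc_true = 580.0
T = np.linspace(20.0, 700.0, 400)
M = np.clip(1.0 - T / Tc_true, 0.0, None) ** 0.4 + rng.normal(0.0, 0.002, T.size)
Tc_est = curie_differential(T, M)
```

The point of maximum curvature sits where the steeply falling ferromagnetic curve meets the flat paramagnetic tail, so the argmax of the second derivative lands near the true Curie temperature despite the added noise.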
There are a host of other measurements of remanent magnetization as a function of temperature. These can contribute significantly to the discussion of degree of alteration, degree of particle interaction and grain size of the magnetic phases in a specimen. A complete discussion of these are beyond the scope of this chapter, but the student should be aware of the rich possibilities of low and high temperature measurements of remanence. For interesting examples, peruse the various issues of the IRM Quarterly at: http://www.irm.umn.edu/IRM/quarterly.html.
We first encountered the concept of magnetic susceptibility in Chapter 1 and again in more detail in Chapters 3 and 5. We defined it as the ratio of the induced magnetization MI to the inducing magnetic field H, i.e., χ = MI∕H. Because everything in a rock or mineral separate contributes to the magnetic susceptibility, it can be a fertile source of information on the composition of the sample. [For the same reasons, it can also be somewhat nightmarish to interpret on its own.] It is quick and easy to measure both in the field and in the laboratory; hence, magnetic susceptibility is used in a variety of ways in applied rock magnetism, including lithologic correlation, magnetic fabric, magnetic grain size/domain state, mineralogy and so on.
It is worth thinking briefly about what controls magnetic susceptibility and what the data might mean. At an atomic level, magnetic susceptibility results from the response of electronic orbits and/or unpaired spins to an applied field (Chapter 3). The diamagnetic response (orbits) is extremely weak, and unless a specimen, e.g., from some ocean sediments, is nearly pure carbonate or quartz, it can be neglected. The paramagnetic response of, say, biotite, is much stronger, but if there is any appreciable ferromagnetic material in the specimen, the response will be dominated by that. In highly magnetic minerals such as magnetite, the susceptibility is dominated by shape anisotropy. For a uniformly magnetized particle (e.g., small SD magnetite), the maximum susceptibility is at a high angle to the easy axis, because the moments are already at saturation along the easy direction. So we have the somewhat paradoxical result that uniformly magnetized particles have maximum susceptibilities along the short axis of elongate grains. For particles in the vortex remanent state, for multi-domain particles, and perhaps for strongly flowered grains, this is not the case, and the maximum susceptibility is along the particle length. Another perhaps non-intuitive behavior is that of superparamagnetic particles, whose response is quite large; we learned in Chapter 7 that it can be as much as 27 times larger than that of a stable single domain particle of the same size! Chains of particles may also have magnetic responses arising from inter-particle interaction. Therefore, although magnetic susceptibility is quick to measure, its interpretation may not be straightforward.
Many laboratories use equipment that works on the principle illustrated in Figure 8.4 whereby an alternating current is driven through the coil on the right inducing a current in the coil on the left. This alternating current generates a small alternating field (generally less than 1 mT) along the axis of the coil. When a specimen is placed in the coil (Figure 8.4b), the alternating current induces an alternating magnetic field in the specimen. This causes an offset in the alternating current in the coil on the right which is proportional to the induced magnetization. After calibration, this offset can then be cast in terms of magnetic susceptibility. If the specimen is placed in the solenoid in different orientations the anisotropy of the magnetic susceptibility can be determined, a topic which we defer to Chapter 13.
Susceptibility can be measured as a function of temperature by placing the specimen in a heating coil (see examples in Figure 8.5). We know from Chapter 3 that diamagnetism is negative and independent of temperature (dashed line in Figure 8.5a) and that paramagnetism is inversely proportional to temperature (solid line in Figure 8.5a). There is a difference of a factor of ln(Cτ), or about 27, between the superparamagnetic and the stable single domain magnetic susceptibility for a given grain. This means that as the blocking temperatures of the magnetic grains in a particular specimen are reached, the susceptibility of the grain will increase by this factor until the Curie temperature is reached, at which point only paramagnetic susceptibility is exhibited and the susceptibility will drop inversely with temperature (solid line in Figure 8.5b). An SP peak in susceptibility below the Curie temperature could explain the so-called Hopkinson effect, a peak in magnetic susceptibility associated with the Curie temperature. The Hopkinson effect is frequently used to approximate Curie temperatures but may actually be related to unblocking in some specimens.
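The factor of about 27 quoted above follows directly from ln(Cτ), assuming a frequency factor C of 10^10 s−1 and a laboratory observation time of 100 s:

```python
import math

C = 1e10      # assumed frequency factor (s^-1)
tau = 100.0   # laboratory observation time (s)
print(math.log(C * tau))   # ln(1e12) = 27.6
```

A longer observation time or a different choice of C changes the factor only logarithmically, which is why "about 27" is quoted as a generic value.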
Susceptibility can also be measured as a function of the frequency of the applied oscillating field. Superparamagnetic behavior depends on the time scale of observation (the choice of τ) so grains may behave superparamagnetically at one frequency, but not at another. Frequency dependent susceptibility χfd can therefore be used to constrain grain size/domain state of magnetic materials. We illustrate this effect in Figure 8.6 which shows data gathered at the Institute for Rock Magnetism (IRM) on samples of the Tiva Canyon Tuff which are well known for their superparamagnetic/single domain grain size range (e.g., Schlinger et al., 1991).
In Figure 8.6a we show measurements made at room temperature. Because of the far greater magnetic susceptibility of superparamagnetic particles, χ drops with the loss of SP behavior. Magnetic grains that act superparamagnetically at 1 Hz may behave as stable single domains at higher frequencies (remember that SP behavior depends on time scale of observation), hence the loss of magnetic susceptibility with increasing frequency in the Tiva Canyon Tuff specimens. While the susceptibility drops with increasing frequency, it can rise with increasing temperature as described in Section 8.3.2. This behavior is shown in Figure 8.6b.
Although most laboratories make magnetic susceptibility measurements on small specimens, it is also possible to make measurements on core sections or even at the outcrop. The latter can be done with hand held susceptometers of various shapes and sizes, depending on the application. We show a map made with a field device in Figure 8.7. Magnetic susceptibility is enhanced where magnetite spheres produced in the combustion of petroleum products are present as pollutants in dust particles. Therefore, magnetic susceptibility can be used as a tracer of industrial pollution (see, e.g., Petrovsky et al. 2000).
Table 8.1 lists various magnetizations that are useful in applied rock magnetism. These were all introduced in previous chapters but several deserve additional discussion. We will discuss the hysteresis parameters, Mr and Ms together with their critical field counterparts Hc and Hcr in Section 8.5. In this section we will flesh out our understanding of IRM with particular attention to its uses in applied rock magnetism.
Cisowski (1981) suggested that by comparing IRM acquisition curves like that shown in Figure 7.20 in Chapter 7 with the curves obtained by progressively demagnetizing the sIRM in alternating fields, one might be able to detect the effect of particle interaction. He collected data from a specimen thought to be dominated by uniaxial single domain particles (the Lambert plagioclase) and from a specimen of chiton teeth, thought to be dominated by interacting particles of magnetite. The IRM acquisition data for the two specimens are shown as the solid lines in Figure 8.8a and the demagnetization of the saturation IRMs are shown as dashed lines. The field at which the demagnetization curve crosses the acquisition curve is called the crossover point, here designated Rx. This point should theoretically be reached when the IRM is half the saturation value for uniaxial single domain particles. The value of nearly 0.5 for the Lambert plagioclase (Rx(LP) in Figure 8.8a) supports the claim of uniaxial single domain behavior for this specimen. The much depressed value of Rx(C) ≃ 0.25 for the chiton teeth also supports the interpretation of significant inter-particle interaction for that specimen. Magnetic interactions are nowadays more frequently assessed using the FORC diagrams discussed in Chapter 5, but the cross-over technique has been used extensively in the past.
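The crossover point can be picked off numerically by locating where the two curves intersect. A sketch, using a hypothetical non-interacting case in which the normalized acquisition and demagnetization curves are complementary (so Rx should come out at 0.5):

```python
import numpy as np

def crossover(B, acq, dmg):
    """Field Bx and normalized remanence Rx where the IRM acquisition
    curve (acq) first crosses the AF demagnetization curve (dmg),
    both normalized to the sIRM."""
    diff = acq - dmg
    i = int(np.argmax(diff > 0))                 # first sample past the crossing
    f = -diff[i - 1] / (diff[i] - diff[i - 1])   # linear interpolation fraction
    Bx = B[i - 1] + f * (B[i] - B[i - 1])
    Rx = acq[i - 1] + f * (acq[i] - acq[i - 1])
    return Bx, Rx

# synthetic curves: exponential acquisition, complementary demagnetization
B = np.linspace(0.0, 0.3, 61)      # field in tesla
acq = 1.0 - np.exp(-B / 0.05)      # hypothetical acquisition curve
dmg = 1.0 - acq                    # non-interacting complement
Bx, Rx = crossover(B, acq, dmg)
```

Inter-particle interaction would shear the two real curves apart, pushing Rx below 0.5 as in the chiton-teeth example.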
Another method for detecting magnetic interactions was developed by Sugiura (1979). He showed that the ARM acquired as a function of DC bias field (BDC) is a strong function of magnetite concentration. We show examples of two ARM acquisition curves in Figure 8.8b, one with high magnetite concentration (2.33 volume percent, circles) and one with low magnetite concentration (2.5 x 10−4 volume percent, squares). The ARM acquisition curve for the low concentration is highly non-linear and achieves a substantially higher fraction of the saturation IRM than does the curve for the high concentration, which is linear and much less efficient.
Robertson and France (1994) suggested that if populations of magnetic materials have generally log-normally distributed coercivity spectra and if the IRM is the linear sum of all the contributing grains, then an IRM acquisition curve could be “unmixed” into the contributing components. The basic idea is illustrated in Figure 8.9 whereby two components, each with log normally distributed coercivity spectra (see dashed and dashed-dotted lines in the inset), create the IRM acquisition curve shown. By obtaining a very well determined IRM acquisition plot (the “linear acquisition plot” or LAP in Figure 8.9 using the terminology of Kruiver et al., 2001), one could first differentiate it to get the “gradient acquisition plot” or GAP (heavy solid line in the inset to Figure 8.9). This then can be “unmixed” to get the parameters of the contributing components, such as the mean and standard deviation of the log-normal distribution (called B1∕2 and DP respectively by Robertson and France, 1994). For consistency with prior usage in this book, we use the μoH and H terminology for coercivity depending on unit choice. Note that H1∕2 is a measure of Hcr if there is only one population of coercivities (see Table C.1 and Appendix C.1 for a summary of coercivity of remanence). Also, unmixing of other forms of magnetic remanence (e.g., ARM), demagnetization as well as acquisition, and other distributions are also possible, as are more complex methods of inversion (see e.g., Egli, 2003).
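The forward model behind this unmixing is easy to sketch: each component contributes a Gaussian (in log10 of field) to the GAP, parameterized by its sIRM, B1∕2 and DP, and the LAP is the running integral of the GAP. The two component values below are hypothetical:

```python
import numpy as np

def gap_component(logB, sirm, logB12, dp):
    """One component's contribution to the gradient acquisition plot
    (GAP): a Gaussian in log10(field) with mean log10(B1/2) and
    standard deviation DP, integrating to the component's sIRM."""
    return (sirm / (dp * np.sqrt(2.0 * np.pi))
            * np.exp(-0.5 * ((logB - logB12) / dp) ** 2))

logB = np.linspace(-3.0, 0.5, 500)   # log10 of field in tesla: 1 mT to ~3 T
# hypothetical two-component mixture, e.g. a soft and a hard phase
gap = (gap_component(logB, 1.0, np.log10(0.05), 0.30)
       + gap_component(logB, 0.4, np.log10(0.70), 0.25))
lap = np.cumsum(gap) * (logB[1] - logB[0])   # LAP = running integral of GAP
```

Fitting real data runs this model in reverse: the measured LAP is differentiated, and the component parameters (sIRM, B1∕2, DP) are adjusted until the modeled GAP matches the observed one.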
Another very useful technique for characterizing the magnetic mineralogy in a sample is the 3D IRM unblocking technique of Lowrie (1990). Some important magnetic phases in geological materials (Table 6.1; Chapter 6) are magnetite (maximum blocking temperature of ∼580∘C, maximum coercivity of about 0.3 T), hematite (maximum blocking temperature of ∼ 675∘C and maximum coercivity larger than several tesla), goethite (maximum blocking temperature of ∼ 125∘C and maximum coercivity of much larger than 5 T), and various sulfides. The relative importance of these minerals in bulk samples can be constrained by a simple trick that exploits both differences in coercivity and unblocking temperature (Lowrie, 1990).
This technique anticipates somewhat the chapter on demagnetization techniques. It also should remind you of Problem 2 in Chapter 6. In order to partially demagnetize a fraction of the magnetic remanence, a specimen is heated to a given temperature Ti at which all those grains whose blocking temperatures have been exceeded are by definition superparamagnetic. If the heating is done in zero applied field, the net magnetization of those grains will average to zero (because the SP particles are in equilibrium with a null field). Therefore, the contribution of those grains with a blocking temperature of Ti will be erased.
The “3D IRM” technique of Lowrie (1990) proceeds as follows: an IRM is imparted along one specimen axis in a field large enough to saturate the highest coercivity phase present, then along a second, orthogonal axis in an intermediate field (e.g., 0.4 T), and finally along the third axis in a small field (e.g., 0.12 T). The specimen is then thermally demagnetized in zero field in a series of steps, and the three orthogonal components of remanence are measured after each step. In this way the unblocking temperature spectrum of each of three coercivity fractions can be tracked separately.
An example of 3D IRM data are shown in Figure 8.10. The curve is dominated by a mineral with a maximum blocking temperature of between 550∘ and 600∘C and has a coercivity less than 0.12 T. These properties are typical of magnetite (Table 6.1; Chapter 6). There is a small fraction of a high coercivity (>0.4 T) mineral with a maximum unblocking temperature > 650∘C, which is consistent with the presence of hematite (Table 6.1; Chapter 6).
IRM and ARM acquisition and demagnetization curves can be a fecund source of information about the magnetic phases in rocks. However, these are extremely time consuming to measure, taking hours for each curve. Hysteresis loops, on the other hand, are quick, taking about 10 minutes to measure the outer loop. In principle, some of the same information could be obtained from hysteresis loops as from the IRM acquisition curves. [For computational details, see Appendix C.1.]
Hysteresis loops, like IRM acquisition curves are the sum of all the contributing particles in the specimen. There are several basic types of loops which are recognized as the “building blocks” of the hysteresis loops we measure on geological materials. We illustrate some of the building blocks of possible hysteresis loops in Figure 8.11. Figure 8.11a shows the negative slope typical of diamagnetic material such as carbonate or quartz, while Figure 8.11b shows a paramagnetic slope. Such slopes are common when the specimen has little ferromagnetic material and is rich in iron-bearing phases such as biotite or clay minerals.
When grain sizes are very small (∼10 nm), a specimen can display superparamagnetic “hysteresis” behavior (Figure 8.11c). The SP curve follows a Langevin function L(γ) (see Chapter 5) where γ is MsvB∕kT, but integrates over the distribution of v in the specimen.
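The SP “hysteresis” curve of Figure 8.11c can be sketched by evaluating the Langevin function for an assumed grain size; in practice one integrates over the distribution of v, but a single, hypothetical 15 nm magnetite sphere at room temperature illustrates the shape:

```python
import numpy as np

def langevin(gamma):
    """L(x) = coth(x) - 1/x, with the small-argument limit x/3
    to avoid division by zero near x = 0."""
    gamma = np.asarray(gamma, dtype=float)
    out = np.where(np.abs(gamma) < 1e-4, gamma / 3.0, 0.0)
    big = np.abs(gamma) >= 1e-4
    g = np.where(big, gamma, 1.0)          # dummy value where not used
    return np.where(big, 1.0 / np.tanh(g) - 1.0 / g, out)

KB = 1.381e-23                  # Boltzmann constant (J/K)
MS = 480e3                      # magnetite Ms at room temperature (A/m)
d = 15e-9                       # assumed SP grain diameter (m)
v = (np.pi / 6.0) * d ** 3      # sphere volume
B = np.linspace(-0.5, 0.5, 201)
M_over_Ms = langevin(MS * v * B / (KB * 300.0))
```

The resulting curve is reversible (no loop opening), passes through the origin, and approaches but never reaches saturation, which is the signature of SP behavior.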
Above some critical volume, grains will have relaxation times that are sufficient to retain a stable remanence (Chapter 7). Populations of randomly oriented stable grains can produce hysteresis loops with a variety of shapes (see Chapter 5), depending on the origin of magnetic anisotropy and domain state. We show loops from specimens that illustrate representative styles of hysteresis behavior in Figure 8.11d-f. Figure 8.11d shows a loop characteristic of specimens whose remanence stems from SD magnetite with uniaxial anisotropy. In Figure 8.11e, we show data from specular hematite whose anisotropy ought to be magnetocrystalline in origin (hexagonal within the basal plane). Note the very high Mr∕Ms ratio of nearly one. Finally, we show a loop that has a lower Mr∕Ms ratio than single domain, yet some stability. Loops of this type have been characterized as pseudo-single domain or PSD (Figure 8.11f).
In the messy reality of geological materials, we often encounter mixtures of several magnetic phases and/or domain states. Such mixtures can lead to distorted loops, such as those shown in Figure 8.11g-i. In Figure 8.11g, we show a mixture of hematite plus SD-magnetite. The loop is distorted in a manner that we refer to as goose-necked. Another commonly observed mixture is SD plus SP magnetite which can result in loops that are either wasp-waisted (see Figure 8.11h) or pot-bellied (see Figure 8.11i).
Considering the loops shown in Figure 8.11g-i, we immediately notice that there are two distinct causes of loop distortion: mixing two phases with different coercivities and mixing SD and SP domain states. Tauxe et al. (1996) differentiated the two types of distortion as “goose-necked” and “wasp-waisted” (see Figure 8.11g,h) because they look different and they mean different things.
Jackson et al. (1990) suggested that the ΔM curve (see Figure 5.5b in Chapter 5) could be differentiated to reveal different coercivity spectra contained in the hysteresis loop. The ΔM curve and its derivative (dΔM∕dH) are sensitive only to the remanence carrying phases, and not, for example, to the SP fraction. We can use these curves to distinguish the two sources of distortion. Hence, in Figure 8.12, we show several representative loops, along with the ΔM and dΔM∕dH curves. Distortion resulting from two phases with different coercivities (e.g., hematite plus magnetite or two distinct grain sizes of the same mineral) results in a “two humped” dΔM∕dH curve, whereas wasp-waisted loops, which result from mixtures of SD + SP populations, have only one “hump”.
One quest of applied rock magnetism is a diagnostic set of measurements that will yield unambiguous grain size information. To this end, large amounts of rock magnetic data have been collected on a variety of minerals that have been graded according to size and mode of formation. The most complete set of data are available for magnetite, as this is the most abundant crustal magnetic phase in the world. There are three sources for magnetite typically used in these experiments: natural crystals that have been crushed and sieved into grain size populations, crystals that were grown by a glass ceramic technique and crystals grown from hydrothermal solution. In Figure 8.13a-c we show a compilation of grain size dependence of coercive force, remanence ratio, and coercivity of remanence respectively. There is a profound dependence not only on grain size, but on mode of formation as well. Crushed particles tend to have much higher coercivities and remanence ratios than grown crystals, presumably because of the increased dislocation density which stabilizes domain walls due to a minimum in interaction energy between internal stress and magnetostriction constants of the mineral. These abnormally high values disappear to a large extent when the particles are annealed at high temperature – a procedure which allows the dislocations to “relax” away (see, e.g., Dunlop and Özdemir, 1997). The behavior of low-field magnetic susceptibility is shown in Figure 8.13d. There is no strong trend with grain size over the entire range of grain sizes from single domain to multi-domain magnetite. However, as already mentioned, susceptibility is predicted to be sensitive to the SD/SP domain state transition.
Grain size trends in ARM are shown in Figure 8.13e. ARM has been converted to what is known as the “susceptibility of ARM” or χARM (see Chapter 7). This is done by assuming that ARM is linearly related to the applied DC field and calculating the ratio of ARM (in units of, for example, Am2) to the DC field (usually 50-100 μT). To do this, the DC field units must first be converted to units of H by dividing by μo and the ARM must be a volume normalized remanence in units of M. Because H and M are both in units of Am−1, χARM is dimensionless. The trend in χARM shown in Figure 8.13e is very poorly constrained because ARM is also a strong function of concentration and the method by which the particles were prepared.
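The unit bookkeeping described above can be condensed to a few lines; the specimen values in the example are hypothetical:

```python
import numpy as np

MU0 = 4e-7 * np.pi   # permeability of free space (T m/A)

def chi_arm(arm_moment, dc_field_T, volume_m3):
    """Dimensionless susceptibility of ARM: the volume-normalized ARM
    (A/m) divided by the DC bias field converted from B (T) to H (A/m)."""
    M = arm_moment / volume_m3    # moment (Am^2) -> magnetization (A/m)
    H = dc_field_T / MU0          # B (T) -> H (A/m)
    return M / H

# hypothetical specimen: 10 cm^3, ARM moment 5e-6 Am^2, 100 uT bias field
ratio = chi_arm(5e-6, 100e-6, 10e-6)
```

Because both numerator and denominator end up in Am−1, the result is dimensionless, as required for χARM.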
A bewildering array of parameter ratios are in popular use in the applied rock and mineral magnetism literature. The most commonly used ratios are listed in Table 8.1. Most of these are new to us in this chapter and deserve some discussion. Two of the most popular ratios are the hysteresis ratios Mr∕Ms and Hcr∕Hc. These are sensitive to remanence state (SP, SD, flower, vortex, MD) and the source of magnetic anisotropy (cubic, uniaxial, defects), hence reveal something about grain size and shape. Both of these ratios can be estimated from a typical hysteresis experiment (Chapter 5) and the results of many such experiments can be compiled onto a single diagram as in Figure 8.14.
Figure 8.14a is known as the Day diagram (Day et al. 1977; see Section 5.3 in Chapter 5). Day diagrams are divided into regions of nominally SD, PSD and MD behavior using some theoretical bounds as guides. The designation PSD stands for pseudo-single domain and has Mr∕Ms ratios in between those characteristic of SD behavior (0.5 or higher) and MD (0.05 or lower). In practice nearly all geological materials plot in the PSD box, which comprises the entire flower and vortex state range. The PSD designation should really be split into the truly pseudo-single domain behavior of the flower state and what would better be described as pseudo-multi-domain (PMD) behavior of the vortex state. Nonetheless, data such as those shown in Figure 8.14 are often interpreted in terms of grain size using the crushed data shown in Figure 8.13 as calibration. The problem, however, is that the trends strongly depend on sample preparation, so the absolute grain size interpretations in the literature are usually wrong.
Part of the problem is that the hysteresis behavior of multi-domain assemblages is similar to that of superparamagnetic particles (Chapter 5) and more information (such as behavior as a function of temperature) is necessary for a correct interpretation. Moreover, by taking the ratio Hcr∕Hc we lose information. For this reason, Tauxe et al. (2002) argued for the much older practice of plotting Mr∕Ms versus Hcr and Hc separately (Néel, 1955). This type of plot, known as the squareness-coercivity diagram, is shown in Figure 8.14b. The “F” and “V” designations for flower and vortex respectively were approximated by micromagnetic modelling (Tauxe et al. 2002).
The S-ratio is the ratio of the IRM acquired in a back field of magnitude x to the saturation IRM, Mr (see Table 8.1). HIRM is not really a ratio, but the difference between the sIRM, Mr, and the IRM remaining after application of a backfield of magnitude x; it is the fraction of Mr “harder” than field x. These parameters are frequently used in paleoceanographic and environmental applications because they are sensitive to changes in magnetic mineralogy.
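Written directly from these definitions (Table 8.1), with the caveat that sign conventions for the backfield IRM vary between authors:

```python
def s_ratio(irm_backfield, sirm):
    """IRM acquired in a back field of magnitude x over the sIRM.
    (Sign conventions vary; some authors quote -IRM_-x / sIRM.)"""
    return irm_backfield / sirm

def hirm(sirm, irm_x):
    """The fraction of Mr 'harder' than field x: Mr - IRM_x."""
    return sirm - irm_x

# hypothetical specimen with sIRM normalized to 1.0:
s = s_ratio(0.75, 1.0)   # 0.75
h = hirm(1.0, 0.75)      # 0.25
```

A high S-ratio and small HIRM indicate a soft (e.g., magnetite-dominated) assemblage; a large HIRM flags a significant high-coercivity (e.g., hematite or goethite) fraction.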
A ratio of saturation IRM to magnetic susceptibility (Mr∕χ in Table 8.1) of greater than 20 kAm−1 can indicate the presence of minerals other than magnetite (e.g., sulfides). However, identification of exactly which minerals is a rather complicated affair (see Maher et al., 1999).
Finally, based on data similar to those shown in Figure 8.13, Banerjee et al. (1981) argued that the ratio of χARM to χ can be used as a proxy for grain size changes in magnetite (see e.g., Figure 8.15). King et al. (1982) went further and suggested specific grain sizes for a given ratio, but these were based partly on crushed magnetites whose behavior differs substantially from most naturally occurring magnetite. Furthermore, as pointed out by King et al. (1983), χARM is a strong function of concentration, so caution is warranted. Finally, the cgs units used in King et al. (1982) have been translated into SI incorrectly in many applications (e.g., error in table in King et al., 1983). Nonetheless, what is clear from Figure 8.13 is that susceptibility (away from the SP grain sizes) is virtually independent of grain size while χARM is a strong function of grain size, so changes in χARM normalized by χlf should in fact reflect changes in grain size.
Three other ratios are listed in Table 8.1: ARM/Mr and the two Königsberger (1938) ratios, Qn and Qt. Maher et al. (1999) suggest that the former be used to characterize particle interactions, because particle interaction suppresses ARM acquisition but not IRM acquisition. The first Königsberger ratio is the ratio of the induced magnetization to remanent magnetization in a given field, a parameter useful for interpreting the origin of magnetic anomalies (whether from the rock’s remanent magnetization or induced by the Earth’s field). The second is the ratio of the NRM (presumed to be thermal in origin) to a laboratory induced TRM. This ratio is nowadays interpreted in terms of changes in the strength of the ancient magnetic field (to be discussed in later chapters), but Königsberger himself believed the ratio to reflect the age of the rock. He envisioned a type of viscous decay of the remanence over time, so older rocks would have a lower value of Qt than younger ones, a trend that he observed in his own data spanning the last few hundred million years.
Although we have encountered numerous practical applications in this chapter already, there are many more. Rock magnetic parameters are relatively quick and easy to measure, compared to geochemical, sedimentological and paleontological data. When used judiciously, they can be enormously helpful in constraining a wide variety of climatic and environmental changes. There are three basic types of plots of the rock and mineral magnetic parameters discussed in this chapter: maps, bi-plots and depth plots.
Because combustion-related magnetic particles (see, e.g., the fly ash particle in Figure 8.1d) are strongly magnetic, the extent of anthropogenic pollution can be visualized by mapping magnetic susceptibility. Biplots, for example ARM versus χ, have been in use since Banerjee et al. (1981) (see e.g., Figure 8.15). They can be useful for detecting changes in grain size, concentration, mineralogy, etc. If, for example, the data in a plot of Mr versus χ plot on a line, it may be appropriate to interpret the dominant control on the rock magnetic parameters as changes in concentration alone.
Depth plots are useful for core correlation, variations in concentration, mineralogy and grain size as a function of depth. An elegant example of the use of depth plots is the work of Rosenbaum et al. (1996). Figure 8.16 shows depth variations of selected rock magnetic and major (Ti) and trace (Zr) element data along with the pollen zones in sediment cores taken from Buck Lake, Oregon. A simple (first order) interpretation of susceptibility would be that glacial (cold) and interglacial (warm) periods tapped different source areas in the drainage basin to deliver magnetite (higher susceptibility) and hematite (lower susceptibility) during different climatic periods. However, much more complexity emerges when (a) chemical analyses for concentration variation of certain key elements (Fe, Ti, Zr) and (b) petrographic observations of the magnetic fractions are considered. In Figure 8.17a we observe that two elements, Ti and Zr, both derived from detrital heavy minerals are strongly correlated (R2 = 0.82) and the regression line passing (nearly) through the origin confirms that neither element shows anomalous addition or subtraction. In Figure 8.17b and 8.17c, Ti concentration is used as a measure of detrital input variations. Figure 8.17b shows that there has been post-depositional loss (vertical distance between the dashed and solid lines) of Fe, which is evidence that fluctuations in either iron or the magnetic parameters with depth cannot be a simple reflection of changes in detrital material delivery.
In Figures 8.17c and 8.17d, we get further information that hematite (proportional to HIRM) and magnetite (main contributor to susceptibility) both show negative intercepts when plotted against Ti. In both plots, HIRM and χ corresponding to the higher values of Ti are scattered, generally suggesting wide variations in detrital input, perhaps reflecting true changes in the types of detrital material delivered at different times.
But petrographic observations showed that the specimens with high scatter in HIRM (hematite) and χ (magnetite) contain fresh, relatively unweathered volcanic fragments with a wide variation of hematite and magnetite grains, reflecting heterogeneity at the source (volcano). Other samples of hematite and magnetite show pitting and evidence of wholesale mineral dissolution, coinciding with offsets observed in HIRM and χ. Taken together, the data from Figure 8.17 and petrographic evidence provide a more nuanced understanding of the past climate record at Buck Lake. Although the pollen data could mean variations in the temperature alone (glacial/interglacial), magnetic analyses and petrographic observations lead us to a further climatic/environmental clue: sections with wide scatter in susceptibility are heterogeneous and have large chunks of fresh, unaltered material, deposited during rapid, high velocity water flows in the drainage basin. When the hydrologic conditions were much different (low rainfall and iron dissolution), both HIRM and χ values are offset from the ideal dashed lines going through the origin at 45∘ to either axis. The lesson for us is that a multiparameter investigation enriches the understanding gained from environmental magnetic data alone, and can provide additional information.
Earlier in this chapter, we showed an early example (Banerjee et al., 1981) of the utility of ARM-χ plots for detecting environmental and anthropogenic changes in a lake sediment archive. King et al. (1982) rationalized such plots with χARM on the y-axis instead of ARM so that both axes are dimensionless. Yamazaki and Ioka (1997) used magnetic data from pelagic clay sediments to show that errors occur when the implicit assumption of identical sources contributing to x- and y-axis values breaks down. In their pelagic clay sediments, as much as 25% of the observed magnetic susceptibility (χ) came from paramagnetic clays rather than iron oxides alone.
Figure 8.18a shows two sets of frequency dependence of susceptibility measurements (χfd). In one (uncorrected) there is an increase from 10% to 12% with increasing age, which could be explained by a postulated increase in superparamagnetic (SP) particles at depth. Frequency dependence is calculated by:

χfd = 100% × (χl − χh)∕χl,

where χl and χh are the low and high frequency magnetic susceptibilities. This apparent frequency dependence is biased, however, if χl contains a frequency-independent contribution from paramagnetic clay. In Figure 8.18b, corrected values of χl are obtained by subtracting the paramagnetic or high-field susceptibility contribution (χhf) determined from the high field part of hysteresis loops (Chapter 5). As Figure 8.18a shows, the apparent increase in frequency dependence then disappears.
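A small numerical sketch (with hypothetical susceptibility values) of how a frequency-independent paramagnetic contribution biases χfd, and how subtracting the high-field susceptibility removes the bias:

```python
def chi_fd_percent(chi_l, chi_h, chi_hf=0.0):
    """Frequency dependence of susceptibility, in percent.

    chi_l, chi_h -- low- and high-frequency susceptibilities
    chi_hf       -- frequency-independent (paramagnetic/high-field)
                    contribution to subtract before forming the ratio
    """
    cl, ch = chi_l - chi_hf, chi_h - chi_hf
    return 100.0 * (cl - ch) / cl

# Hypothetical values: the ferrimagnetic fraction has a true chi_fd of 12%
# everywhere, but a paramagnetic clay contribution of 2 units in the younger
# sediment dilutes the uncorrected ratio there.
young_uncorrected = chi_fd_percent(12.0, 10.8)              # 10.0 %
old_uncorrected   = chi_fd_percent(10.0, 8.8)               # 12.0 % (apparent trend)
young_corrected   = chi_fd_percent(12.0, 10.8, chi_hf=2.0)  # 12.0 % (trend gone)
```

The apparent down-core change in χfd vanishes once the same ferrimagnetic fraction is compared in both samples.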
A similar error would occur if uncorrected χl values are used to derive the ratio ARM/χl, which is inversely proportional to particle size (see Section 8.6). In Figure 8.18c where this ratio is plotted before and after high-field susceptibility correction, the slow variation between 1 and 2.8 Ma disappears, leaving a true increase in ARM/χl below 2.8 Ma and not at 1 Ma.
The parameters χfd (ultrafine or SP fraction) and ARM/χl (slightly larger single or pseudo-single domain fraction) are extensively used in paleoceanographic studies where contributions from paramagnetic clay can be substantial. For such ocean sediments, and some terrestrial sediments, a routine check for strong paramagnetism through high field susceptibility measurements is highly valuable.
Particle sizes below 20-30 nm for magnetite are superparamagnetic at 300 K. Conventionally, parameters such as frequency dependence of susceptibility (χfd) measure the relative amount of the SP particles and can distinguish them from thermally stable and larger single domain, pseudo-single domain and multidomain particles in a natural mixture, for example, loess/paleosol. However, we have recently seen that sometimes there is valuable information to be gleaned from identifiable mixtures of two modes of SP size distributions. In environmental magnetism studies, such mixtures may represent records of two diagenetic or chemical change events, or of two types distinguished by their origin: biogenic and inorganic.
The superparamagnetic (thermal) relaxation time for the magnetization of a uniaxial particle is given by Néel’s equation (see Chapter 7):

τ = (1∕C) exp(Kuv∕kBT),

where C is a frequency factor of order 10¹⁰ s⁻¹, Ku is the anisotropy energy density, v is the particle volume, kB is Boltzmann’s constant and T is the absolute temperature.
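With Néel’s equation τ = (1∕C) exp(Kuv∕kBT) and order-of-magnitude constants, a short calculation illustrates why magnetite grains a few tens of nanometers across straddle the superparamagnetic boundary at room temperature (the anisotropy energy density used here is illustrative):

```python
import math

KB = 1.380649e-23   # Boltzmann constant (J/K)
C  = 1e10           # frequency factor (1/s), order-of-magnitude value

def neel_tau(Ku, v, T):
    """Neel relaxation time (s) for a uniaxial single domain particle.
    Ku: anisotropy energy density (J/m^3), v: volume (m^3), T: kelvin."""
    return (1.0 / C) * math.exp(Ku * v / (KB * T))

# Cubes of edge d at 300 K with Ku ~ 1e4 J/m^3 (illustrative):
for d_nm in (10, 20, 30):
    v = (d_nm * 1e-9) ** 3
    print(d_nm, "nm:", neel_tau(1e4, v, 300.0), "s")
# 10 nm grains relax in nanoseconds (superparamagnetic at 300 K), while
# 30 nm grains are stable for geologically long times.
```

Because v enters the exponent, a factor of three in grain diameter moves τ from nanoseconds to longer than the age of the Earth.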
The particle volume distribution f(v) can be estimated if the distribution of microscopic coercivity, f(Hk), is independently known or its approximate form can be assumed. Jackson et al. (2006) have formulated a general method to determine the joint distribution f(v,Hk) when both thermally stable single domain (SD, at 300 K) and thermally unstable smaller (SP) particles are present in a mixture. The raw data for the method are the low temperature dependence of back field demagnetization curves of isothermal remanent magnetizations acquired at different back fields at 300 K. To reduce the large amount of data thus acquired, Jackson et al. (2006) apply a “tomographic” reconstruction method that results in f(v,Hk). These are plotted on a Néel diagram, as shown in Figure 8.19a for a laboratory-prepared SP + SSD mixture of titanomagnetites obtained from different heights in the Tiva mountain volcanic tuff deposit (Schlinger et al., 1991). Note that Néel diagrams (after Néel, 1949) are similar to the K − v diagrams of Chapter 7, but show volume against coercivity instead of against the magnetic anisotropy constant. The modes of both size distributions are close to 10 nm, and yet the size/coercivity clusters are easily discernible. Direct size determinations by transmission electron microscopy (TEM) (Schlinger et al., 1991) confirm the thermal fluctuation tomographic distributions in Figure 8.19a. The advantage of the magnetic method over TEM lies in the speed of measurement and in deriving distributions that represent a much larger spread of sizes. Theoretical dM∕dH curves for the tomographic reconstruction underscore the distinction between the two grain size modes and are shown in Figure 8.19b.
Figure 8.20 shows another Néel diagram, obtained this time for a natural paleosol specimen from the Chinese loess plateau, where the derived volume distribution is much wider (10-100 nm) and, together with the coercivities, is consistent with continuous pedogenic particle formation over tens of thousands of years. As Jackson et al. (2006) point out, however, back field remanence demagnetization curves measured at ∼30 discrete temperatures from 300 K down to 10 K can take 4-6 hours and a considerable expenditure of liquid helium.
Here we provide a real life example of the recognition of magnetite produced by magnetotactic bacteria in coastal pond sediments. Magnetotactic bacteria produce chains of magnetic particles (see Chapter 6) whose magnetocrystalline easy axes appear to be aligned. Moskowitz et al. (1993) developed a test to detect the presence of aligned chains of magnetite. As was described in Chapter 4, magnetite undergoes a transition from cubic to monoclinic crystal structure as it cools through a temperature near 100-110 K known as the Verwey transition temperature (Tv). This transition results in a loss of magnetization (see Figure 4.3 in Chapter 4). This loss is quantified by:

δ = (Mrs(80 K) − Mrs(150 K))∕Mrs(80 K),
where Mrs is the saturation IRM remaining at 80 or 150 K while warming from ∼20 K. Specimens with intact chains of magnetite (magnetosomes) that are cooled from room temperature in the presence of a saturating field behave differently on warming through the Verwey transition than those cooled in low fields. In other words, δ for field-cooled specimens (δFC) is larger than that for low (essentially zero) field cooled specimens (δZFC). Extracted, and thus disturbed and disordered, magnetosomes and inorganic SD and MD magnetites do not show a difference between δFC and δZFC. Moskowitz et al. (1993) explain this behavior by calling on intact magnetosomes to have their ⟨111⟩ easy axes aligned along the length of the chain. This makes the entire chain act as a uniaxial particle. Near the Verwey transition is the isotropic point (see Figure 4.2 in Chapter 4), at which the magnetocrystalline anisotropy constant (K1) goes through zero and the easy axis changes orientation from the ⟨111⟩ direction above it to the ⟨100⟩ direction below the isotropic point. When intact magnetosomes are cooled through Tv in zero field, the new easy axes are chosen at random from one of the three ⟨100⟩ directions. When they cool through Tv in a strong magnetic field, the ⟨100⟩ direction most closely aligned with the direction of the applied field will be chosen, instead of a random choice. Therefore, the magnetization of these field-cooled chains is not the sum of randomly selected ⟨100⟩ directions, but the sum of partially aligned ⟨100⟩ directions; hence the saturation remanence is enhanced relative to the random case. Warming back through Tv, the ZFC curve joins the FC curve because both are warmed in the absence of a field. Experimentally, the ratio δFC∕δZFC is about 2 for intact magnetosomes and nearly unity for extracted chains or inorganic magnetite. This is known as the δ − δ test for intact magnetosomes.
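A toy calculation (hypothetical remanence values) of the test, with δ defined as the fractional sIRM lost on warming from 80 K to 150 K through the Verwey transition:

```python
def delta(m_80K, m_150K):
    """Fraction of the 80 K sIRM lost on warming through Tv to 150 K."""
    return (m_80K - m_150K) / m_80K

# Hypothetical warming curves (arbitrary units): field-cooled chains lose
# proportionally more remanence at Tv than zero-field-cooled chains.
d_fc  = delta(m_80K=1.00, m_150K=0.60)   # 0.4
d_zfc = delta(m_80K=0.80, m_150K=0.64)   # 0.2
print(d_fc / d_zfc)   # ratio ~2: consistent with intact magnetosome chains
```

A ratio near unity for the same calculation would instead suggest extracted chains or inorganic magnetite.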
Moskowitz et al. (2008) applied the δ − δ test to sediments from a salt water pond to locate the oxic-anoxic interface. The presence of an oxic-anoxic interface (OAI) in lake waters and its environmental effects is usually discovered and studied using a combination of standard microbiological, geochemical and transmission electron microscopic techniques. Magnetic tests can be added to the tool-kit; they have the advantages that ‘bulk’ (i.e., unseparated) material can be analyzed, and they are highly sensitive, quick to perform, and relatively inexpensive.
Moskowitz et al. (2008) studied the oxic-anoxic interface (OAI) in a salt water pond in Falmouth, Massachusetts. The OAI lay between ∼3.1 m and ∼3.5 m below the sediment/water interface. Because its exact location was not known in advance, water samples were collected from depths of 2.5 m to 4.5 m and their solid contents filtered out for magnetic measurements. The immediate goal was to find the highest concentration of magnetite-producing magnetotactic bacteria, which preferentially populate the OAI. Magnetite (Fe₃O₄) has both ferric (Fe³⁺) and ferrous (Fe²⁺) ions in its structure (see Chapter 3). Thus it is less common both in the fully oxic zone above the OAI and in the fully anoxic zone (because of the presence of the reducing compound H₂S) below the OAI.
Figures 8.21a, b and c show the magnetic data acquired by warming from ∼15 K. sIRM is applied at this temperature to specimens initially cooled from 300 K using two different pre-treatments: the specimen is either cooled in zero field (ZFC), or it is cooled in a large applied field (FC). The thermal demagnetization curves of specimens from above OAI (Figure 8.21a at 2.7 m) and below OAI (Figure 8.21c at 4.0 m) appear to be very similar. Both could be interpreted to contain a weak signature of the presence of very small amounts of magnetite in the form of small drops in sIRM around 95 K. However, the specimen shown in Figure 8.21b from 3.5 m shows incontrovertible evidence for magnetite: the sharp drops in ZFC and FC sIRMs near the Verwey transition temperature.
As defined earlier in this section, the magnitudes of the drops in sIRM can be expressed as δFC and δZFC. Figure 8.22 shows the depth variation of the δFC∕δZFC ratio, which is known to be 2.0 or higher for live bacteria containing chains of magnetite magnetosomes (e.g., Moskowitz et al., 1993). As shown here, the ratio is 2.0 or higher for specimens within the chemically determined OAI (shaded zone). The symbol sr refers to short rod-shaped magnetite as discovered by electron microscopy. Taken together, the chemical and microscopic evidence help locate the extent of the OAI, which is important for studying the environmental conditions of this salt pond. But the δFC∕δZFC ratio provides the same information, accurately and rapidly, wherever it rises above 2.0. Speed of analysis is crucial for environmental studies if one wants to survey the height variation of the OAI at a dense network of points in the lake, yielding information about organic productivity.
There are many other excellent examples of applications of rock magnetic data to solving thorny environmental problems. Papers range from the highly useful to the frankly lunatic. However, the field is alive, and imaginative, extremely clever new applications are published every month.
SUPPLEMENTAL READINGS: Verosub and Roberts (1995).
a) Use the function ipmag.curie to calculate the Curie temperature of the data contained in the two data files curie_example.dat and curie_example2.dat in the Chapter_8 directory of the data folder (see Preface instructions). This function is designed to run in a Jupyter notebook.
b) The way ipmag.curie works is to use a triangular sliding window and average over a range of temperature steps. It then calculates the first and second derivatives of the data and uses the maximum curvature (the maximum in the second derivative) to estimate the Curie temperature. It can be tricky to get the “right” temperature, especially if there are two inflections and/or the data are noisy. Therefore, the program will scan through a range of smoothing intervals. You can truncate the interval over which you want to look (see the help message for ipmag.curie) and set the smoothing interval. The program has a default smoothing window width of 3∘, which is usually too small to get an accurate Curie temperature. The first data file is not very noisy; the second is noisier.
First look at each data file using the defaults. Then, choose the optimal smoothing interval (the smallest interval necessary to isolate the correct peak in the second derivative). Finally, repeat this, but truncate the data set to between 400∘ and 600∘.
What is the Curie Temperature of the two specimens?
Rock magnetic parameters have been used extensively to study the Chinese loess sequences. Data from one such study (Hunt et al., 1995) are saved in the file loess_rockmag.dat in the Chapter_8 directory. The data columns are: stratigraphic position in meters below the reference horizon, total mass normalized magnetic susceptibility (κtotal) in μm³kg⁻¹, and sIRM in mAm²kg⁻¹. The paramagnetic susceptibility (κp) for the section was relatively constant at about 60 nm³kg⁻¹.
Make plots of total susceptibility, ferromagnetic susceptibility (κf = κtotal − κp), sIRM, and the ratio κf∕sIRM versus stratigraphic position. The reference horizon was the top of the modern soil, S0.
Magnetic susceptibility is closely linked to lithology, with peaks associated with soil horizons. The triplet of peaks between about 20 and 27 meters are three units in soil S1, which spans the interval 75 ka to 128 ka. The material in between S0 and S1 is the top-most loess horizon L1. The interval below S1 is L2.
The explanation for the high magnetic susceptibility in the soils has been that there is magnetic enhancement caused by growth of superparamagnetic magnetite in the soil horizons. Susceptibility, sIRM and their ratio have all been used as magnetic proxies of past climate changes (mainly rainfall/year). But, only one of them represents best the concentration of the superparamagnetic particle fraction created from iron silicates by rainfall. Which of the profiles you plotted would be the best proxy for the superparamagnetic fraction and why?
The sand on Scripps beach accumulates in the summer when gentle waves drop their load high up on the beach and erodes in the winter when high energy waves strip the sand away, leaving bare rock. Sand accumulation and preservation therefore depends critically on density. The sand can be crudely divided into a light colored fraction, composed of quartz, plagioclase, and feldspar and a darker fraction, composed of magnetite, pyroxene, amphibole, and biotite. Wave action on the beach separates the sand into light and dark stripes with the darker sand being deposited at points when the water velocity slows down (over ripples or around stones, for example). Average density measurements would help sedimentologists predict which beaches are more resistant to erosion during winter storms, but accurate density measurements are time consuming.
As part of a class project, students investigated whether magnetic susceptibility could be used as a proxy for density because it is much quicker and easier to measure. Students collected five test samples of sand ranging from light (#1) to dark (#5). They dried and weighed out sand into 7 cc plastic boxes. The specimens were measured on a Bartington susceptibility meter with units of 10−5 SI, assuming a 10cc specimen. a) Convert the susceptibility in Table 8.2 (also in beach_sand.dat in the Chapter_8 Datafiles folder) into mass normalized units in m3kg−1. Make plots of susceptibility against color (specimen number) and density. b) Is there a relationship? Pose a plausible hypothesis that explains your observations. How would you test it?
There are several goals in paleomagnetic sampling: one is to average out the errors involved in the sampling process itself and to assess the reliability of the recording medium (recording noise). In addition, we often wish to sample the range of secular variation of the geomagnetic field in order to average it out or characterize its statistical properties. The objectives of averaging recording and sampling “noise” are achieved by taking a number N of individually oriented samples from a single unit (called a site). Samples should be taken such that they represent a single time horizon, that is, they are from a single cooling unit or the same sedimentary horizon. The most careful sample orientation procedure has an uncertainty of several degrees. Precision is gained proportional to √N, so to improve the precision, multiple individually oriented samples are required. The number of samples taken should be tailored to the particular project at hand. If one wishes to know polarity, perhaps three samples would be sufficient (these would be taken primarily to assess recording noise). If, on the other hand, one wished to make inferences about secular variation of the geomagnetic field, more samples would be necessary to suppress sampling noise.
Some applications in paleomagnetism require that the secular variation of the geomagnetic field (the paleomagnetic “noise”) be averaged in order to determine the time-averaged field direction. The geomagnetic field varies with time constants ranging from milliseconds to millions of years. It is a reasonable first order approximation to assume that, when averaged over, say, 10⁴ or 10⁵ years, the geomagnetic field is similar to that of a geocentric axial dipole (equivalent to the field that would be produced by a bar magnet at the center of the Earth, aligned with the spin axis; see Chapter 2). Thus, when a time-averaged field direction is required, enough sites must be sampled to span sufficient time to achieve this goal. A general rule of thumb would be to aim for about ten sites (each with nine to ten samples), spanning 100,000 years. If the distribution of geomagnetic field vectors is desired, then more like 100 sites are necessary.
Samples can be taken using a gasoline or electric powered drill, as “hand samples” (also known as “block samples”), or as “sub-samples” from a piston core.
The diversity of paleomagnetic investigations and applications makes it hard to generalize about sample collection, but there are some time-honored recommendations. One obvious recommendation is to collect fresh, unweathered samples. Surface weathering oxidizes magnetite to hematite or iron-oxyhydroxides, with attendant deterioration of NRM carried by magnetite and possible formation of modern CRM. Artificial outcrops (such as road cuts) thus are preferred locations, and rapidly incising gorges provide the best natural exposures.
Lightning strikes can produce significant secondary IRM, which can mask the primary remanence. Although partial demagnetization in the laboratory can often erase lightning-induced IRM, the best policy is to avoid lightning-prone areas. When possible, avoid topographic highs, especially in tropical regions. If samples must be collected in lightning-prone areas, effects of lightning can be minimized by surveying the outcrop prior to sample collection to find areas that have probably been struck by lightning. This is done by “mapping” the areas where significant (> 5∘) deflections of the magnetic compass occur. If a magnetic compass is passed over an outcrop at a distance of ∼15 cm from the rock face while the compass is held in fixed azimuth, the strong and inhomogeneous IRM produced by a lightning strike will cause detectable deflections of the compass. These regions then can be avoided during sample collection.
In general, some direction (drill direction, strike and dip, direction of a horizontal line or even just the “up” direction) is measured on the sample. This direction is here called the field arrow. When samples are prepared into specimens for measurement, the field arrow is often replaced by a lab arrow which is frequently in some other direction. Procedures for orienting the field arrow are varied, and no standard convention exists. However, all orientation schemes are designed to provide an unambiguous in situ geographic orientation of each sample. A variety of tools are used including orientation devices with magnetic and sun compasses, levels for measuring angles from the horizontal and even differential GPS devices for establishing the azimuth of a local baseline without the need for magnetic or sun compasses.
If a magnetic compass is used to orient samples in the field, the preferred practice is to set the compass declination to zero. Then, in post-processing, the measured azimuth must be adjusted by the local magnetic declination, which can be calculated from a known reference field (IGRF or DGRF; see Chapter 2). The hade (angle from vertical down) or plunge (angle down [positive] or up [negative] from horizontal) of the sample can also be measured with an inclinometer (either with a Pomeroy orientation device as shown in Figure 9.1 or with some other inclinometer, such as that on a Brunton Compass).
Sometimes large local magnetic anomalies, for example from a strongly magnetized rock unit, can lead to a bias in the magnetic direction that is not compensated for by the IGRF magnetic declination. In such cases, some other means of sample orientation is required. One relatively straightforward way is to use a sun compass. Calculation of a direction using a sun compass is more involved than for magnetic compass, however. A dial with a vertical needle (a gnomon) is placed on the horizontal platform shown in Figure 9.5. The angle (α) that the sun’s shadow makes with the drilling direction is noted as well as the exact time of sampling and the location of the sampling site. With this information and the aid of the Astronomical Almanac or a simple algorithm (see Appendix A.3.8), it is possible to calculate the desired direction to reasonable accuracy (the biggest cause of uncertainty is actually reading the shadow angle!).
Another way to avoid the deflection of the compass needle by strong local magnetic anomalies is to check the direction by sighting to known landmarks or by moving a second magnetic compass well away from the outcrop and back-sighting along the drill direction. This is easiest by using the sun-compass gnomon and sighting tip of the original compass as guides (see Figure 9.6). The original magnetic compass direction (near the outcrop) can be compared to the backsighted direction in order to detect and remove any deflection. Of course the compass reading made with the orientation device (near outcrop) is more precise (∼ 3∘), but backsighting can be done with a precision of ∼ 5∘ with care.
A new technique, developed by C. Constable and F. Vernon at Scripps Institution of Oceanography (see Lawrence et al. 2009) uses differential Global Positioning System (GPS) technology (see Figure 9.7) to determine the azimuth of a baseline. Two GPS receivers are attached to either end of a one meter long non-magnetic rigid base. The location and azimuth of the baseline can be computed from the signals detected by the two receivers. The orientation of the baseline is transferred to the paleomagnetic samples using a laser mounted on the base which is focused on a prism attached to the orientation device used to orient the paleomagnetic samples. The orientations derived by the differential GPS are nearly identical to those obtained by a sun compass, although the procedure takes at least an additional half hour and the equipment is rather awkward to transport. Nonetheless, achieving sun-compass accuracy in orientations when the sun is unlikely to be readily available is a major breakthrough for high latitude paleomagnetic field procedures.
Samples are brought to the laboratory and trimmed into standard sizes and shapes (see Figure 9.8). These sub-samples are called paleomagnetic specimens. A rule of thumb about terminology is that a sample is something you take and a specimen is something you measure. The two may be the same object, or there may be multiple specimens per sample. A site is a single horizon or instant in time and may comprise multiple samples or may be only a single sample, depending on the application. Multiple specimens from a single site are expected to have recorded the same geomagnetic field.
We measure the magnetic remanence of paleomagnetic specimens in a rock magnetometer, of which there are various types. The cheapest are spinner magnetometers, so named because they spin the specimen to create a fluctuating electromotive force (emf). The emf is proportional to the magnetization and can be determined relative to the three axes defined by the sample coordinate system. The magnetization along a given axis is measured by detecting the voltages induced by the spinning magnetic moment within a set of pick-up coils.
Another popular way to measure the magnetization of a specimen is to use a cryogenic magnetometer. These magnetometers operate using so-called superconducting quantum interference devices (SQUIDs). In a SQUID, the flux of an inserted specimen is opposed by a current in a loop of superconducting wire. The superconducting loop is constructed with a weak link which stops superconducting at some very low current density, corresponding to some very small quantum of flux. Thus the flux within the loop can change by discrete quanta. Each incremental change is counted and the total flux is proportional to the magnetization along the axis of the SQUID. Cryogenic magnetometers are much faster and more sensitive than spinner magnetometers, but they cost much more to buy and to operate.
Magnetometers are used to measure the three components of the magnetization necessary to define a vector (e.g., x1,x2,x3 or equivalently x,y,z). These data can be converted to the more common form of D, I and M by methods described in Chapter 2.
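The conversion is straightforward; with the usual convention x = north, y = east, z = down, a minimal sketch is:

```python
import math

def cart2dir(x, y, z):
    """Cartesian components (x=north, y=east, z=down) to
    declination, inclination (degrees) and moment magnitude."""
    M = math.sqrt(x * x + y * y + z * z)
    D = math.degrees(math.atan2(y, x)) % 360.0
    I = math.degrees(math.asin(z / M))
    return D, I, M

print(cart2dir(1.0, 1.0, math.sqrt(2.0)))   # approximately (45.0, 45.0, 2.0)
```

The modulo keeps declination in the 0-360∘ range; inclination is positive down, as in Chapter 2.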
Data often must be transformed from the specimen coordinate system into, for example, geographic coordinates. This can be done graphically with a stereonet or by means of matrix manipulation. We outline the general case for transformation of coordinates in Appendix A.3.5. Here we examine the specific cases of the transformation from specimen coordinates to geographic coordinates and the transformation of geographic coordinates to tilt corrected coordinates, the two most commonly used rotations in paleomagnetism.
No matter how the sample was taken, data in the laboratory are measured with respect to the specimen coordinate system, so all the field arrows, no matter how obtained, must be converted into the direction of the lab arrow (x; see example in Figure 9.4 and Figure 9.8a for field drilled samples.) Suppose we measured a magnetic moment m (Figure 9.9a). The components of m in specimen coordinates are x,y,z or equivalently, x1,x2,x3. Ordinarily, this coordinate system is at some arbitrary angle to the geographic coordinate system, but we know the azimuth and plunge (Az,Pl) of the lab arrow with respect to the geographic coordinate system (Figure 9.9b). By substituting Az and Pl for ϕ and λ into Equation A.13, the components of the direction of m in geographic coordinates can be calculated. These then can be converted back into D,I and m using the equations given in Chapter 2. Note that m stays the same during the transformation of coordinates.
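A minimal sketch of this transformation, modeled on the approach used in PmagPy (pmag.dogeo): the geographic direction cosines of the three specimen axes are built from Az and Pl, and the measured components are projected onto them. Function names here are illustrative.

```python
import numpy as np

def dir2cart(dec, inc):
    """Declination, inclination (degrees) to unit Cartesian vector."""
    d, i = np.radians(dec), np.radians(inc)
    return np.array([np.cos(d) * np.cos(i), np.sin(d) * np.cos(i), np.sin(i)])

def cart2dir(v):
    """Cartesian vector back to declination, inclination (degrees)."""
    dec = np.degrees(np.arctan2(v[1], v[0])) % 360.0
    inc = np.degrees(np.arcsin(v[2] / np.linalg.norm(v)))
    return dec, inc

def to_geographic(dec_s, inc_s, az, pl):
    """Rotate a direction from specimen to geographic coordinates, given the
    azimuth and plunge (degrees) of the lab arrow (the specimen x axis)."""
    X = dir2cart(dec_s, inc_s)
    A1 = dir2cart(az, pl)                 # geographic direction of specimen x
    A2 = dir2cart(az + 90.0, 0.0)         # ... of specimen y
    A3 = dir2cart(az - 180.0, 90.0 - pl)  # ... of specimen z
    return cart2dir(X[0] * A1 + X[1] * A2 + X[2] * A3)

# A horizontal, north-pointing lab arrow leaves the direction unchanged:
print(to_geographic(30.0, 40.0, az=0.0, pl=0.0))   # approximately (30.0, 40.0)
```

Note that only the direction rotates; the moment magnitude m is unchanged by the transformation, as stated above.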
Tilt correction is simplest to understand when performed as three rotations; this is how it is done graphically with a stereonet, and it is possible to do it the same way with a computer. [It can also be done as a single rotation, which would be computationally faster, but much harder to visualize.] First, rotate the direction of the magnetic moment in specimen coordinates about a vertical axis by subtracting the dip direction from the declination of the measurement. Then substitute ϕ = 0 and λ = −dip into Equation A.13 to bring the bedding back up to horizontal. Finally, rotate the direction back around the vertical axis by adding the dip direction back on to the resulting rotated declination.
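The three rotations can be sketched as follows (a minimal implementation assuming x = north, y = east, z = down; all angles in degrees):

```python
import math

def tilt_correct(dec, inc, dip_dir, dip):
    """Rotate a direction from geographic to tilt-corrected coordinates by
    (1) rotating about the vertical so the dip direction points north,
    (2) untilting about the now east-west horizontal axis by the dip,
    (3) rotating back about the vertical."""
    d1 = math.radians(dec - dip_dir)                      # step 1
    i, b = math.radians(inc), math.radians(dip)
    x = math.cos(d1) * math.cos(i)
    y = math.sin(d1) * math.cos(i)
    z = math.sin(i)
    x2 = x * math.cos(b) + z * math.sin(b)                # step 2: untilt
    z2 = -x * math.sin(b) + z * math.cos(b)
    dec_t = (math.degrees(math.atan2(y, x2)) + dip_dir) % 360.0   # step 3
    inc_t = math.degrees(math.asin(z2))
    return dec_t, inc_t

# A direction lying down-dip in a bed dipping 20 degrees to the east
# is restored to horizontal by the correction:
print(tilt_correct(90.0, 20.0, dip_dir=90.0, dip=20.0))   # approximately (90.0, 0.0)
```

With zero dip the function returns the input unchanged, a useful sanity check on the sign conventions.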
Anyone who has dealt with magnetic materials (including magnetic tape, credit cards, and refrigerator magnets) knows that they are delicate and likely to demagnetize or change their magnetic properties if abused by heat, large magnetic fields or stress. Cassette tapes left on the dashboard of a car in the hot sun never sound the same. Credit cards that have been through the dryer may lead to acute embarrassment at the check-out counter. Magnets that have been dropped do not work as well afterwards. It is not difficult to imagine that rocks that have been left in the hot sun or buried deep in the crust (not to mention altered by diagenesis or bashed with hammers, drills, pick axes, etc.) may not have their original magnetic vectors completely intact. Because rocks often contain millions of tiny magnets, it is possible that some (or all) of these have become realigned, or that they grew since the rock formed. In many cases, there are still grains that carry the original remanent vector, but there are often populations of grains that have acquired new components of magnetization. The good news is that viscous magnetizations are carried by grains with lower magnetic anisotropy energies (they are “softer”, magnetically speaking), so we expect their contribution to be more easily randomized than that of the more stable (“harder”) grains carrying the ancient remanent magnetization.
There are several laboratory techniques that are available for separating various components of magnetization. Paleomagnetists rely on the relationship of relaxation time, coercivity, and temperature in order to remove (demagnetize) low stability remanence components. The fundamental principle that underlies demagnetization techniques is that the lower the relaxation time τ, the more likely the grain will carry a secondary magnetization. The basis for alternating field (AF) demagnetization is that components with short relaxation times also have low coercivities. The basis for thermal demagnetization is that these grains also have low blocking temperatures.
In AF demagnetization, an oscillating field is applied to a paleomagnetic specimen in a null magnetic field environment (Figure 7.22 in Chapter 7). All the grain moments with coercivities below the peak AF will track the field. These entrained moments will become stuck as the peak field gradually decays below the coercivities of individual grains. Assuming that there is a range of coercivities in the specimen, the low stability grains will be stuck half along one direction of the AF and half along the other direction; the net contribution to the remanence will be zero. In practice, we demagnetize specimens sequentially along three orthogonal axes, or while “tumbling” the specimen around three axes during demagnetization.
Thermal demagnetization exploits the relationship of relaxation time and temperature. There will be a temperature below the Curie temperature at which the relaxation time is a few hundred seconds. When heated to this temperature, grains with relaxation times this short will be in equilibrium with the field. This is the unblocking temperature. If the external field is zero, then there will be no net magnetization. Lowering the temperature back to room temperature will result in the relaxation times growing exponentially until these moments are once again fixed. In this way, the contribution of lower stability grains to the NRM can be randomized. Alternatively, if there is a DC field applied during cooling, the grains whose unblocking temperatures have been exceeded will be realigned in the new field direction; they will have acquired a partial thermal remanent magnetization (pTRM).
We sketch the principles of progressive (step-wise) demagnetization in Figure 9.10. Initially, the NRM is the sum of two components carried by populations with different coercivities. The distributions of coercivities are shown in the histograms to the left in Figure 9.10. Two components of magnetization are shown as heavy lines in the plots to the right. In these examples, the two components are orthogonal. The sum of the two components at the start (the NRM or demagnetization step ‘0’) is shown as a + on the vector plots to the right. After the first AF demagnetization step, the contribution of the lowest coercivity grains has been erased and the remanence vector moves to the position of the first dot away from the +. Increasing the AF in successive treatment steps (some are numbered in the diagram) gradually eats away at the remanence vectors (shown as dashed arrows and dots in the plots to the right) which eventually approach the origin.
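The component removed between successive steps is simply the vector difference of the measured remanences; a toy sketch with a hypothetical two-component NRM:

```python
import numpy as np

def removed_components(steps):
    """Vector removed between successive demagnetization steps.
    steps: sequence of (x, y, z) remanence vectors, NRM first."""
    steps = np.asarray(steps, dtype=float)
    return steps[:-1] - steps[1:]

# Hypothetical NRM = hard component along +x plus soft component along +y;
# the soft component is erased first, then the hard one decays to the origin.
demag = [(2.0, 1.0, 0.0),
         (2.0, 0.2, 0.0),
         (1.0, 0.0, 0.0),
         (0.0, 0.0, 0.0)]
print(removed_components(demag))
# rows: [0, 0.8, 0] (soft), [1, 0.2, 0] (mixed), [1, 0, 0] (hard)
```

When the coercivity spectra are distinct, the difference vectors cluster around the two component directions; overlap smears them between the two, as in the curved diagrams discussed next.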
There are four different sets of coercivity spectra shown in Figure 9.10, each with a distinctive behavior during demagnetization. If the two coercivity fractions are completely distinct, the two components are clearly defined (Figure 9.10a) by the progressive demagnetization. If there is some overlap in the coercivity distribution of the components the resulting demagnetization diagram is curved (Figure 9.10b). If the two components completely overlap, both components are removed simultaneously and an apparently single component demagnetization diagram may result (Figure 9.10c). It is also possible for one coercivity spectrum to include another as shown in Figure 9.10d. Such cases result in “S” shaped demagnetization curves. Because complete overlap actually happens in “real” rocks, it is desirable to perform both AF and thermal demagnetization. If the two components overlap completely in coercivity, they might not have overlapping blocking temperature distributions and vice versa. It is unlikely that specimens from the same lithology will all have identical overlapping distributions, so multiple specimens can provide clues to the possibility of completely overlapped directions in a given specimen.
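The distinct-spectra case of Figure 9.10a can be mimicked with a toy numerical sketch. All coercivity parameters and component directions below are hypothetical, chosen only for illustration:

```python
import numpy as np
from math import erf, sqrt

def remaining_fraction(peak_field, center, width):
    """Fraction of a component's grains with coercivities above peak_field,
    assuming (for illustration only) a Gaussian coercivity spectrum."""
    return 0.5 * (1.0 - erf((peak_field - center) / (width * sqrt(2.0))))

# Component A: North-directed, soft (coercivities centered on 10 mT).
# Component B: East-directed, hard (coercivities centered on 40 mT).
comp_A = np.array([1.0, 0.0])
comp_B = np.array([0.0, 0.5])

# step-wise AF demagnetization: each peak field randomizes the grains below it
for peak in [0, 5, 10, 15, 20, 30, 40, 60, 100]:  # peak AF in mT
    vec = (comp_A * remaining_fraction(peak, 10, 5)
           + comp_B * remaining_fraction(peak, 40, 10))
    print(f"{peak:3d} mT  N={vec[0]:.3f}  E={vec[1]:.3f}")
```

Because the two spectra barely overlap, the vector endpoints trace two nearly straight segments, removing component A first and then component B, the behavior sketched in Figure 9.10a.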
Now we will consider briefly the issue of what to do with the demagnetization data in terms of display and estimating a best-fit direction for various components.
The standard practice in demagnetization is to measure the NRM and then to subject the specimen to a series of demagnetization steps of increasing severity. The magnetization of the specimen is measured after each step. During demagnetization, the remanent magnetization vector will change until the most stable component has been isolated, at which point the vector decays in a straight line to the origin. This final component is called the characteristic remanent magnetization or ChRM.
Visualizing demagnetization data is a three-dimensional problem and therefore difficult to plot on paper. Paleomagnetists often rely on a set of two projections of the vectors, one on the horizontal plane and one on the vertical plane. These are variously called Zijderveld diagrams (Zijderveld, 1967), orthogonal projections, or vector end-point diagrams.
In orthogonal projections, the x1 component is plotted versus x2 (solid symbols) in one projection, and x1 is replotted versus Down (x3) (open symbols) in another projection. The paleomagnetic convention differs from the usual x-y plotting convention because x3 is on a vertical axis which is positive in the downward direction (instead of the usual positive up convention). The choice of axis for the horizontal projection is a little more tricky. x2 is always positive to the right of x1. x1 is frequently plotted along the horizontal axis and x2 would then be on the vertical axis, again positive in the downward direction. The paleomagnetic conventions make sense if one visualizes the diagram as a map view for the solid symbols and a vertical projection for the open symbols.
Because x3 gets plotted against whatever is chosen for the horizontal axis, the angle that the vertical projection makes will only be true inclination if the horizontal axis happens to be parallel to the remanence vector, i.e. directly along x1. For this reason, x2 is sometimes plotted along the horizontal axis if the remanence vector is more parallel to x2. Some people choose to plot the pairs of points (x1,x2) versus (H,x3) where H is the horizontal projection of the vector given by H = √(x1² + x2²). In this projection, sometimes called a component plot, the coordinate system changes with every demagnetization step because H almost always changes direction, even if only slightly. Plotting H versus x3 is therefore a confusing and misleading practice. The primary rationale for doing so is because, in the traditional orthogonal projection where x3 is plotted against x1 or x2, the vertical component reveals only an apparent inclination. In fact, the choice of horizontal component is arbitrary and could be deliberately chosen to be parallel to the remanence directions. If something close to true inclination is desired, then, instead of plotting H and x3, one can simply rotate the horizontal axes of the orthogonal plot such that it closely parallels the desired declination (Figure 9.11a,b).
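The plotting conventions above can be made concrete in a few lines of code. The dir2cart helper and the demagnetization steps below are hypothetical stand-ins for illustration, not PmagPy calls:

```python
import numpy as np

def dir2cart(dec, inc, intensity):
    """Declination, inclination (degrees) and intensity to x1 (North),
    x2 (East), x3 (Down)."""
    d, i = np.radians(dec), np.radians(inc)
    return (intensity * np.cos(d) * np.cos(i),
            intensity * np.sin(d) * np.cos(i),
            intensity * np.sin(i))

# hypothetical demagnetization data: (declination, inclination, intensity)
steps = [(340, 40, 1.00), (338, 38, 0.70), (339, 39, 0.45), (340, 40, 0.20)]
for dec, inc, m in steps:
    x1, x2, x3 = dir2cart(dec, inc, m)
    # solid symbols: map view (x1, x2); open symbols: (x1, x3), with x3
    # positive plotted downward per the paleomagnetic convention
    print(f"solid: ({x1:6.3f}, {x2:6.3f})   open: ({x1:6.3f}, {x3:6.3f})")
```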
In the plots shown in Figure 9.11a,c we have rotated the remanence vector such that the x1 component is parallel to the original NRM direction. In Figure 9.11, we show several general types of demagnetization behavior. In Figure 9.11a, the specimen has a North-Northwest and downward directed NRM (see inset of equal area projection in geographic coordinates.) The direction does not change during demagnetization and the NRM is a single vector. The median destructive field (from Chapter 8) is illustrated in Figure 9.11b. The specimen in Figure 9.11c shows a progressive change in direction from a Westward and up directed component to a North and down direction. The vector continuously changes direction to the end and no final “clean” direction has been confidently isolated. These data are plotted on an equal area projection in the inset along with the trace of the best-fitting plane (a great circle). The most stable component probably lies somewhere near the best-fitting plane. This specimen came from the outcrop depicted in Figure 7.19 in Chapter 7 which had been hit by lightning. The presumptive IRM is much “softer” on demagnetization; the NRM is virtually erased by 40 mT, whereas the mdf of the specimen that had not been hit by lightning is much higher (Figure 9.11a,b). The NRM of the lightning hit specimen is also more than an order of magnitude stronger.
The behavior of the specimen shown in Figure 9.11d is again markedly different in that the intensity, after an initial smooth decrease, begins to climb again at high demagnetizing fields. The direction deflects away from the origin towards a direction that is orthogonal to the last axis to be demagnetized. This behavior is typical of GRM acquisition during demagnetization (see Chapter 7).
When specimens acquire a remanence either along the axis of the oscillating field (an ARM) or orthogonal to it (a GRM as in Figure 9.11d) they require a more complicated demagnetization regime than just along the three axes. In the case of the parallel acquisition, a double demagnetization protocol works well. In double demagnetization (e.g., Tauxe et al., 2004), a specimen is subjected to demagnetization along the three orthogonal axes, say along +X1,+X2,+X3, and is measured, then demagnetized along −X1,−X2,−X3 and remeasured. The two measurements are averaged to give an ARM free vector. In the case of GRM, Stephenson (1993) developed a triple demagnetization protocol whereby specimens are demagnetized along +X1,+X2,+X3 measured, then demagnetized along +X2, measured and finally along +X1 and measured. These three steps are averaged to give a GRM-free vector. This method is a simplified but at times sufficient variation of the six step procedure described by Dankers and Zijderveld (1981). GRMs have been associated with specimens that have a high anisotropy (e.g., Stephenson, 1993; Tauxe et al., 2004; Potter and Stephenson, 2005), or have a greigite magnetic remanence (e.g., Snowball, 1997).
An equal area projection may be the most useful way to present demagnetization data from a specimen with several strongly overlapping remanence components (such as in Figures 9.11c-d). In order to represent the vector nature of paleomagnetic data, it is necessary to plot intensity information. Intensity can be plotted versus demagnetization step in an intensity decay curve (Figure 9.11b). However, if there are several components with different directions, the intensity decay curve cannot be used to determine, say, the blocking temperature spectrum or mdf, because it is the vector sum of the two components. It is therefore advantageous to consider the decay curve of the vector difference sum (VDS) of Gee et al. (1993). The VDS “straightens out” the various components by summing up the vector differences at each demagnetization step, so the total magnetization is plotted, as opposed to the resultant.
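A minimal sketch of the VDS calculation, assuming the demagnetization data are already in cartesian form (the data below are hypothetical):

```python
import numpy as np

def vds(vectors):
    """Vector difference sum: the magnitudes of the vector differences
    between successive demag steps, plus the magnitude of the final vector."""
    v = np.asarray(vectors, dtype=float)
    diffs = np.linalg.norm(np.diff(v, axis=0), axis=1)
    return diffs.sum() + np.linalg.norm(v[-1])

# two components removed sequentially (hypothetical demag data)
data = [[1.0, 0.5, 0.0], [1.0, 0.1, 0.0], [0.4, 0.0, 0.0], [0.0, 0.0, 0.0]]
print("VDS:", vds(data))                 # total magnetization removed
print("NRM:", np.linalg.norm(data[0]))  # resultant of the two components
```

Because components with different directions partially cancel in the resultant, the VDS exceeds the NRM intensity; that is why the VDS decay curve, rather than the intensity decay curve, is the appropriate one for reading off quantities like the mdf.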
Orthogonal vector projections aid in identification of the various remanence components in a specimen. Demagnetization data are usually treated using what is known as principal component analysis (Kirschvink, 1980). This is done by calculating the orientation tensor for the set of data and finding its eigenvectors (Vi) and eigenvalues (τi); see Appendix A.3.5 for computational details. What comes out of the analysis is a best-fit line through a single component of data as in Figure 9.11a,b or a best-fit plane (or great circle, if each point is given unit weight) through multi-component data as in Figure 9.11c,d. Kirschvink (1980) also defined the maximum angle of deviation (MAD) for each of these.
The best-fit line is given by the principal eigenvector V1 and its MAD is given by: MAD = tan⁻¹(√((τ2 + τ3)/τ1)).
If no unique principal direction can be isolated (as for the specimen in Figure 9.11c-d), the eigenvector V3 associated with the least eigenvalue τ3 can be taken as the pole to the best-fit plane wherein the component of interest must lie. The MAD angle for the best-fit plane is given by: MADplane = tan⁻¹(√(τ3/τ2 + τ3/τ1)).
The angle between the best-fitting line through the data and the origin is termed the Deviation ANGle or DANG. The line connecting the data to the origin is taken as the vector from the origin to the center of mass of the data (Equation A.15).
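The eigen-analysis can be sketched with NumPy. This is a bare-bones stand-in for the PmagPy routines, and the demagnetization data are hypothetical:

```python
import numpy as np

def pca_line(vectors):
    """Best-fit line through demag data (after Kirschvink, 1980): principal
    eigenvector of the orientation tensor of the centered data, with the
    MAD of the line and the deviation angle (DANG) from the origin."""
    X = np.asarray(vectors, dtype=float)
    com = X.mean(axis=0)                      # center of mass of the data
    T = (X - com).T @ (X - com)               # orientation tensor
    tau, V = np.linalg.eigh(T)                # eigenvalues in ascending order
    tau = np.clip(tau[::-1], 0.0, None)       # sort descending, guard roundoff
    V = V[:, ::-1]
    v1 = V[:, 0]                              # principal eigenvector
    mad = np.degrees(np.arctan(np.sqrt((tau[1] + tau[2]) / tau[0])))
    # DANG: angle between the best-fit line and the line connecting the
    # center of mass of the data to the origin
    cosang = abs(v1 @ com) / np.linalg.norm(com)
    dang = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return v1, mad, dang

# hypothetical single-component decay toward the origin with small noise
data = [[1.0, 0.50, 0.30], [0.8, 0.41, 0.24], [0.6, 0.29, 0.18],
        [0.4, 0.21, 0.12], [0.2, 0.10, 0.06]]
v1, mad, dang = pca_line(data)
print(v1, round(mad, 1), round(dang, 1))
```

For well-behaved single-component data like these, both MAD and DANG come out small; large values of either flag a poorly determined or origin-missing component.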
In addition to establishing that a given rock unit retains a consistent magnetization, it is also important to establish when this magnetization was acquired. Arguments concerning the age of magnetic remanence can be built on indirect petrographic evidence as to the relative ages of various magnetic minerals, or by evidence based on geometric relationships in the field. There are two popular field tests that require special sampling strategies: the fold test and the conglomerate test.
The fold test (also known as a tilt test) relies on the tilting or folding of the target geological material. If, for example, one wanted to establish the antiquity of a particular set of directions, one could deliberately sample units of like lithology, with different present attitudes (Figure 9.12). If the recovered directions are more tightly grouped before adjusting for tilt (as in the lower left panel), then the magnetization is likely to have been acquired after tilting. On the other hand, if directions become better grouped in the tilt adjusted coordinates (see upper right panel), one has an argument in favor of a pre-tilt age of the magnetization. Methods for quantifying the tightness of grouping in various coordinate systems will be discussed in later chapters.
In the conglomerate test, lithologies that are desirable for paleomagnetic purposes must be found in a conglomerate bed (Figure 9.13a). In this rare and happy circumstance, we can sample them and show that: 1) the rock magnetic behavior is the same for the conglomerate samples as for those being used in the paleomagnetic study, 2) the directions of the studied lithology are well grouped, (Figure 9.13b) and 3) the directions from the conglomerate clasts are randomly oriented (see Figure 9.13d). If the directions of the clasts are not randomly distributed (Figure 9.13c), then presumably the conglomerate clasts (and, by inference, the paleomagnetic samples from the studied lithology as well) were magnetized after deposition of the conglomerate. We will discuss statistical methods for deciding if a set of directions is random in later chapters.
The baked contact test is illustrated in Figure 9.14. It is similar to the conglomerate test in that we seek to determine whether the lithology in question has undergone pervasive secondary overprinting. When an igneous body intrudes into an existing host rock, it heats (or bakes) the contact zone to above the Curie temperature of the host rock. The baked contact immediately adjacent to the intrusion should therefore have the same remanence direction as the intrusive unit. This magnetization may be in an entirely different direction from the pre-existing host rock. The maximum temperature reached in the baked zone decreases away from the intrusion and remagnetization is not complete. Thus the NRM directions of the baked zone gradually change from that of the intrusion to that of the host rock. Such a condition would argue against pervasive overprinting in the host rock that post-dated the intrusion, and the age of the intrusion would provide an upper bound on the age of remanence in the host rock.
SUPPLEMENTAL READINGS: Collinson (1983), Chapters 8 and 9.
Before you start, make sure you have the most recent distribution of the PmagPy software (see PmagPy website) and see instructions in the Preface for help in accessing the data files. Find the data files for these problems in the Chapter_9 directory.
The remanence vectors in the Chapter_9 directory saved in zijd_example.csv were measured during the thermal demagnetization of a specimen. The first column is the specimen name. The second is the temperature to which the specimen was heated, before cooling in zero field. The next columns are intensity, declination and inclination respectively for each treatment step.
a) Write a python program in a Jupyter notebook to make a Zijderveld diagram.
Follow these steps: 1) Read in the data. 2) Convert the vectors to x,y,z. 3) Plot x versus −y using some solid symbol and then connect those dots with a line. This is the horizontal projection of the vector so x should be on the horizontal axis and −y should be up. (Think about this! You are plotting a map view and Y is the East direction. So +y should be to the right of x.) 4) Now plot x versus −z. Here again the projection is unusual because +z is the down direction. Therefore it should be down. [It is −z that is up!] Use a different (open) symbol for these points and plot them on the same plot as your x,y data.
b) The same data were saved without headers in a file named zijd_example.dat. Plot them using the program ipmag.zeq. [Hint: check the help message by typing help(ipmag.zeq) to figure out how...]. Compare your answer from Problem 1a with that produced by the PmagPy program ipmag.zeq. Re-write your program until it is right; you can cheat by looking in ipmag.zeq and in the two function modules pmag.py and pmagplotlib.py if you have to, but make your program “your own”.
c) Assuming these data have already been converted to geographic coordinates (x = N,y = E,z = V ), what is the approximate direction (e.g. NE and up) of the low stability component of magnetization? The high stability component of magnetization? What is the most likely remanence carrying mineral in this specimen? Thinking about what you learned about VRM in Chapter 7, for the low stability component to be a VRM acquired over the last million years, at what temperature would the rock have to have been held to acquire this component viscously over a million years?
d) Run ipmag.zeq again, this time setting the -begin_pca and -end_pca flags to calculate best-fit lines through the two components and a great circle through all the data except the NRM and last steps. Look at these new images in your notebook. In a markdown cell, explain which interpretation makes the most sense.
Use the program pmag.dosundec from within a notebook to estimate the drilling azimuth from the following sun compass information: you are located at 35∘ N and 33∘ E. The local time is three hours ahead of Universal Time, so we subtract three hours from local time to get Universal Time. The shadow angle for the drilling direction was 68∘, measured at 16:09 local time on May 23, 1994.
a) The direction of NRM for this problem is given in geographic coordinates, along with the attitude of the dipping strata from which the site was collected:
D = 336∘, I = -2∘, bedding dip = 41∘, dip direction = 351 ∘.
Plot the NRM direction on an equal-area projection (see Appendix B.1). Then use the procedures outlined in Appendix B.1.3 (or slight modifications thereof) to determine the “structurally corrected” direction of NRM that results from restoring the strata to horizontal.
b) Check your answer with the function pmag.dotilt.
Now consider a more complex situation in which a paleomagnetic site has been collected from the limb of a plunging fold. On the east limb of a plunging anticline, a direction of NRM is found to be I = 33∘, D = 309∘. The bedding attitude of the collection site is dip = 29∘, strike = 210∘ (dip direction = 120∘), and the pole to bedding is azimuth = 300∘, inclination = 61∘. The trend and plunge of the anticlinal axis are trend = 170∘, plunge = 20∘. Determine the direction of NRM from this site following structural correction. To do this, first correct the NRM direction (and the pole to bedding) for the plunge of the anticline: rotate the fold axis to horizontal first. Then complete the structural correction of the NRM direction by restoring the bedding (corrected for plunge) to horizontal. Use the function pmag.dotilt() to do your rotations in a Jupyter notebook.
Write a python program to convert D = 8.1,I = 45.2 into geographic and tilt adjusted coordinates. Use the geographic coordinates as input to the tilt correction program. The orientation of the laboratory arrow on the specimen was: azimuth = 347∘; plunge = 27∘. The strike was 135∘ and the dip was 21∘. (NB: the convention is that the dip direction is to the “right” of the strike). For this it would be handy to use the NumPy module which allows arrays, instead of simple lists. To make an array A of elements aij:
the command would be: A = numpy.array([[a11, a12], [a21, a22]])
The import command can be put at the beginning of the program as always. Use your programs to convert direction to cartesian coordinates and back again.
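The two conversions can be sketched as below; this is a do-it-yourself stand-in for pmag.dir2cart and pmag.cart2dir, assuming unit intensity:

```python
import numpy as np

def dir2cart(dec, inc):
    """Declination, inclination (degrees) to unit cartesian (N, E, Down)."""
    d, i = np.radians(dec), np.radians(inc)
    return np.array([np.cos(d) * np.cos(i), np.sin(d) * np.cos(i), np.sin(i)])

def cart2dir(x):
    """Cartesian (N, E, Down) back to declination, inclination (degrees)."""
    x = np.asarray(x, dtype=float)
    r = np.linalg.norm(x)
    dec = np.degrees(np.arctan2(x[1], x[0])) % 360.0
    inc = np.degrees(np.arcsin(x[2] / r))
    return dec, inc

dec, inc = 8.1, 45.2            # the specimen direction from the problem
xyz = dir2cart(dec, inc)
print(cart2dir(xyz))            # round trip should recover (8.1, 45.2)
```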
Compare your answer to the one given by pmag.dogeo and pmag.dotilt, which are callable from within your notebook. Rewrite your code until you have it right. NB: pmag.dotilt uses dip and dip direction instead of strike and dip. These are completely interchangeable, but dip and dip direction is unique, while strike and dip requires some convention like “dip to right of strike” and can make for confusion if you are used to a different convention.
An intrepid group called “the red team” sampled a lava flow on Bastille day in 2006. The team, the sampling sites and the notebook page are shown in Figure 9.15a,b and c respectively. In this problem we will look at some real data collected from this lava flow.
a) Make a new directory in your homework directory for this problem. Do not include spaces in the directory name! Run the Pmag GUI graphical user interface by typing pmag_gui.py on the command line. [Note that PC users may have to omit the .py termination.] This problem does not use the Jupyter notebook!
Change directories into your new homework directory and fire up pmag_gui.py. Choose data_model 3
b) Convert your data files to the MagIC format. The measurements were made in the SIO paleomagnetic laboratory in the SIO lab format. Specimens were demagnetized using the AF and thermal methods and the data are in the Chapter_9 directory, named ns_a.mag and ns_t.mag respectively.
c) Click on the button labelled ‘2. Calculate the geographic/tilt-corrected directions’. Here you could fill out the form using the notebook information in Figure 9.15. But someone has typed in all the data you need for you. They are in the ‘Orientation file’ named ‘orient.txt’ in the Chapter_9 directory.
d) Look at the demagnetization data.
e) Explore the MagIC database tables that you have created.
You can explore your handiwork by looking at the files created in your homework directory with Excel or some other spreadsheet program.
In principle, it is possible to determine the intensity of ancient magnetic fields Banc because common mechanisms by which rocks become magnetized (e.g., thermal, chemical and detrital remanent magnetizations) are frequently approximately linearly related to the ambient field for low fields such as the Earth’s (Chapter 7 and Figure 10.1), i.e., MNRM = νanc Banc and Mlab = νlab Blab,
where νlab and νanc are constants of proportionality. If the two constants are the same, we can divide the two equations and rearrange them to get: Banc = (MNRM/Mlab) Blab.
If the laboratory remanence has the same proportionality constant with respect to the applied field as the ancient one, the remanences were linearly related to the applied field, and the NRM comprises a single component, all one need do to get the ancient field is measure the NRM, and determine ν by giving the rock a laboratory remanence in a known field (Blab). Multiplying the ratio of the two remanences by the lab field would give the ancient magnetic field.
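The arithmetic of this ideal case is worth writing out; all measurement values below are hypothetical:

```python
# If M_NRM = nu * B_anc and M_lab = nu * B_lab with the same nu, then
# B_anc = (M_NRM / M_lab) * B_lab.  Hypothetical numbers:
M_nrm = 2.4e-5   # measured NRM moment (A m^2)
M_lab = 1.8e-5   # laboratory TRM moment acquired in B_lab (A m^2)
B_lab = 40e-6    # known laboratory field (tesla, i.e. 40 microtesla)

B_anc = (M_nrm / M_lab) * B_lab
print(f"estimated ancient field: {B_anc * 1e6:.1f} microtesla")
```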
The theory just outlined is quite simple, yet, in practice, recovering paleointensity is not simple; there are many causes for concern, which we take up below.
In this chapter we will discuss the assumptions behind paleointensity estimates and outline various experimental and statistical methods involved in getting paleointensity data. We will start by considering thermal remanences and then address depositional ones. To our knowledge, no one has deliberately attempted paleointensity estimation using other remanence types such as chemical or viscous remanences although both are theoretically possible.
The theoretical basis for how ancient magnetic fields might be preserved was laid out by L. Néel (see Chapter 7). We expect thermal remanences of quasi-equant single domain particles to be linearly related to the applied field for low fields like the Earth’s (although elongate particles may not behave linearly even in low fields). Larger particles of magnetite have more complicated remanent states (flower, vortex, multi-domain) and TRM acquisition curves are more difficult to predict from theory. However, empirical studies have shown that TRM acquisition is significantly non-linear even at rather low field strengths and that the departure from linearity is grain size dependent; the larger the particle, the lower the field at which non-linearity becomes an issue (e.g., Dunlop and Argyle, 1997). Nonetheless, the largest intensities on the Earth today (∼65 μT) are within the linear region for small equant particles and one could reach several hundred microtesla before having to worry about non-linearity. Therefore the linearity assumption appears to be reasonably well founded for ideal assemblages. Indeed, the linearity assumption is so deeply embedded in paleomagnetic practice that it is almost never tested! However, it has recently become evident that naturally occurring assemblages of single domain magnetite can have significantly non-linear TRM acquisition behavior (Selkin et al., 2007), even for fields as low as the Earth’s (see Figure 7.8). Because the exact form of the TRM acquisition depends critically on the magnetic assemblage, it would be wisest to include a TRM acquisition experiment in any paleointensity experiment.
There are several ways of checking the ability of the specimen to acquire TRM in paleointensity experiments. In Section 10.1.1 we will discuss the step-wise heating and Shaw methods. Other approaches attempt to prevent the alteration from occurring, for example by using microwaves to heat just the magnetic phases, leaving the rest of the specimen cool, or by minimizing the number of heating steps. Some methods attempt to normalize the remanence with IRM and avoid heating altogether. We will briefly describe each of these in turn, beginning with the step-wise heating family of experiments. Regardless of the method chosen, it is essential that as many of the assumptions in the experiment as possible be tested. Experiments that skirt the issues involved simply give us data whose reliability cannot be verified and, given all the things that can go wrong, such data are essentially useless.
A goal in paleointensity experiments since the earliest days has been the detection of changes in the proportionality constant caused by alteration of the magnetic phases in the rock during heating (e.g., Thellier and Thellier, 1959). The basic idea is to heat specimens up in stages, progressively replacing the natural remanence with partial thermal remanences. The step-wise heating approach is particularly powerful when lower temperature steps are repeated, to verify directly that the ability to acquire a thermal remanence has not changed.
The step-wise heating approach relies on the assumption that partial thermal remanences (pTRMs) acquired by cooling between any two temperature steps (e.g., 500∘ and 400∘C in Figure 7.9 of Chapter 7) are independent of those acquired between any other two temperature steps. This assumption is called the Law of Independence of pTRMs. The approach also assumes that the total TRM is the sum of all the independent pTRMs (see Figure 7.9), an assumption called the Law of Additivity.
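The two laws can be illustrated with a toy blocking-temperature spectrum; the temperatures and moments below are invented for the example:

```python
import numpy as np

# Hypothetical blocking temperatures (deg C) and the moment carried by
# the grains blocking at each temperature (arbitrary units).
blocking_temps = np.array([120, 250, 310, 420, 480, 555])
moments        = np.array([0.1, 0.2, 0.15, 0.25, 0.2, 0.1])

def ptrm(t_low, t_high):
    """pTRM from grains whose blocking temperatures fall in (t_low, t_high]."""
    mask = (blocking_temps > t_low) & (blocking_temps <= t_high)
    return moments[mask].sum()

total_trm = ptrm(20, 580)
# partition the spectrum into independent, non-overlapping intervals
parts = ptrm(20, 300) + ptrm(300, 450) + ptrm(450, 580)
print(total_trm, parts)   # Law of Additivity: the two should be equal
```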
There are many possible ways to progressively replace the NRM with a pTRM in the laboratory. In the original step-wise heating method (e.g., Königsberger, 1938) the specimen is heated twice and cooled in the laboratory field; we will call this the “infield-infield” or “II” method. The first step is to heat the specimen to some temperature (T1) and cool it in the laboratory field Blab. Measurement of the combined remanence (what is left of the natural remanence plus the new laboratory pTRM) yields: M1 = MNRM + MpTRM. In a second step, the specimen is heated to T1 again and cooled in the laboratory field applied in the opposite sense (in practice, by inverting the specimen), giving M2 = MNRM − MpTRM; the sum and difference of the two measurements then recover MNRM and MpTRM.
As magnetic shielding improved, modified protocols were developed. In the most popular paleointensity technique (usually attributed to Coe, 1967), we substitute cooling in zero field for the first heating step. This allows the direct measurement of the NRM remaining at each step. The two equations now are: M1 = MNRM and M2 = MNRM + MpTRM.
The laboratory MpTRM in this “zero-field/in-field” (or ZI) method is calculated by vector subtraction. Alternatively, the first heating and cooling can be done in the laboratory field and the second in zero field (Aitken et al., 1988), here called the “in-field/zero-field” or (IZ) method. As the NRM decays, the pTRM grows (Figure 10.3a). Such data are nowadays plotted against each other in what is usually called an Arai diagram (Nagata et al., 1963) as in Figure 10.3b.
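The vector subtraction in the ZI protocol is simple to write out; the cartesian measurements below are hypothetical:

```python
import numpy as np

# In the ZI method, each temperature step gives two measurements: the
# zero-field step (NRM remaining) and the in-field step (NRM remaining
# plus the newly acquired pTRM). The pTRM is recovered by subtraction.

def ptrm_gained(zero_field, in_field):
    """pTRM acquired at this step: in-field minus zero-field measurement."""
    return np.asarray(in_field, dtype=float) - np.asarray(zero_field, dtype=float)

M_zero = [0.60, 0.30, 0.20]   # NRM remaining after zero-field cooling
M_in   = [0.65, 0.45, 0.55]   # same temperature, cooled with the lab field on

print(ptrm_gained(M_zero, M_in))   # the laboratory pTRM vector
```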
In all three of these experimental designs (II, ZI and IZ), lower temperature in field cooling steps can be repeated to determine whether the remanence carrying capacity of the specimen has changed (e.g., Thellier and Thellier, 1959). These steps are called pTRM checks (triangles in Figure 10.3b). Differences between the first and second MpTRMs at a given temperature indicate a change in capacity for acquiring thermal remanences (e.g., δ300 in Figure 10.3b) and are grounds for suspicion or rejection of the data after the onset of such a change. [Some experiments repeat lower temperature zero field steps but these are not strictly pTRM checks (although they are called that) because they really test whether the NRM remaining at that temperature has been contaminated by unremoved pTRM tails or CRM.]
Despite its huge popularity and widespread use, the approach of progressively replacing the natural remanence with a thermal remanence has several drawbacks. Alteration of the ability to acquire a pTRM is not the only cause for failure of the assumption of equality of νlab and νanc. Single domain theory and the Law of Reciprocity required by all step-wise heating methods assumes that the remanence acquired by cooling through a given temperature interval is entirely removed by re-heating to the same temperature and cooling in zero field. Yet both experiment (Bol’shakov and Shcherbakova, 1979) and theory (e.g., Dunlop and Xu, 1994) suggest that the essential assumption of equivalence of blocking and unblocking temperatures may break down for larger particles.
Dunlop and Özdemir (2001) illustrated the failure of the reciprocity assumption with a suite of specimens whose grain sizes were well known. First, they imparted a pTRM over a narrow temperature interval of 370–350∘C. They then subjected the specimens to step-wise thermal demagnetization, monitoring the remanence remaining after each treatment step (see Figure 10.4a.) The heavy red line labelled “SD” is the prediction from the law of reciprocity. This assumption is not met by any of the specimens (the smallest of which was 0.6 μm, much larger than SD) and the larger the grain size, the larger the deviation from theory. The portion of pTRM lost by heating to temperatures below the blocking temperature is a low-temperature pTRM tail and that above is a high temperature pTRM tail. These tails have a profound effect on the outcome of double heating experiments as shown in Figure 10.4b. The data sag below the ideal line, becoming markedly curved for grains larger than about a micron.
What causes failure of reciprocity? If the particle is large enough to have domain walls in its remanent state, the behavior is not easily understood by theory. At just below its Curie Temperature the particle would be at saturation. As the particle cools, domain walls will begin to form at some temperature. After cooling all the way to room temperature (the remanent state), the particle will have some net moment because the domain walls distribute themselves such that there is incomplete cancellation, leaving a small net remanence proportional to the applied field for moderate field strengths. As the temperature ramps up again, the walls “walk around” within the particle, perhaps beginning below the blocking temperature as they seek to minimize the magnetostatic energy. If the particle is cooled back to room temperature, there could be a net loss of magnetization, giving rise to low temperature tails. The walls may not actually be destroyed until the temperature is very near the Curie Temperature and some fraction of the pTRM could persist, giving rise to high temperature tails.
A failure of reciprocity means that νlab≠νanc and the key assumptions of the step-wise heating type methods are not met. The Arai plots may be curved as in Figure 10.4b. If any portion of the NRM/TRM data are used instead of the entire temperature spectrum, the result could be biased. For example, the lower temperature portion might be selected on the grounds that the higher temperature portion is affected by alteration. Or, the higher temperature portion might be selected on the grounds that the lower temperature portion is affected by viscous remanence. Both of these interpretations are wrong.
In order to detect inequality of blocking and unblocking and the effect of “pTRM tails”, several embellishments to the step-wise heating experiments have been proposed and more are on the way. One modification is to alternate between the IZ and ZI procedures (the so-called IZZI method of, e.g., Tauxe and Staudigel, 2004; see also Ben-Yosef et al., 2008). The protocol shown in Figure 10.5 not only alternates ZI and IZ steps, but embeds a pTRM check step within each ZI step. There is also a third zero field step inserted between the ZI and IZ steps, labelled pTRM-tail check. This step was first described by Dunlop and Özdemir (1997) but is usually attributed to Riisager and Riisager (2001). It was designed to assess whether the partial thermal remanence gained in the laboratory at a given temperature is completely removed by re-heating to the same temperature. The difference between the two zero-field steps is attributed to a “pTRM tail”. In the original application, the absolute value of the difference was plotted on the vertical axis (Dunlop and Özdemir, 1997; see also Riisager and Riisager, 2001) and was interpreted to be a consequence of an inequality of the unblocking temperature Tub and the original blocking temperature Tb in violation of the law of reciprocity. The IZZI method is extremely sensitive to the presence of pTRM tails, which make the Arai and/or Zijderveld diagrams “zig-zag”, as in the example of a complete IZZI experiment shown in Figure 10.6. The zig-zag behavior was explained by Yu et al. (2004) as the effect of pTRM tails.
In Figure 10.6, we plot the pTRM tail checks from a typical experiment as blue squares along the X axis; note that these are not absolute values, but are the magnitudes of the differences in zero field steps separated by an in-field step at the same temperature. We plot them this way because what is being measured is a difference in the NRM remaining, not the pTRM. It is perhaps surprising that most pTRM tails appear to be negative – not positive, suggesting the dominance of low temperature tails, as opposed to high temperature tails. Note also that the IZ steps are typically farther from the ideal line than are the ZI steps. In any case, significant zig-zagging should raise warning flags about the reliability of data acquired by such non-ideal specimens.
There are several other violations of the fundamental assumptions that require additional tests and/or corrections in the paleointensity experiment besides alteration or failure of reciprocity. For example, if the specimen is anisotropic with respect to the acquisition of thermal remanence (e.g., Aitken et al., 1981), the TRM can be strongly biased (Figure 10.7). If this is the case, the TRM can be corrected by determining the TRM (or the ARM proxy) anisotropy tensor and using matrix multiplication to recover the original magnetic vector (see Section 13.7.1 in Chapter 13 and Selkin et al. (2000) for a more complete discussion). One quick way of detecting whether anisotropy might be a problem is to compare the direction of the pTRM acquired in the laboratory with the laboratory field direction, a parameter called γ in Appendix C.3. If this angle exceeds ∼5∘, the anisotropy tensor should be determined. This will not work if the lab field is applied near the principal direction, where only a change in magnitude is expected, but does work if the laboratory field is applied at an angle to the principal direction.
Differences in laboratory and ancient cooling rate are also important. The approach to equilibrium is a function of time, and slower cooling results in a larger TRM; hence differences in cooling rate between the original remanence acquisition and that acquired in the laboratory will lead to erroneous results (e.g., Halgedahl et al., 1980). Compensating for differences in cooling rate is relatively straightforward if the original cooling rate is known or can be approximated and the specimens behave according to single domain theory (see Figure 10.8). Alternatively, one could take an empirical approach in which the rock is allowed to acquire a pTRM under varying cooling rates (e.g., Genevey and Gallet, 2003), an approach useful for cooling rates of up to a day or two.
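The empirical approach can be sketched in a few lines. The sketch below (all numbers hypothetical) fits pTRM against the logarithm of laboratory cooling time and extrapolates to an assumed ancient cooling time; it illustrates the idea only and is no substitute for the single domain theory of Figure 10.8.

```python
import math

def cooling_rate_correction(cool_times_s, ptrms, anc_time_s):
    """Fit pTRM against ln(cooling time) by least squares and
    extrapolate to the (assumed known) ancient cooling time.
    Returns a multiplicative correction for a paleointensity
    estimate made with the fastest laboratory cooling."""
    x = [math.log(t) for t in cool_times_s]
    n = len(x)
    xbar, ybar = sum(x) / n, sum(ptrms) / n
    slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, ptrms))
             / sum((xi - xbar) ** 2 for xi in x))
    intercept = ybar - slope * xbar
    ptrm_anc = intercept + slope * math.log(anc_time_s)
    return ptrms[0] / ptrm_anc   # multiply the lab-based estimate by this

# hypothetical data: pTRM grows slightly with slower (longer) cooling
times = [3600.0, 7200.0, 14400.0]   # 1, 2 and 4 hour laboratory coolings
ptrms = [1.00, 1.03, 1.06]          # pTRM in arbitrary units
f = cooling_rate_correction(times, ptrms, anc_time_s=3.15e7)  # ~1 yr cooling
```

Because the ancient cooling was slower, the predicted ancient pTRM is larger and the correction factor f is less than one, reducing the raw laboratory estimate.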
The previous section was devoted to experiments in which detection of non-ideal behavior is done by repeating various temperature steps. The full IZZI experiment, including TRM acquisition tests and perhaps even TRM anisotropy or non-linear TRM acquisition tests involves many heating steps (as many as 50!). Each time a specimen is heated, it is exposed to the risk of alteration. Some experimental designs focus on reducing the number of heating steps or the type of heating to minimize the frequently catastrophic consequences of laboratory heating on the results.
There are a number of strategies for reducing the effects of laboratory heating. These include using controlled atmospheres, reducing the number of heating steps, and reducing heating of the matrix by focussing microwaves on the ferromagnetic components of the specimen.
Thellier and Thellier (1959) tried heating specimens in neutral atmospheres. This requires either placing the specimen in a vacuum or using a chemically neutral atmosphere. There are technical difficulties, and most researchers have found minimal improvement in their results.
Reducing the number of heating steps has been approached in several ways. Kono and Ueno (1977) describe in detail a single heating step per temperature method originally suggested by Kono (1974). Assuming that the specimen has a single component of magnetization, which can be isolated after demagnetizing at some low temperature (100∘C), the specimen is heated in a laboratory field applied perpendicular to the NRM, and the pTRM (MpTRM) is obtained by vector subtraction. The goal is that by reducing the number of heatings, the alteration can be reduced to some extent. This method requires strictly uni-vectorial NRMs (an assumption that is difficult to test with the data generated by this method) and rather delicate positioning of specimens in the furnace or fancy coil systems that generally have a limited region of uniform field, reducing the number of specimens that can be analyzed in a single batch. Steps like the pTRM checks and pTRM tail checks are possible with this method, but they necessitate additional (zero field) heating steps.
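Assuming, as the method requires, that the NRM direction is known and the laboratory field is applied exactly perpendicular to it, the vector subtraction is straightforward: the NRM remaining is the component of the measured vector along the NRM direction, and the pTRM is the perpendicular residual. A minimal sketch with hypothetical numbers:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def kono_decompose(measured, nrm_dir):
    """Split a single in-field measurement into NRM remaining
    (the component along the known NRM direction) and pTRM gained
    (the perpendicular residual). nrm_dir must be a unit vector."""
    nrm_remaining = dot(measured, nrm_dir)          # scalar NRM remaining
    residual = [m - nrm_remaining * c for m, c in zip(measured, nrm_dir)]
    ptrm = dot(residual, residual) ** 0.5           # scalar pTRM gained
    return nrm_remaining, ptrm

# hypothetical measurement: NRM along x, lab field applied along y
nrm_rem, ptrm_gained = kono_decompose([0.8, 0.3, 0.0], [1.0, 0.0, 0.0])
# → nrm_rem = 0.8, ptrm_gained = 0.3
```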
A second strategy for reducing the number of heating steps is to treat multiple specimens from a single cooling unit as a homogeneous set and expose each specimen to a limited subset of all the heating steps required for a complete paleointensity experiment. These “multi-specimen” techniques derive from one proposed by Hoffman et al. (1989). Recent incarnations include Hoffman and Biggin (2005) and Dekkers and Böhnel (2006). The basic idea is to take multiple specimens from a given cooling unit and subject them to a reduced number of heating steps. The data are stacked to yield a single paleofield estimate. The Hoffman-Biggin (2005) method has some estimate of the effects of alteration by including at least one double heating step. The method of Dekkers and Böhnel (2006) is somewhat different in that pTRMs are imparted at a temperature thought to exceed the overprint unblocking temperature but remain below the onset of chemical alteration. Each specimen is treated in a different laboratory field strength, applied parallel to the NRM direction. This technique has been sold as being applicable to multi-domain remanences, but the inequality of blocking and unblocking makes this invalid. Moreover, there are few ways to check the assumptions of uni-vectorial NRM, lack of alteration in the lab, and the insidious effect of pTRM tails.
The previous sections were devoted to experiments in which detection of non-ideal behavior is done by repeating various temperature steps. In this section we will briefly introduce an alternative approach, long in use in paleointensity studies, the so-called Shaw method (e.g., Shaw, 1974). There are many variants of the Shaw method and the reader is referred to Tauxe and Yamazaki (2007) for a recent review. In its simplest form, we measure the NRM, then progressively demagnetize it with alternating fields (AF) to establish the coercivity spectrum of the specimen prior to heating. The specimen is then given an anhysteretic remanence (MARM1; see Chapter 7). The use of anhysteretic remanence is usually rationalized by pointing out that in many ways it is analogous to the original TRM (see Dunlop and Özdemir, 1997). MARM1 is then progressively demagnetized to establish the relationship between the coercivity spectrum of the MNRM (presumed to be a thermal remanence) and MARM1 prior to any laboratory heating. As with the step-wise heating methods, MNRM is normalized by a laboratory thermal remanence. But in the case of the Shaw type methods, the specimen is given a total TRM, (MTRM1) which is AF demagnetized as well. Finally, the specimen is given a second ARM (MARM2) and AF demagnetized for the last time.
The basic experiment is shown in Figures 10.9a and b. If the first and second ARMs do not have the same coercivity spectrum as in Figure 10.9b, the coercivity of the specimen has changed and the NRM/TRM ratio is suspect.
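In its simplest form, the Shaw-type paleofield estimate comes from the slope of NRM remaining versus MTRM1 remaining over the AF interval where the two ARMs agree, with Banc = slope × Blab. A toy sketch with hypothetical demagnetization data (real analyses would first verify the ARM1/ARM2 agreement and select the coercivity interval accordingly):

```python
def best_fit_slope(x, y):
    """Ordinary least-squares slope of y on x."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    return (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
            / sum((xi - xbar) ** 2 for xi in x))

# hypothetical AF demagnetization data at the same AF steps
nrm_remaining  = [1.00, 0.80, 0.55, 0.30, 0.10]   # normalized NRM
trm1_remaining = [1.00, 0.78, 0.52, 0.28, 0.09]   # normalized MTRM1

b_lab = 40.0                                      # μT, field used for MTRM1
slope = best_fit_slope(trm1_remaining, nrm_remaining)
b_anc = slope * b_lab                             # paleofield estimate, μT
```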
There are many variants of the Shaw method that seek to improve reliability or success rate and the reader is referred to a review by Tauxe and Yamazaki (2007) for a more complete discussion. The primary reasons stated for using Shaw-type methods as opposed to the theoretically more robust step-wise heating methods are: 1) they are faster, and 2) because the specimen is only heated once (albeit to a high temperature), alteration may be minimized. The first rationale is no longer persuasive because modern thermal ovens have high capacities and step-wise heating methods are certainly not slower than the Shaw method on a per specimen basis, if one analyzes lots of specimens. This is particularly true for the more elaborate Shaw family protocols currently in use. The second rationale may have some validity and warrants further work. The key features of any good experiment are the built-in tests of the important assumptions and current designs of Shaw type experiments do not build in the necessary checks.
Several alternative approaches have been proposed which instead of detecting non-ideal behavior such as alteration, attempt to minimize it (see Tauxe and Yamazaki, 2007 for more complete discussion). These methods include reducing the number of heating steps required (as in the Shaw methods), heating specimens in controlled atmospheres, reducing the time at temperature by for example measuring the specimens at elevated temperature, or using microwaves to excite spin moments as opposed to direct thermal heating. Of these, the microwave paleointensity approach is perhaps the most popular and we will briefly discuss that here.
Until now we have not concerned ourselves with HOW the magnetic moment of a particular grain becomes unblocked. Earlier, we mentioned “thermal energy” and left it at that. But how does thermal energy do the trick?
An external magnetic field generates a torque on the electronic spins, and in isolation, a magnetic moment will respond to the torque in a manner similar in some respects to the way a spinning top responds to gravity: the magnetic moment will precess about the applied field direction, spiraling in to come to rest parallel to it. Because of the strong exchange or superexchange coupling in magnetic phases, spins tend to be aligned parallel (or antiparallel) to one another and the spiraling is done in a coordinated fashion, with neighboring spins as parallel as possible to one another. This phenomenon is known as a spin wave (see Figure 3.10 in Chapter 3).
Raising the temperature of a body transmits energy (via phonons) to the electronic spins, increasing the amplitude of the spin waves. This magnetic energy is quantized in magnons. In the traditional step-wise heating experiment, the entire specimen is heated and the spin waves are excited to the point that some spin vectors may flip their moments as described in Chapter 7.
As in most kitchens, there are two ways of heating things up: the conventional oven and the microwave oven. In the microwave oven, molecules with certain vibrational frequencies (e.g., water) are excited by microwaves. These heat up, passing their heat on to the rest of the pizza (or whatever). If the right microwave frequency is chosen, ferromagnetic particles can also be excited directly, inviting the possibility of heating only the magnetic phases, leaving the matrix alone (e.g., Walton et al., 1993). The rationale for developing this method is to reduce the degree of alteration experienced by the specimen because the matrix often remains relatively cool, while the ferromagnetic particles themselves get hot. But, the magnons get converted to phonons, thereby transferring the heat from the magnetic particle to the matrix encouraging alteration (even melting sometimes!). So, while alteration may in fact be reduced (see, e.g., Hill et al. 2005), it has not yet been eradicated.
The same issues of non-linearity, alteration, reciprocity, anisotropy and cooling rate differences, etc., arise in the microwave approach as in the thermal approach. Ideally, the same experimental protocol could be carried out with microwave ovens as with thermal ovens. In practice, however, it has been quite difficult to repeat the same internal temperature, making double (or even quadruple) heatings challenging. Yet tremendous strides have been made recently in achieving reproducible multiple heating steps (e.g., Hill et al., 2005).
It is likely that the issues of reciprocity of blocking and unblocking in the original (thermally blocked) and the laboratory (microwave unblocked) remanences, and differences in the rate of blocking and unblocking, will remain a problem for some time, as they have for thermally blocked remanences. It is also worth noting that the theoretical equivalence between thermal unblocking and microwave unblocking has not yet been demonstrated. Nonetheless, if alteration can be prevented by this method, and the theoretical underpinnings can be worked out, it is well worth pursuing.
Another very important approach to the paleointensity problem has been to find and exploit materials that are themselves resistant to alteration. There are an increasing variety of promising materials, ranging from quenched materials, to single crystals extracted from otherwise alteration prone rocks, to very slowly cooled plutonic rocks (e.g., layered intrusions). Quenched materials include volcanic glasses (e.g., Pick and Tauxe, 1993; Tauxe, 2006), metallurgical slag (e.g., Ben-Yosef et al., 2008) and welded tuffs (unpublished results). Single crystals of plagioclase extracted from lava flows (see review by Tarduno et al., 2006) can yield excellent results, while the lava flows themselves may be prone to alteration or other non-ideal behavior. Parts of layered intrusions (e.g., Selkin et al., 2000b) can also perform extremely well during the paleointensity experiment.
Sometimes it is difficult or impossible to heat specimens because they will alter in the atmosphere of the lab, or the material is too precious to be subjected to heating experiments (e.g., lunar samples and some archaeological artifacts). If TRM is linear with the applied field, there may be an alternative for order of magnitude guesstimates for paleointensity without heating at all. TRM normalized by a saturation remanence (Mr) can be quasi-linearly related to the applied field up to some value depending on mineralogy and grain size population.
TRM/IRM can at best only give an order of magnitude estimate for absolute paleointensity, and that only for ideal, equant, and small SD magnetic assemblages (see Chapter 7 for theoretical treatment). These strict constraints may make even an order of magnitude guess unreliable. Finally, multi-domain TRMs and IRMs do not respond similarly under AF demagnetization, the former being much more stable than the latter. Nonetheless, if magnetic uniformity can be established, it may in fact be useful for establishing relative paleointensity estimates; this is done routinely in sedimentary paleointensity studies, as we shall see later in the chapter. The caveats concerning single component remanences are still applicable, and perhaps complete AF demagnetization of the NRM would be better than a single “blanket” demagnetization step. Moreover, we should bear in mind that for larger particles, TRM can be strongly non-linear with applied field at even relatively low fields (30 μT) according to the experimental results of Dunlop and Argyle (1997). The problem with the IRM normalization approach is that domain state, linearity of TRM, and the nature of the NRM cannot be assessed. The results are therefore difficult to interpret in terms of ancient fields.
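As an illustration of the idea, the NRM/IRM ratio can be scaled by an empirical calibration factor to give an order-of-magnitude field guess. The factor used below (3000 μT, a value sometimes quoted for ideal SD magnetite assemblages) is an assumption for illustration only; any real application would require its own calibration:

```python
def rem_paleofield(nrm, irm, calib_uT=3000.0):
    """Order-of-magnitude paleofield guess from the ratio NRM/IRM.
    calib_uT is an ASSUMED calibration factor; the real value depends
    on mineralogy and grain size and must be established empirically."""
    return (nrm / irm) * calib_uT

# hypothetical specimen moments (Am^2)
b_guess = rem_paleofield(nrm=2.0e-8, irm=1.5e-6)   # ≈ 40 μT
```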
Given the number of key assumptions in the paleointensity method and the growing complexity of the modern experimental design, there are a bewildering array of statistics that can be calculated to assess the quality of a given data set. Many of these are defined in Appendix C.3 to which the reader is referred for a detailed explanation. There is at present no consensus on which statistics guarantee the reliability of a given result. It is safe to say that the more tests performed (and passed), the greater the confidence in the results. And, the more replicate specimens that are measured and the more samples from different recording media, that are measured yielding consistent results, the more confidence we can have in the conclusions. This is a rapidly developing area of research, so stay tuned!
The principle on which paleointensity studies in sedimentary rocks rests is that DRM is linearly related to the magnitude of the applied field B. We learned in Chapter 7 that this is unlikely to be universally true, yet it is the foundation of all relative paleointensity studies published to date. Forgetting for the moment that non-linear behavior may in fact be frequently found in nature, we will proceed with a discussion of paleointensity in sediments making the first order assumption of linearity.
Following from the introductory discussion of paleointensity in general, we would require a laboratory redeposition experiment that duplicates the natural remanence acquisition process in order to be able to determine absolute paleointensity in sediments. The problem with sedimentary paleointensity data is that laboratory conditions can rarely (if ever) achieve this. Assuming that the remanence is not chemical but depositional in origin, the intensity of remanence is still a complicated function of applied field, magnetic mineralogy, concentration, and even chemistry of the water column.
Under the ideal conditions depicted in Figure 10.10, the initial DRM of a set of specimens deposited under a range of magnetic field intensities (B) is shown as open circles. The relationship is not linear because each specimen has a different response to the applied field (here called magnetic activity [am]) as a result of differences in the amount of magnetic material, magnetic mineralogy, etc. For example, specimens with a higher concentration of magnetic material will have a higher DRM. If [am] can be successfully approximated, for example, by bulk remanences such as IRM or ARM, or by χb (Chapters 7 and 8), then a normalized DRM (shown as dots in Figure 10.10) will reflect at least the relative intensity of the applied field.
Our theoretical understanding of DRM is much less developed than for TRM (Chapter 7). Because of the lack of a firm theoretical foundation for DRM, there is no simple method for determining the appropriate normalization parameter. In Chapters 7 and 8 we considered a variety of theoretical aspects of DRM and various parameters potentially useful for normalization. Many proxies have been proposed ranging from normalization by bulk magnetic properties such as ARM, IRM, or χb or more complicated proxies involving selective demagnetization of the NRM or normalizer or both. One can imagine that even more sophisticated normalization techniques could be devised by targeting particular coercivity fractions discovered by the IRM component diagrams discussed in Chapter 8.
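The simplest version of such a normalization is just a ratio, computed specimen by specimen; real studies would typically demagnetize both the NRM and the normalizer first. A toy sketch with hypothetical down-core data:

```python
def relative_paleointensity(nrm, normalizer):
    """Relative paleointensity proxy: NRM divided, specimen by specimen,
    by a bulk normalizer (ARM, IRM or susceptibility) to compensate for
    variable concentration of magnetic material."""
    return [n / a for n, a in zip(nrm, normalizer)]

# hypothetical down-core values (arbitrary units); the second specimen
# has roughly twice the magnetite content, inflating its raw NRM
nrm = [1.0, 2.2, 0.9]
arm = [10.0, 20.0, 9.5]
rpi = relative_paleointensity(nrm, arm)   # ≈ [0.10, 0.11, 0.095]
```

After normalization the three specimens give comparable values, so the residual variation can be read as relative field strength rather than concentration.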
Tauxe et al. (2006) summarized two major complications in our quest for meaningful relative paleointensity estimates from sediments. First, the size of the floc in which magnetic moments are embedded plays a huge role in the DRM strength, yet estimating original floc size in sediments is a daunting task. Second, DRM is only approximately linearly related to the applied field for the larger floc sizes; small flocs or isolated magnetic particles are likely to be highly non-linear in their magnetic response.
How can sedimentary relative paleointensity data be judged? Here are some thoughts:
SUPPLEMENTAL READINGS: Dunlop and Özdemir (1997), Chapters 8 and 15; Valet (1998); Tauxe and Yamazaki (2007).
a) In this problem, we will use published data to get a feel for “real” paleointensity data. Make sure you have the PmagPy programs working (see Preface). You can find a data set associated with a particular publication (if someone uploaded the data), by using the digital object identifier (DOI) search. For example, the data set of Tauxe et al. (2016) could be located using the syntax:
When you locate the reference, click on the ‘Download Results’ button and select the ‘1 Contribution File’ option and download the file prepared for you. This is a dataset from a bunch of samples that acquired their TRM in known fields.
b) Create a new folder for these data called Myfiles and put the downloaded text file in it. Unpack the data file with ipmag.download_magic from within a Jupyter notebook.
c) Fish out all the data from the 1960 Hawaiian lava flow (sites named ‘Hawaii 1960 Flow’ and ‘hw241’) using Pandas filtering techniques. Now we want to save these data in a MagIC formatted file for use with the PmagPy program Thellier GUI. See the instructions in the notebook _PmagPy_nb.ipynb in the PmagPy data_files/notebooks folder for reading and writing MagIC formatted data files in notebooks, and save the data file in a new project directory called ‘Myfiles’ with the file name ‘measurements.txt’. Open a terminal window (command prompt on Windows machines). On the command line, type: pmag_gui.py. [Remember that some PC installations omit the .py termination and some installations require the pmag_gui_anaconda version.] Choose data_model 3 (the default) and then your “Myfiles” directory.
d) Click on the ‘Thellier GUI’ button.
e) Step through the data by clicking on ‘next’.
f) The location of the 1960 lava flow (really close to the 2018 one!) is 19.52 latitude and -154.81 longitude. Figure out what the field was at the site using the tricks you learned in Chapter 2.
g) What would be some reasonable selection criteria that would select for the accurate results and suppress the inaccurate ones? Is there any objective way to tell “good” from “bad”?
a) Go to the link for the study by Tauxe and Hartl (1997) using the search by DOI option:
Download the contribution file as in Problem 10.1. Make a new project directory (e.g., Myfiles2) and copy the downloaded file into it. Unpack the data file with program ipmag.download_magic from within a Jupyter notebook.
b) Read the data into a Pandas DataFrame (with the full hierarchy) using the pmagpy.new_builder.add_sites_to_meas_table function. Get a list of unique method codes for plotting the measurement data using the df.method_codes.unique() method for the DataFrame. In this case they are ’LT-AF-Z’, ’LT-AF-I’, ’LT-IRM’, ’LP-X’. On the http://earthref.org/MAGIC website, follow the link to “Method Codes”. Examine the available options under “Lab Protocol” (LP-) and “Lab Treatment” (LT-) and find the option that describes these. In this case, we have alternating field in zero lab field (AF demagnetization), alternating field in a lab field (ARM acquisition), IRM and magnetic susceptibility.
c) Get a merged DataFrame with ARM and IRM data for each specimen. You may have to use the function df.dropna to get rid of specimens that do not have both. Then plot the ARM versus IRM data for this data set. Note that the column header for magnetization for this data file is magn_mass and when you merge them, there will be a value for magn_mass_x and magn_mass_y for the ‘left’ and ‘right’ dataframes specified in the merge command.
d) Now plot relative intensity versus age. The relative paleointensity is in the int_rel column of the specimens.txt table and the age information is in the sites.txt table. There are two different versions of this data set, the original one (called ‘This study’) and one in the compilation of Tauxe & Yamazaki (2007). So, you should filter for the latter using the df.Series.str.contains() method. Then you will have to merge the data in the specimens and sites tables. If you make a new column in the (filtered) specimens DataFrame called ’site’, which is identical to the ’specimen’ column, you can merge on site to pair the age information with the relative paleointensity information.
e) These data are supposedly relative paleointensity data from the Oligocene in the South Atlantic. What would convince you that these were “real”?
We have laid out the need for statistical analysis of paleomagnetic data in the preceding chapters. For instance, we require a method for determining a mean direction from a set of observations. Such a method should provide some measure of uncertainty in the mean direction. Additionally, we need methods for assessing the significance of field tests of paleomagnetic stability. In this chapter, we introduce basic statistical methods for analysis of directional data. It is sometimes said that statistical analyses are used by scientists in the same manner that a drunk uses a light pole: more for support than for illumination. Although this might be true, statistical analysis is fundamental to any paleomagnetic investigation. An appreciation of the basic statistical methods is required to understand paleomagnetism.
Most of the statistical methods used in paleomagnetism have direct analogies to “planar” statistics. We begin by reviewing the basic properties of the normal distribution. This distribution is used for statistical analysis of a wide variety of observations and will be familiar to many readers. We then tackle statistical analysis of directional data by analogy with the normal distribution. Although the reader might not follow all aspects of the mathematical formalism, this is no cause for alarm. Graphical displays of functions and examples of statistical analysis will provide the more important intuitive appreciation for the statistics.
Any statistical method for determining a mean (and confidence limit) from a set of observations is based on a probability density function. This function describes the distribution of observations for a hypothetical, infinite set of observations called a population. The Gaussian probability density function (normal distribution) has the familiar bell-shaped form shown in Figure 11.1a. The meaning of the probability density function f(z) is that the proportion of observations within an interval of incremental width dz centered on z is f(z)dz.
The Gaussian probability density function is given by:
f(x) = (1/(σ√(2π))) exp(−(x − μ)²/(2σ²))

where x is the variable measured, μ is the true mean, and σ is the standard deviation. The parameter μ determines the value of x about which the distribution is centered, while σ determines the width of the distribution about the true mean. By performing the required integrals (computing the area under the curve), it can be shown that 68% of the readings in a normal distribution are within σ of μ, while 95% are within 1.96σ of μ.
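These percentages are easy to verify numerically by drawing a large sample from a normal distribution:

```python
import random

random.seed(0)
mu, sigma, N = 10.0, 3.0, 100_000
sample = [random.gauss(mu, sigma) for _ in range(N)]

# fractions of observations within 1 sigma and 1.96 sigma of the mean
within_1sigma = sum(abs(x - mu) <= sigma for x in sample) / N
within_196sigma = sum(abs(x - mu) <= 1.96 * sigma for x in sample) / N
# with N this large the fractions come out very near 0.68 and 0.95
```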
The usual situation is that one has made a finite number of measurements of a variable x. In the literature of statistics, this set of measurements is referred to as a sample. Let us say that we made 1000 measurements of some parameter, say bed thickness (in cm) in a particular sedimentary formation. We plot these in histogram form in Figure 11.1b.
By using the methods of Gaussian statistics, one is supposing that the observed sample has been drawn from a population of observations that is normally distributed. The true mean and standard deviation of the population are, of course, unknown. But the following methods allow estimation of these quantities from the observed sample. A normal distribution can be characterized by two parameters, the mean (μ) and the variance σ². How to estimate the parameters of the underlying distribution is the art of statistics. We all know that the arithmetic mean of a batch of data xᵢ drawn from a normal distribution is calculated by:

x̄ = (1/N) Σᵢ xᵢ
The mean estimated from the data shown in Figure 11.1b is 10.09. If we had measured an infinite number of bed thicknesses, we would have gotten the bell curve shown as the dashed line and calculated a mean of 10.
The “spread” in the data is characterized by the variance σ². Variance for normal distributions can be estimated by the statistic s²:

s² = (1/(N − 1)) Σᵢ (xᵢ − x̄)²
In order to get the units right on the spread about the mean (cm – not cm²), we have to take the square root of s². The statistic s gives an estimate of the standard deviation σ and defines the bounds around the mean that include 68% of the values. The 95% confidence bounds are given by 1.96s (this is what a “2-σ error” is), and should include 95% of the observations. The bell curve shown in Figure 11.1b has a σ (standard deviation) of 3, while the s is 2.97.
If you repeat the bed measuring experiment a few times, you will never get exactly the same measurements in the different trials. The mean and standard deviations measured for each trial then are “sample” means and standard deviations. If you plotted up all those sample means, you would get another normal distribution whose mean should be pretty close to the true mean, but with a much narrower standard deviation. In Figure 11.1c we plot a histogram of means from 100 such trials of 1000 measurements each drawn from the same distribution of μ = 10, σ = 3. In general, we expect the standard deviation of the means (or standard error of the mean, sm) to be related to s by

sm = s/√N
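This relationship can be checked with a quick numerical experiment, repeating the bed-measuring exercise many times and comparing the spread of the trial means with s/√N:

```python
import math
import random

random.seed(42)
mu, sigma, N, trials = 10.0, 3.0, 1000, 100

# repeat the experiment `trials` times, keeping only each trial's mean
means = []
for _ in range(trials):
    draw = [random.gauss(mu, sigma) for _ in range(N)]
    means.append(sum(draw) / N)

# spread of the trial means vs the predicted standard error of the mean
mbar = sum(means) / trials
sd_of_means = math.sqrt(sum((m - mbar) ** 2 for m in means) / (trials - 1))
predicted = sigma / math.sqrt(N)   # s_m = s/sqrt(N), about 0.095 here
```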
What if we were to plot up a histogram of the estimated variances as in Figure 11.1c? Are these also normally distributed? The answer is no, because variance is a squared parameter relative to the original units. In fact, the distribution of variance estimates from normal distributions is expected to be chi-squared (χ²). The width of the χ² distribution is also governed by how many measurements were made. The so-called number of degrees of freedom (ν) is given by the number of measurements made minus the number of measurements required to make the estimate, so ν for our case is N − 1. Therefore we expect the variance estimates to follow a χ² distribution with ν = N − 1 degrees of freedom, written χν².
The estimated standard error of the mean, sm, provides a confidence limit for the calculated mean. Of all the possible samples that can be drawn from a particular normal distribution, 95% have means, x̄, within 2sm of μ. (Only 5% of possible samples have means that lie farther than 2sm from μ.) Thus the 95% confidence limit on the calculated mean, x̄, is 2sm, and we are 95% certain that the true mean of the population from which the sample was drawn lies within 2sm of x̄. The estimated standard error of the mean, sm, decreases as 1/√N. Larger samples provide more precise estimations of the true mean; this is reflected in the smaller confidence limit with increasing N.
We often wish to consider ratios of variances derived from normal distributions (for example, to decide if the data are more scattered in one data set relative to another). In order to do this, we must know what ratio would be expected from data sets drawn from the same distribution. Ratios of such variances follow a so-called F distribution with ν1 and ν2 degrees of freedom for the two data sets. This is denoted F[ν1,ν2]. Thus if the ratio F, given by

F = s1²/s2²,

exceeds the critical value of F[ν1,ν2] at the chosen level of confidence, the hypothesis that the two variances are the same can be rejected.
A related test to the F test is Student’s t-test. This test compares differences in normal data sets and provides a means for judging their significance. Given two sets of measurements of bed thickness, for example in two different sections, the t test addresses the likelihood that the difference between the two means is significant at a given level of probability. If the estimated means and standard deviations of the two sets of N1 and N2 measurements are x̄1, s1 and x̄2, s2 respectively, the t statistic can be calculated by:

t = (x̄1 − x̄2) / (sp √(1/N1 + 1/N2)),

where sp² = [(N1 − 1)s1² + (N2 − 1)s2²] / (N1 + N2 − 2) is the pooled estimate of the variance.
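With the pooled (equal-variance) form of the statistic, the calculation looks like this (hypothetical bed thicknesses):

```python
import math

def t_statistic(x1, x2):
    """Two-sample t statistic using the pooled (equal-variance) estimate."""
    n1, n2 = len(x1), len(x2)
    m1, m2 = sum(x1) / n1, sum(x2) / n2
    v1 = sum((x - m1) ** 2 for x in x1) / (n1 - 1)   # sample variances
    v2 = sum((x - m2) ** 2 for x in x2) / (n2 - 1)
    sp = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / (sp * math.sqrt(1 / n1 + 1 / n2))

# hypothetical bed thicknesses (cm) from two different sections
t = t_statistic([9.8, 10.1, 10.3, 9.9, 10.2],
                [11.0, 11.4, 10.9, 11.2, 11.1])
# |t| is large here, so the two section means differ significantly
```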
We turn now to the trickier problem of sets of measured vectors. We will consider the case in which all vectors are assumed to have a length of one, i.e., these are unit vectors. Unit vectors are just “directions”. Paleomagnetic directional data are subject to a number of factors that lead to scatter. These include:
Some of these sources of scatter (e.g., items 1, 2 and perhaps 6 above) lead to a symmetric distribution about a mean direction. Other sources of scatter contribute to distributions that are wider in one direction than another. For example, in the extreme case, item 4 leads to a girdle distribution whereby directions are smeared along a great circle. It would be handy to be able to calculate a mean direction for data sets and to quantify the scatter.
In order to calculate mean directions with confidence limits, paleomagnetists rely heavily on the special statistics known as Fisher statistics (Fisher, 1953), which were developed for assessing dispersion of unit vectors on a sphere. They are applicable to directional data that are dispersed in a symmetric manner about the true direction. We show some examples of such data in Figure 11.2 with varying amounts of scatter, from highly scattered in the top row to rather concentrated in the bottom row. All the data sets were drawn from a Fisher distribution with a vertical true direction.
In most instances, paleomagnetists assume a Fisher distribution for their data because the statistical treatment allows calculation of confidence intervals, comparison of mean directions, comparison of scatter, etc. The average inclination, calculated as the arithmetic mean of the inclinations, will never be vertical unless all the inclinations are vertical. In the following, we will demonstrate the proper way to calculate mean directions and confidence regions for directional data that are distributed in the manner shown in Figure 11.2. We will also briefly describe several useful statistical tests that are popular in the paleomagnetic literature.
R. A. Fisher developed a probability density function applicable to many paleomagnetic directional data sets, known as the Fisher distribution (Fisher, 1953). In Fisher statistics each direction is given unit weight and is represented by a point on a sphere of unit radius. The Fisher distribution function PdA(α) gives the probability per unit angular area of finding a direction within an angular area, dA, centered at an angle α from the true mean. The angular area, dA, is expressed in steradians, with the total angular area of a sphere being 4π steradians. Directions are distributed according to the Fisher probability density, given by:

PdA(α) = [κ/(4π sinhκ)] exp(κcosα),    (11.3)
where α is the angle between the unit vector and the true direction and κ is a precision parameter such that as κ →∞, dispersion goes to zero.
We can see in Figure 11.3a the probability of finding a direction within an angular area dA centered α degrees away from the true mean for different values of κ. κ is a measure of the concentration of the distribution about the true mean direction. The larger the value of κ, the more concentrated the distribution; κ is 0 for a distribution of directions that is uniform over the sphere and approaches ∞ for directions concentrated at a point.
If ϕ is taken as the azimuthal angle about the true mean direction, the probability of a direction within an angular area, dA, can be expressed as

PdA(α) dA = PdA(α) sinα dα dϕ.
The sinα term arises because the area of a band of width dα varies as sinα. It should be understood that the Fisher distribution is normalized so that

∫0→2π ∫0→π PdA(α) sinα dα dϕ = 1.    (11.4)
Equation 11.4 simply indicates that the probability of finding a direction somewhere on the unit sphere must be unity. The probability Pdα of finding a direction in a band of width dα between α and α + dα is given by:

Pdα(α) dα = 2π sinα PdA(α) dα = [κ/(2 sinhκ)] exp(κcosα) sinα dα.
This probability (for κ = 5, 10, 50 and 100) is shown in Figure 11.3b, where the effect of the sinα term is apparent. Equation 11.3 for the Fisher distribution function suggests that declinations are symmetrically distributed about the mean. In “data” coordinates, this means that the declinations are uniformly distributed from 0 → 360∘. Furthermore, the probability of finding a direction an angle α away from the mean decays exponentially.
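To build some numerical intuition for the density and its normalization, the integral in Equation 11.4 can be checked directly (a sketch; the function names here are invented):

```python
import numpy as np
from scipy.integrate import quad

def fisher_pdf(alpha, kappa):
    """Fisher probability per unit angular area at angle alpha
    (in radians) from the true mean."""
    return kappa / (4 * np.pi * np.sinh(kappa)) * np.exp(kappa * np.cos(alpha))

def fisher_band(alpha, kappa):
    """Probability density for the band between alpha and alpha + d(alpha);
    the 2*pi*sin(alpha) factor is the area of that band."""
    return 2 * np.pi * np.sin(alpha) * fisher_pdf(alpha, kappa)

# integrating over all alpha must give unity
total, _ = quad(fisher_band, 0, np.pi, args=(10,))
print(round(total, 6))
```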
Because the intensity of the magnetization has little to do with the validity of the measurement (except for very weak magnetizations), it is customary to assign unit length to all directions. The mean direction is calculated by first converting the individual moment directions (mi) (see Figure 11.4), which may be expressed as declination and inclination (Di,Ii), to cartesian coordinates (x1,x2,x3) by the methods given in Chapter 2. Following the logic for vector addition explained in Appendix A.3.2, the length of the vector sum, or resultant vector R, is given by:

R² = (Σi x1i)² + (Σi x2i)² + (Σi x3i)².
The relationship of R to the N individual unit vectors is shown in Figure 11.4. R is always < N and approaches N only when the vectors are tightly clustered. The mean direction components are given by:

x̄1 = (1/R) Σi x1i,  x̄2 = (1/R) Σi x2i,  x̄3 = (1/R) Σi x3i.
These cartesian coordinates can, of course, be converted back to geomagnetic elements (D,I) by the familiar method described in Chapter 2.
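The whole procedure can be sketched in a few lines (the site directions are invented; the direction cosines follow the x = north, y = east, z = down convention of Chapter 2):

```python
import numpy as np

def fisher_mean(decs, incs):
    """Mean direction of unit vectors given as declination and
    inclination in degrees; returns mean D, mean I, and resultant R."""
    d, i = np.radians(decs), np.radians(incs)
    x = np.cos(i) * np.cos(d)       # north
    y = np.cos(i) * np.sin(d)       # east
    z = np.sin(i)                   # down
    xs, ys, zs = x.sum(), y.sum(), z.sum()
    R = np.sqrt(xs**2 + ys**2 + zs**2)
    D = np.degrees(np.arctan2(ys, xs)) % 360.0
    I = np.degrees(np.arcsin(zs / R))
    return D, I, R

decs = np.array([350.0, 5.0, 358.0, 10.0, 352.0])   # hypothetical site data
incs = np.array([55.0, 60.0, 52.0, 58.0, 62.0])
D, I, R = fisher_mean(decs, incs)
print(D, I, R)    # R approaches N = 5 for this tight cluster
```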
Having calculated the mean direction, the next objective is to determine a statistic that can provide a measure of the dispersion of the population of directions from which the sample data set was drawn. One measure of the dispersion of a population of directions is the precision parameter, κ. From a finite sample set of directions, κ is unknown, but a best estimate of κ can be calculated by

k = (N − 1)/(N − R),
where N is the number of data points. Using this estimate of κ, we estimate the circle of 95% confidence (p = 0.05) about the mean, α95, by:

α95 = cos⁻¹[1 − ((N − R)/R)((1/p)^(1/(N−1)) − 1)].
In the classic paleomagnetic literature, α95 was further approximated by:

α95 ≈ 140∘/√(kN).
By direct analogy with Gaussian statistics, the angular variance of a set of directions is:

S² = (1/(N − 1)) Σi Δi²,

where Δi is the angle between the ith direction and the calculated mean direction. The estimated circular (or angular) standard deviation is S, which can be approximated by:

CSD ≈ 81∘/√k,
which is the circle containing ∼68% of the data.
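These dispersion and confidence statistics follow directly from N and R; a sketch with invented numbers (the function name is mine):

```python
import numpy as np

def fisher_confidence(N, R, p=0.05):
    """Best estimate of kappa, the confidence cone about the mean,
    and the classic 140/sqrt(kN) approximation, from N directions
    with resultant length R."""
    k = (N - 1) / (N - R)
    a95 = np.degrees(np.arccos(
        1 - ((N - R) / R) * ((1 / p)**(1 / (N - 1)) - 1)))
    a95_approx = 140.0 / np.sqrt(k * N)
    return k, a95, a95_approx

k, a95, a95_approx = fisher_confidence(N=10, R=9.8)
print(k, a95, a95_approx)   # the approximation is close to the full formula
```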
Some practitioners use the statistic δ given by:

δ = cos⁻¹(R/N),
because of its ease of calculation and the intuitive appeal (e.g., Figure 11.4) that δ decreases as R approaches N. In practice, when N is larger than about 10–20, CSD and δ are nearly equal.
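The near-equality of the two estimates is easy to check numerically (a sketch with invented values of N and R):

```python
import numpy as np

def csd(k):
    """Circular standard deviation approximated as 81/sqrt(k)."""
    return 81.0 / np.sqrt(k)

def delta(N, R):
    """The delta statistic: the angle whose cosine is R/N."""
    return np.degrees(np.arccos(R / N))

N, R = 20, 19.5
k = (N - 1) / (N - R)
print(csd(k), delta(N, R))   # the two estimates nearly coincide
```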
When we calculate the mean direction, a dispersion estimate, and a confidence limit, we are supposing that the observed data came from random sampling of a population of directions accurately described by the Fisher distribution. But we do not know the true mean of that Fisherian population, nor do we know its precision parameter κ. We can only estimate these unknown parameters. The calculated mean direction of the directional data set is the best estimate of the true mean direction, while k is the best estimate of κ. The confidence limit α95 is a measure of the precision with which the true mean direction has been estimated. One is 95% certain that the unknown true mean direction lies within α95 of the calculated mean. The obvious corollary is that there is a 5% chance that the true mean lies more than α95 from the calculated mean.
Having buried the reader in mathematical formulations, we present the following illustrations to develop some intuitive appreciation for the statistical quantities. One essential concept is the distinction between statistical quantities calculated from a directional data set and the unknown parameters of the sampled population.
Consider the various sets of directions plotted as equal area projections (see Chapter 2) in Figure 11.2. These are all synthetic data sets drawn from Fisher distributions with means of a single, vertical direction. Each of the three diagrams in a row is a replicate sample from the same distribution. The top row was drawn from a distribution with κ = 5, the middle row with κ = 10, and the bottom row with κ = 50. For each synthetic data set, we estimated D, I, κ and α95 (shown as insets to the equal area diagrams).
There are several important observations to be taken from these examples. Note that the calculated mean direction is never exactly the true mean direction (I = +90∘). The calculated mean inclination I varies from 78.6∘ to 89.3∘, and the mean declinations fall within all quadrants of the equal-area projection. The calculated mean direction thus randomly dances about the true mean direction and deviates from the true mean by between 0.7∘ and 11.4∘. The calculated k statistic varies considerably among replicate samples as well. The variation of k and differences in angular variance of the data sets with the same underlying distribution are simply due to the vagaries of random sampling.
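This sampling variability is easy to reproduce. Drawing replicate samples from a Fisher distribution (here by inverting the cumulative distribution of cosα, with uniform azimuths; a sketch assuming a vertical true mean, as in Figure 11.2) shows the k estimate scattering around the true κ:

```python
import numpy as np

def draw_fisher(N, kappa, rng):
    """Draw N unit vectors from a Fisher distribution with a vertical
    true mean, by inverting the CDF of cos(alpha); azimuths uniform."""
    u = rng.random(N)
    t = np.log(np.exp(-kappa) + u * (np.exp(kappa) - np.exp(-kappa))) / kappa
    phi = 2 * np.pi * rng.random(N)
    s = np.sqrt(1.0 - t**2)
    return np.column_stack([s * np.cos(phi), s * np.sin(phi), t])

rng = np.random.default_rng(0)
for trial in range(3):                       # replicate samples
    xyz = draw_fisher(20, kappa=50, rng=rng)
    R = np.linalg.norm(xyz.sum(axis=0))
    print(round((20 - 1) / (20 - R), 1))     # k varies from sample to sample
```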
The confidence limit α95 varies from 19.9∘ to 4.3∘ and is shown by the circle surrounding the calculated mean direction.