PmagPy Cookbook

Table of Chapters

1 Installing PmagPy
2 Pmag GUI Tutorial
3 MagIC GUI Tutorial
4 Survival computer skills
5 The PmagPy software package
6 The MagIC database and file formats
7 Introduction to Python Programming
8 Jupyter Notebooks
9 Troubleshooting
Bibliography

Contents

1 Installing PmagPy
 1.1 Standalone GUI download
  1.1.1 OSX Standalone download
  1.1.2 Windows Standalone download
  1.1.3 Linux Standalone download
 1.2 Full PmagPy install
  1.2.1 Installing Python
  1.2.2 Installing PmagPy
  1.2.3 Finishing up
  1.2.4 For Anaconda users on OSX
 1.3 Developer install of PmagPy
  1.3.1 Developer install for Windows
  1.3.2 Developer install for OS X/Linux
 1.4 Next steps
2 Pmag GUI Tutorial
 2.1 Converting magnetometer files to MagIC format
 2.2 Optional: Calculate geographic / tilt-corrected direction
 2.3 Filling in metadata
 2.4 Demag GUI quick start
 2.5 Thellier GUI quick start
 2.6 Upload to the database
 2.7 Downloading data from MagIC
 2.8 Preparing for MagIC
  2.8.1 Field and sampling information
  2.8.2 Supported Rock Magnetometer files
3 MagIC GUI Tutorial
 3.1 Starting with MagIC GUI
 3.2 MagIC GUI example dataset
4 Survival computer skills
 4.1 Finding your command line
 4.2 File systems
 4.3 Moving around in the file system
 4.4 Redirecting input and output
 4.5 Text editors
5 The PmagPy software package
 5.1 Graphical data analysis programs
  5.1.1 demag_gui.py
  5.1.2 thellier_gui.py
  5.1.3 MagIC GUI
 5.2 Command line PmagPy programs
  5.2.1 aarm_magic.py
  5.2.2 angle.py
  5.2.3 ani_depthplot.py
  5.2.4 aniso_magic.py
  5.2.5 apwp.py
  5.2.6 atrm_magic.py
  5.2.7 azdip_magic.py
  5.2.8 b_vdm.py
  5.2.9 basemap_magic.py
  5.2.10 biplot_magic.py
  5.2.11 bootams.py
  5.2.12 cart_dir.py
  5.2.13 chartmaker.py
  5.2.14 chi_magic.py
  5.2.15 combine_magic.py
  5.2.16 common_mean.py
  5.2.17 cont_rot.py
  5.2.18 convert2unix.py
  5.2.19 convert_samples.py
  5.2.20 core_depthplot.py
  5.2.21 curie.py
  5.2.22 customize_criteria.py
  5.2.23 dayplot_magic.py
  5.2.24 demag_gui.py
  5.2.25 di_eq.py
  5.2.26 di_geo.py
  5.2.27 di_rot.py
  5.2.28 di_tilt.py
  5.2.29 di_vgp.py
  5.2.30 dipole_pinc.py
  5.2.31 dipole_plat.py
  5.2.32 dir_cart.py
  5.2.33 dmag_magic.py
  5.2.34 download_magic.py
  5.2.35 eigs_s.py
  5.2.36 eq_di.py
  5.2.37 eqarea.py
  5.2.38 eqarea_ell.py
  5.2.39 eqarea_magic.py
  5.2.40 find_EI.py
  5.2.41 fisher.py
  5.2.42 fishqq.py
  5.2.43 fishrot.py
  5.2.44 foldtest.py
  5.2.45 foldtest_magic.py
  5.2.46 gaussian.py
  5.2.47 gobing.py
  5.2.48 gofish.py
  5.2.49 gokent.py
  5.2.50 goprinc.py
  5.2.51 grab_magic_key.py
  5.2.52 histplot.py
  5.2.53 hysteresis_magic.py
  5.2.54 igrf.py
  5.2.55 install_etopo.py
  5.2.56 incfish.py
  5.2.57 irmaq_magic.py
  5.2.58 k15_magic.py
  5.2.59 k15_s.py
  5.2.60 kly4s_magic.py
  5.2.61 lnp_magic.py
  5.2.62 lowrie.py
  5.2.63 lowrie_magic.py
  5.2.64 magic_select.py
  5.2.65 make_magic_plots.py
  5.2.66 Measurement Import Scripts
  5.2.67 measurements_normalize.py
  5.2.68 mk_redo.py
  5.2.69 nrm_specimens_magic.py
  5.2.70 orientation_magic.py
  5.2.71 parse_measurements.py
  5.2.72 pca.py
  5.2.73 plotxy.py
  5.2.74 plot_cdf.py
  5.2.75 plot_magic_keys.py
  5.2.76 plot_map_pts.py
  5.2.77 plotdi_a.py
  5.2.78 pmag_results_extract.py
  5.2.79 pt_rot.py
  5.2.80 qqplot.py
  5.2.81 quick_hyst.py
  5.2.82 revtest.py
  5.2.83 revtest_magic.py
  5.2.84 revtest_mm1990.py
  5.2.85 s_eigs.py
  5.2.86 s_geo.py
  5.2.87 s_hext.py
  5.2.88 s_tilt.py
  5.2.89 s_magic.py
  5.2.90 scalc.py
  5.2.91 scalc_magic.py
  5.2.92 site_edit_magic.py
  5.2.93 specimens_results_magic.py
  5.2.94 stats.py
  5.2.95 strip_magic.py
  5.2.96 sundec.py
  5.2.97 thellier_magic.py
  5.2.98 thellier_magic_redo.py
  5.2.99 tk03.py
  5.2.100 uniform.py
  5.2.101 update_measurements.py
  5.2.102 upload_magic.py
  5.2.103 vdm_b.py
  5.2.104 vector_mean.py
  5.2.105 vgp_di.py
  5.2.106 vgpmap_magic.py
  5.2.107 watsons_f.py
  5.2.108 watsons_v.py
  5.2.109 zeq.py
  5.2.110 zeq_magic.py
  5.2.111 zeq_magic_redo.py
 5.3 Complaints department
6 The MagIC database and file formats
 6.1 Structure of the database tables
 6.2 A word about method codes
 6.3 Uploading to MagIC
7 Introduction to Python Programming
8 Jupyter Notebooks
9 Troubleshooting
 9.1 Problems with installing Python or PmagPy
 9.2 Problems updating Canopy
 9.3 Error loading dlls (windows only)
 9.4 Problems with a specific program
 9.5 Other problems


PmagPy Cookbook


April 15, 2018
Dear Reader,

This documentation is updated from that in the book entitled Essentials of Paleomagnetism by Tauxe et al. (2010). This cookbook was designed as a companion website to the book Essentials of Paleomagnetism, 4th Web Edition. Chapter references to this companion book are, for example, “Essentials Chapter 1”.

There are many chefs who contributed to this work, in particular, the MagIC Database Team (Cathy Constable, Anthony Koppers, Rupert Minnett, Nick Jarboe, Ron Shaar, and Lori Jonestrask). Nick Swanson-Hysell (UC Berkeley) contributed the demag_gui and Jupyter notebook documentation. The PmagPy project is supported by grants from the National Science Foundation.

Users of PmagPy should cite the open access article:

Tauxe, L., R. Shaar, L. Jonestrask, N. L. Swanson-Hysell, R. Minnett, A. A. P. Koppers, C. G. Constable, N. Jarboe, K. Gaastra, and L. Fairchild (2016), PmagPy: Software package for paleomagnetic data analysis and a bridge to the Magnetics Information Consortium (MagIC) Database, Geochem. Geophys. Geosyst., 17, doi:10.1002/2016GC006307.

Lisa Tauxe
Scripps Institution of Oceanography
La Jolla, CA 92093
April 15, 2018
http://magician.ucsd.edu/

Chapter 1
Installing PmagPy

You have three options for downloading and using PmagPy.

1.2.2 Installing PmagPy

Here we provide instructions for using pip to install PmagPy from the command line. “Pip” is a package manager that comes standard with any modern Python distribution (NB: you will have to update it to the most recent version). Pip allows you to install and uninstall pmagpy (the package with all the low-level functionality of PmagPy) and pmagpy-cli (which provides the PmagPy command-line interface, with all the programs you may use to download/upload, visualize, and manipulate data).

To install using pip:
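A minimal sketch, using the two package names given above (and assuming pip itself has been updated as noted):

    pip install --upgrade pip
    pip install pmagpy
    pip install pmagpy-cli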

1.2.3 Finishing up

Now your installation should be complete. There are a few more details:

1.2.4 For Anaconda users on OSX

We have attempted to support both Canopy and Anaconda Python. However, there are some differences in the way Anaconda Python interacts with the screen. Because of this, Anaconda users will need to use a different command to access the PmagPy GUIs. All four GUIs – Pmag GUI, MagIC GUI, Demag GUI, and Thellier GUI – can be invoked on a Mac with Anaconda Python. However, instead of calling them in the standard way on the command line:

pmag_gui.py

You will use:

pmag_gui_anaconda

If you try to use the first syntax, you will see an error message like this:

This program needs access to the screen.  
Please run with a Framework build of python, and only when you are  
logged in on the main display of your Mac.
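
The same _anaconda suffix applies to the other three GUIs, whose commands appear in their respective sections later in this cookbook:

    magic_gui_anaconda
    demag_gui_anaconda
    thellier_gui_anaconda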

Windows Anaconda users still invoke the GUIs without this trick:

pmag_gui

1.3 Developer install of PmagPy

If you are a developer, and wish to download PmagPy from the Github repository, you may want to add PmagPy directly to your path instead of installing via pip.

*Note*: you cannot have both a developer install and a pip install. If you want to do a developer install, make sure you uninstall pmagpy/pmagpy-cli first if you have already installed them.
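
If you have a pip installation to remove, the standard pip commands cover both packages:

    pip uninstall pmagpy
    pip uninstall pmagpy-cli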

If you’re wondering whether you should do a pip install or a developer install, it depends on what you’re hoping to do with PmagPy. The benefit of a regular pip installation is that you will get a tested, stable version of PmagPy. You can install everything you need quickly and easily, and you’ll be able to use the full functionality. The benefit of a developer installation is that you will be able to stay up-to-date with what other developers are working on. You will also be able to make changes and use those changes immediately. The downside is that the developer install may be buggier and requires you to edit your $PATH.

Check out our guidelines on how to contribute to PmagPy, which cover how to raise issues, request features, and make pull requests.

If you want to contribute to PmagPy or modify PmagPy programs for your own use, you should do a developer install, otherwise go with the regular pip install. The developer install follows these steps:

1.3.1 Developer install for Windows

For Windows users:

  1. Download the zipfile from the Github repository.
  2. Drag the zip file PmagPy-master to your Desktop and extract all files (unzip).
  3. Open your command prompt as an admin (right click on the Command Prompt icon and select “Run as administrator”).
  4. Try running: python dev_setup.py install
  5. If that doesn’t work, you will also need to specify the location of your Python installation. To figure that out, you can run the following two commands:
        cd \  
        dir python.exe /s /p

    This can take a while, so be patient. You should get a result that looks something like this:

        C:\Users\USERNAME\AppData\Local\Continuum\Anaconda3

    Once you’ve done that, navigate back into your PmagPy directory and run dev_setup.py again:

        python dev_setup.py install -p path_to_python

  6. dev_setup.py edits your $PATH and $PYTHONPATH variables. If you want to look at or edit those variables directly, see this information about setting your PATH.
  7. After completing the developer install, you will need to restart your command line. After restarting, you should be able to use all PmagPy functionality as if you had done a full pip install. The only difference is that you will be able to stay up-to-date with PmagPy development and make edits in PmagPy code which will be immediately available in your system.

1.3.2 Developer install for OS X/Linux

For Mac/Linux users:

  1. Download the zipfile from the Github repository.
  2. Drag PmagPy-master to your Desktop and unzip.
  3. Open your command line and navigate to your PmagPy directory:
      cd ~/Desktop/PmagPy-master

  4. Run python dev_setup.py install. For more information on how to install, run python dev_setup.py -h
  5. After completing the developer install, you will need to restart your command line. After restarting, you should be able to use all PmagPy functionality as if you had done a full pip install. The only difference is that you will be able to stay up-to-date with PmagPy development and make edits in PmagPy code which will be immediately available in your system.

If you care about the details, here is what the developer install script actually does. It adds these lines, which edit $PATH, to your .bashrc file:

    export PATH=/Users/***/Desktop/PmagPy/programs:$PATH  
    export PATH=/Users/***/Desktop/PmagPy/programs/conversion_scripts:$PATH

And this line, which edits $PYTHONPATH, to your .bashrc file:

    export PYTHONPATH=$PYTHONPATH:/Users/***/Desktop/PmagPy/

where *** is your username. (Of course, these lines will depend on exactly where you have put your PmagPy folder – if it lives in your home directory, your $PATH will point there instead:

    export PATH=/Users/***/PmagPy/programs:$PATH

and so on.)
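
Note that edits to .bashrc take effect in new terminal windows; to apply them to a shell that is already open, you can type:

    source ~/.bashrc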

1.4 Next steps

Where do you want to go from here?

Chapter 2
Pmag GUI Tutorial

Pmag GUI is a Graphical User Interface (GUI) written by Lori Jonestrask and Ron Shaar that provides a quick path to the main PmagPy programs. The work flow is illustrated schematically here:

[figure]

Note: Pmag GUI is available for use with both 3.0 and 2.5 data. All new contributions should be built up using 3.0, but 2.5 may be used for legacy data sets and then later upgraded using the MagIC site. You may specify which data model you want to use on the command-line using the switch “-DM”, or in the window that pops up when you first open Pmag GUI.

[figure]

  1. Make sure you have followed all of the steps in the Installing PmagPy section.

  2. When you start Pmag GUI, the first step is to change directories into a ‘Project Directory’. For each study, create a directory with a name that relates to that study. Here we will call it ThisProject. This is where you will collect and process all the rock and paleomagnetic data for a given study, usually a publication. The project directory name should have NO SPACES and be placed on the hard drive in a place that has NO spaces in the path. Under certain Windows versions, this means you should not use your home directory, but create a directory called for example:
      D:\MyPmagProjects

    and put ThisProject there (a command line sketch follows this list).

  3. Copy example files: navigate to the PmagPy data files (developer install), or use move_data_files.py to move the data files to an accessible location (pip install). Use:
     
      move_data_files.py -h

    on the command line for help. In the data_files folder you will find a subfolder named Pmag_GUI. Copy the contents of its ThisProject directory into your own project directory folder.

  4. Start Pmag GUI either by typing pmag_gui.py on your command line, or (if you have the standalone), by double-clicking the Pmag GUI icon on your Desktop.
  5. Change directories by clicking on the “change directory” button and select your own ThisProject directory.
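
As promised in step 2, here is a sketch of creating the project directory from the Windows command line (the drive letter and names are just the examples used above):

    mkdir D:\MyPmagProjects
    mkdir D:\MyPmagProjects\ThisProject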

2.1 Converting magnetometer files to MagIC format

2.2 Optional: Calculate geographic / tilt-corrected direction

[figure]

2.3 Filling in metadata

Filling in metadata is a critical part of building a MagIC Project Directory. The data relevant to this example are arranged in five tables: specimens, samples, sites, locations, ages. To complete the data, click the button, and follow the directions in the help window in order:

2.4 Demag GUI quick start

Next, click the [Demag GUI] button. This is the main panel of the demag GUI:

[figure]

Use of the Demag GUI is described in more detail in the Demag GUI section below. Here are a few instructions that can be used as a quick start for using Demag GUI. Note: if at any point you require help with a particular aspect of Demag GUI, click [Help] [Usage and Tips] (hotkey: ctrl-h), then click on the item you wish to know more about; a pop-up box with information is available for most aspects of the GUI and the Interpretation Editor (see Additional Help for details).

Before you start your analysis, you can choose your coordinate system. From the ‘coordinate system’ drop-down menu you can choose the coordinate system in which you wish to view the data (e.g. ‘geographic’).

To analyze the data in the example, follow these steps for each specimen:

To calculate a Fisher mean for the site, use the ‘mean options’ drop-down menus: first select component=All, then select mean=Fisher.

All of the fits can be viewed and modified using the Interpretation Editor, which can be selected from the Tools menu in the top menubar (hotkey: ctrl-e).

To permanently save all of the specimen interpretations, choose from the menubar [File] [Save MagIC tables]. This saves all the interpretations in specimen/sample/site tables in your ThisProject directory. Fits that are saved this way will be loaded into demag_gui the next time it is launched. Click through the dialog boxes and fill out choices for generating MagIC results.

If you are partway through your analysis, you may want to save your place without writing out MagIC tables. To save temporary analysis data, use [File] [Save interpretations to a redo file]. This saves all interpretation data to a redo file, which is not loaded automatically when the GUI starts. It preserves aesthetic aspects of interpretations, such as color, as well as the specimen you were on when you saved, so it keeps your place in the analysis and allows you to reboot a session without a full export of MagIC tables.

Close the Demag GUI.

2.5 Thellier GUI quick start

Next, click the [Thellier GUI] button. This image shows the main panel of Thellier GUI:

[figure]

Use of the Thellier GUI is described in more detail in the Thellier GUI section below and in this PDF document: Thellier GUI manual. Here are a few instructions that can be used as a quick start for using Thellier GUI.

You can customize which selection criteria you need under Preferences => Specimen paleointensity statistics (from SPD list). This uses the Paterson et al. (2014) definitions of paleointensity statistics. Next, select bounds for the statistics under [Analysis] => [Acceptance criteria] => [Change acceptance criteria].

The default of the program is to calculate sample means. To change it to a site-level mean, choose from the menubar: [Analysis] [Acceptance criteria] [Change acceptance criteria]. Find the ‘average by sample/site’ drop-down menu in the third row and change it to [site]. Click OK. The site mean will appear in the sample/site results box (top right).

Then, to analyze the data, follow these steps for each specimen:

After all specimen interpretations are saved in memory, choose from the menubar [File] [Save MagIC pmag tables]. This saves all the interpretations in MagIC formatted pmag_* files in the MagIC Project Directory. Close the Thellier GUI.

2.6 Upload to the database

To create a MagIC-format file for upload, first click on the green Create MagIC txt file for upload button on the main page of Pmag GUI. A file will be created in your ThisProject directory. Now go to the MagIC interface. Click on the ‘Upload Tool’ button and upload your file by dragging and dropping it onto the ‘Drag and drop files here to upload’ window. Congratulations! Your data are now in the database as a Private Contribution. However, they are not yet activated and cannot be until they are at least accepted for publication. After you have a suitable reference, you can activate your contribution. Once you activate an uploaded dataset (only for published papers), it will be publicly available.

2.7 Downloading data from MagIC

Data can be downloaded from the MagIC database and examined with PmagPy tools. The MagIC search interface provides a rich variety of search filters available by clicking on the ‘Filter the MagIC Search Results’ text box. To locate data from a particular reference, simply substitute the digital object identifier (DOI) in your browser window:

http://earthref.org/MAGIC/doi/10.1029/2003GC000661

The above DOI will find the data for the paper by Tauxe et al. (2004). [This may fail in Safari; if so, use an alternative browser like Firefox or Chrome.] To download the data, simply click on the file icon labeled “Download”. This will save a file to your downloads folder. To unpack this file after downloading it from the database, open Pmag GUI and click “unpack downloaded txt file”.
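
If you prefer the command line, the download_magic.py program (described in the PmagPy software chapter) unpacks the same file. A sketch, assuming the conventional PmagPy -f switch for the input file (check download_magic.py -h for the exact syntax), where downloaded_file.txt stands for whatever name the download was saved under:

    download_magic.py -f downloaded_file.txt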

2.8 Preparing for MagIC

2.8.1 Field and sampling information

There is an astounding number of different ways that paleomagnetists document data in the field and in the lab. This variety is met with a large number of method codes that describe sampling and orientation procedures (see https://earthref.org/MagIC/method-codes for a complete description). The MagIC database expects sample orientations to be the azimuth and plunge of the fiducial arrow used for measurement (see [Essentials, Chapter 9]) and the orientation of the bedding to be dip direction and downward dip, so no matter what your own preference is, it must be translated into the standard MagIC convention for use with the PmagPy programs and with Pmag GUI.

Pmag GUI supports two different ways of getting orientation and other sampling related information into a MagIC usable format. The first is to click step 2 on the GUI front panel and fill in the data from within the GUI. That will work for many applications, but it may be desirable to fill in the spreadsheet separately from the GUI using a tab-delimited file (orient.txt format). Clicking step 2 on the GUI front panel creates a file named demag_orient.txt which has all of your sample names in it. Each orient.txt file should have all the information for a single location sensu MagIC.

[figure]

The next row has the names of the columns. The required columns are: sample_name, mag_azimuth, field_dip, date, lat, long, sample_lithology, sample_type, and sample_class. There are a number of optional columns: [date, shadow_angle, hhmm], stratigraphic_height, [bedding_dip_direction, bedding_dip], [image_name, image_look, image_photographer], participants, method_codes, site_name, site_description, and GPS_Az. Column names in brackets must be supplied together, and the data for stratigraphic_height are in meters. Also note that if these are unoriented samples, just set mag_azimuth and field_dip to 0.

It is handy to document the lithology, type and material classification information required by MagIC. These are all controlled vocabularies listed at http://earthref.org/MAGIC/shortlists.htm. For archaeological materials, set the lithology to “Not Specified”.

Put stratigraphic height, sun compass, and differential GPS orientation information under the appropriate column headings. You can also flag a particular sample orientation as suspect by adding a column ‘sample_flag’ and setting it to either ‘g’ for good or ‘b’ for bad. Other options include documenting digital field photograph names and who was involved with the sampling.
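
To make the layout concrete, here is a sketch of the first few lines of a minimal orient.txt (tab delimited, shown here with extra spacing for readability; the sample values are invented, and the first row is assumed to hold the location information, since the column names come in the next row as described above):

    ThisLocation
    sample_name  mag_azimuth  field_dip  date     lat   long    sample_lithology  sample_type  sample_class
    TG001a       285          47         5/10/02  33.0  -117.0  Basalt            Lava Flow    Igneous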

For Sun Compass measurements, supply the shadow_angle, date and time. The date must be in mm/dd/yy format. If you enter the time in local time, be sure you know the offset to Universal Time as you will have to supply that when you import the file. Also, only put data from one time zone in a single file. The shadow angle should follow the convention shown in this figure (from Tauxe et al., 2010):

[figure]

Supported sample orientation schemes:

There are options for different orientation conventions (drill direction with the Pomeroy orientation device [drill azimuth and hade] is the default), different naming conventions, and a choice of whether to automatically calculate the IGRF value for magnetic declination correction, supply your own, or ignore the correction. The program generates er_samples.txt and er_sites.txt files. Be warned that existing files with these names will be overwritten.

All images, for example outcrop photos, are supplied as a separate zip file. image_name is the name of the picture you will import, image_look is the “look direction” and image_photographer is the person who took the picture. This information will be put in a file named er_images.txt and will ultimately be read into the er_image table in the console where additional information must be entered (keywords, etc.).

Often, paleomagnetists note when a sample orientation is suspect in the field. To indicate that a particular sample may have an uncertainty in its orientation that is greater than about 5°, enter SO-GT5 in the method_codes column, along with any other special codes pertaining to a particular sample from the method codes table. Other general method codes can be entered later. Note that unlike date and sample_class, the method codes entered in orient.txt pertain only to the sample on the same line.

Samples are oriented in the field with a “field arrow” and measured in the laboratory with a “lab arrow”. The lab arrow is the positive X direction of the right handed coordinate system of the specimen measurements. The lab and field arrows may not be the same. In the MagIC database, we require the orientation (azimuth and plunge) of the X direction of the measurements (lab arrow). Here are some popular conventions that convert the field arrow azimuth (mag_azimuth in the orient.txt file) and dip (field_dip in orient.txt) to the azimuth and plunge of the laboratory arrow (sample_azimuth and sample_dip in er_samples.txt). The two angles, mag_azimuth and field_dip, are explained below.

[1] Standard Pomeroy convention of azimuth and hade (degrees from vertical down) of the drill direction (field arrow). sample_azimuth = mag_azimuth; sample_dip = -field_dip.

[figure]

[2] Field arrow is the strike of the plane orthogonal to the drill direction; field dip is the hade of the drill direction. Lab arrow azimuth = mag_azimuth - 90; lab arrow dip = -field_dip.

[figure]

[3] Lab arrow is the same as the drill direction; hade was measured in the field. Lab arrow azimuth = mag_azimuth; Lab arrow dip = 90-field_dip.

[figure]

[4] Lab arrow orientation same as mag_azimuth and field_dip.

[figure]

[5] Lab arrow azimuth is mag_azimuth and lab arrow dip is the field_dip-90

[figure]

[6] Lab arrow azimuth is mag_azimuth-90, Lab arrow dip is 90-field_dip, i.e., the field arrow was strike and dip of orthogonal face:

[figure]

Structural correction conventions:

Because of the ambiguity of strike and dip, the MagIC database uses dip direction and dip, where dip is positive and ranges from 0° to 180°. Dips > 90° indicate overturned beds.

Supported sample naming schemes:

            [1] XXXXY: where XXXX is an arbitrary length site designation and Y  
                is the single character sample designation.  e.g., TG001a is the  
                first sample from site TG001.    [default]  
            [2] XXXX-YY: YY sample from site XXXX (XXXX, YY of arbitrary length)  
            [3] XXXX.YY: YY sample from site XXXX (XXXX, YY of arbitrary length)  
            [4-Z] XXXX[YYY]:  YYY is sample designation with Z characters from site XXXX  
            [5] site name = sample name  
            [6] site name entered in site_name column in the orient.txt format input file  
            [7-Z] [XXX]YYY:  XXX is site designation with Z characters from samples  XXXYYY

When you are finished with editing the orient.txt file, return to step 2 on the GUI front panel.

Data files downloaded from the IODP (LIMS) database

There are two types of files that help in plotting of IODP paleomagnetic data sets: the core summaries with depth to core top information and the sample information that contains lists of samples taken. Visiting the IODP science query website at http://web.iodp.tamu.edu/WTR/html/sci-data.html allows you to select ‘SRM - Remanence of magnetization’ under the Analysis drop-down menu. By picking the expedition, site, hole, etc., you can download a .csv (comma separated values) file for the expedition data. (Be aware that this is the rawest form of the data, including disturbed intervals, bad measurements, core ends, etc., and may not be exactly what ended up getting published!) First click on the “Show Report” button, then “Expand Table”, then “Get File”:

[figure]

This can take a very long time, so get yourself a cup of tea.

You can also (while you’re at it) click on the ’Summaries’ tab and download the coring summaries:

[figure]

Place both of the downloaded files in your MyFiles directory.

2.8.2 Supported Rock Magnetometer files

The MagIC database is designed to accept data from a wide variety of paleomagnetic and rock magnetic experiments. Because of this, the magic_measurements table is complicated. Each measurement only makes sense in the context of what happened to the specimen before measurement and under what conditions the measurement was made (temperature, frequency, applied field, specimen orientation, etc.). Also, there are many different kinds of instruments in common use, including rock magnetometers, susceptibility meters, Curie balances, vibrating sample and alternating gradient force magnetometers, and so on. We have made an effort to write translation programs for the most popular instrument and file formats and continue to add new supported formats as the opportunity arises. Here we describe the various supported data types and tell you how to prepare your files for importing. In general, all files for importing should be placed in the MyFiles directory or in subdirectories therein as needed. If you don’t see your data type in this list, please send an example file and a request to: ltauxe@ucsd.edu and we’ll get it in there for you.

The supported file formats are:

Rock Magnetometer Files:

Anisotropy of Magnetic Susceptibility files:

Hysteresis file formats

Pmag GUI will import hysteresis data from room temperature Micromag alternating gradient magnetometers (AGM) in several different ways. You can import either hysteresis loops or backfield curves, or you can import whole directories of the same. In the latter case, the file endings must be either .agm (.AGM) or .irm (.IRM) and the first part of the file name must be the specimen name. See the documentation for agm_magic.py for examples.

Now that you’ve collected together all the files you need, we can start importing them into the MagIC directory with Step 1 in Pmag GUI.

Chapter 3
MagIC GUI Tutorial

MagIC GUI is a Graphical User Interface written by Lori Jonestrask. It is meant for efficiently compiling a contribution for uploading to the MagIC database. MagIC GUI is specifically designed for making a contribution without measurement data; if you ARE including measurement data, we recommend that you use Pmag GUI instead. MagIC GUI allows you to add data at the location, site, sample, and specimen levels. You can also add results and ages. The program uses an Excel-like grid interface for entering and editing data. Useful features include: pop-up menus with controlled vocabularies, multi-cell pasting from external spreadsheets, and built-in validations for MagIC database requirements. Note: magic_gui.py uses the current MagIC data model, 3.0. For working with legacy data in the 2.5 format, you should instead use magic_gui2.py. The tutorial below was written specifically for the older magic_gui2.py but will soon be updated with illustrations and instructions from magic_gui.py. The two programs do work almost identically, so the tutorial should still point you in the right direction even if you are working with 3.0 data.

3.1 Starting with MagIC GUI

Start up MagIC GUI. You’ll do this by clicking on the icon (if you downloaded the standalone version) or entering ‘magic_gui.py’ on your command line (if you downloaded the full PmagPy installation). If you are using Anaconda Python, you will type ‘magic_gui_anaconda’ on your command line instead.

When you first start magic_gui.py, you will change directories into a ‘Project Directory’. For each study, create a directory with a name that relates to that study. Here we will call it MyProject. This is where you will collect and process all the rock and paleomagnetic data for a given study, usually a publication. The project directory name should have NO SPACES and be placed on the hard drive in a place that has NO spaces in the path. Under certain Windows versions, this means you should not use your home directory, but create a directory called, for example, D:\MyPmagProjects and put MyProject there.

[figure]

3.2 MagIC GUI example dataset

Now we’ll walk through a simple data input process, with fake data.

Chapter 4
Survival computer skills

The ‘Py’ part of ‘PmagPy’ stands for Python, the language in which all the code is written. It is not essential, but it is helpful, to understand a bit about computer operating systems and the Python language when using PmagPy because no one should be using programs as black boxes without understanding what they are doing. As all the programs are open source, you have the opportunity to look into them. If you understand a bit about how computers work yourself, you will be able to follow along what the programs are doing and even modify them to work better for you. In this chapter, you will find a brief introduction to the computer skills necessary for using the programs properly. We have tried to make this tutorial operating system independent. All the examples should work equally well on Mac OS, Windows and Unix-type operating systems. For a more complete explanation of the marvelous world of UNIX, refer to the website at http://www.tutorialspoint.com/unix/unix-quick-guide.htm. For handy tricks in DOS, try this link: http://www.c3scripts.com/tutorials/msdos. For an introduction to programming in Python, see the Python Programming Chapter. For now, we are interested in having the skills to find a command line and navigate the file system in order to get started with PmagPy.

4.1 Finding your command line

If you are not using a Unix-like computer (*NIX), you may never have encountered a command line. Using any of the command line programs requires accessing the command line. If you are using the MacOS operating system, look for the Terminal application in the Utilities folder within the Applications folder. When the Terminal application is launched, you will get a terminal window. The Unix (and MacOS) Bash shell has a $ sign as a prompt. Other shells have other command line prompts, such as the antiquated ‘C-shell’ used by Lisa Tauxe (don’t ask) which has a % prompt which is used in the examples here.

[figure]

Under the Windows operating system, you can find your command line by searching for the “cmd” application. You’ll use this if you are using the full pip installation of PmagPy. If you are using the standalone install, you will create a shortcut to the PmagPy command window on your desktop. Double clicking on that will give you a window in which you can type PmagPy commands. You will use the PmagPy prompt instead of the Windows command line from there on.

[figure]

Note that the location of this program varies on different computers, so you may have to hunt around a little to find yours. Also, the actual “prompt” will vary for different machines.

4.2 File systems

When you first open a terminal window, you are in your “home” directory. Fundamental to all operating systems is the concept of directories and files. On window-based operating systems (MacOS or Windows), directories are depicted as “folders” and moving about is accomplished by clicking on the different icons. In the world of terminal windows, the directories have names and are arranged in a hierarchical sequence with the top directory being the “root” directory, known as “/” (or C:\ in Windows), and the file system looks something like this:

[figure]

Within the root directory, there are subdirectories (e.g. Applications and Users in bold face). In any directory, there can also be “files” (e.g. dir_cart_example.dat in italics). To refer to directories, the operating system relies on what is called a “pathname”. Every object has an “absolute” pathname which is valid from anywhere on the computer. The absolute pathname in *NIX always begins from the root directory / and in DOS (the operating system working in the Windows command line window), it is C:\.

The absolute pathname to the home directory lisa in the figure is /Users/lisa. Similarly, the absolute pathname to the directory containing PmagPy scripts would be /Users/lisa/PmagPy. There is also a “relative” pathname, which is in reference to the current directory (the one you are ‘sitting’ in). If user “lisa” is sitting in her home directory, the relative pathname for the file dir_cart_example.dat in the directory data_files would be data_files/dir_cart/dir_cart_example.dat. When using relative pathnames, it is useful to remember that ./ refers to the current directory and ../ refers to the directory “above”. Also, lisa’s home directory can be referred to as ~lisa, or, if you are logged in as lisa yourself, just ~.
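
To make these pathnames concrete, here is a sketch using the example names from this section (ls, introduced in the next section, lists a directory’s contents):

    % ls /Users/lisa/data_files/dir_cart/dir_cart_example.dat   # absolute pathname
    % ls data_files/dir_cart/dir_cart_example.dat               # relative pathname, from /Users/lisa
    % ls ..                                                     # the directory above the current one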

4.3 Moving around in the file system

Now that you have found your command line and are comfortable in your home directory, you can view the contents of your directory with the Unix command ls or the DOS command dir. You can make a new directory with the command

% mkdir NEW_DIRECTORY_NAME

(This command works in both Unix and DOS environments.) You can move into your new directory with the command

% cd NEW_DIRECTORY_NAME

To move back up into the home directory, just type cd .., remembering that .. refers to the directory above. Also, cd by itself will transport you home from wherever you are (there’s no place like home....). You can also change to any arbitrary directory by specifying the full path of the destination directory.
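
For example, starting from lisa’s home directory from the previous section:

    % cd data_files            # move down into the data_files directory
    % cd ..                    # move back up one level
    % cd                       # return home from wherever you are
    % cd /Users/lisa/PmagPy    # jump anywhere by absolute pathname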

4.4 Redirecting input and output

Programs that operate at the command line level print output to the screen and read input from the keyboard. This is known as “standard input and output” or “standard I/O”. One of the nicest things about working at the command line level is the ability to redirect input and output. For example, instead of typing input to a program with the keyboard, it can be read from a file using the symbol <. Output can either be printed to the screen (standard output), redirected into a file using the symbol >, appended to the end of a file with >> or used as input to another program with the pipe operator (|).
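
As an illustration, here is a sketch using the dir_cart.py program and example file mentioned elsewhere in this cookbook (check dir_cart.py -h for its exact switches):

    % dir_cart.py < dir_cart_example.dat > cart.out     # read input from a file, write output to a new file
    % dir_cart.py < dir_cart_example.dat >> cart.out    # append the output to the end of cart.out
    % dir_cart.py < dir_cart_example.dat | more         # pipe the output into another program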

4.5 Text editors

There are many ways of editing text and the subject is beyond the scope of this documentation. Text editing is a blessing and a curse. You either love it or hate it; in the beginning, if you are used to programs like Word, you will certainly hate it. (And if you are used to a decent text editor, you will hate Word!) But you can’t use Word because the output is in a weird format that no scripting language reads easily. So you have to use a text editor that produces a plain (ascii) file, like Notepad, TextWrangler, Sublime Text or Atom. TextWrangler is free software available for Macs, Notepad comes standard with the Windows operating system, and the Atom text editor is a free cross-platform option with lots of nice packages available that extend its functionality. Enthought Python’s Canopy programming environment comes with its own text editor.

Chapter 5
The PmagPy software package

The PmagPy software package is a comprehensive set of programs for paleomagnetists and rock magnetists. For installation, follow the instructions in the Installing PmagPy Chapter in this cookbook.

When you type something on your command line, your operating system looks for programs of the same name in special places. These are special “paths” so the directory with your Python scripts has to “be in your path”. To inform the operating system of the new directory, you need to “set your path”. It should have been set when you installed PmagPy.
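
To see what is currently on your path, you can print it; in a Unix-like shell:

    % echo $PATH

(In a Windows command window, echo %PATH% does the same job.)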

5.1 Graphical data analysis programs

5.1.1 demag_gui.py

The Demag GUI (demag_gui.py) program enables the display and analysis of paleomagnetic demagnetization data. The software will display specimen level data within the chosen directory as a Zijderveld plot, equal area plot and intensity plot. Interpretations can be made to the data as least-squares line or plane fits. Mean directions can be calculated and displayed for these interpretations. These interpretations can be exported as MagIC pmag files from the program.

Launching

The best way to launch the Demag GUI application is through Pmag GUI. If you have installed PmagPy using pip (or if PmagPy has been added to your PATH), you can type pmag_gui.py at the command line to launch it. Anaconda users will instead type ‘pmag_gui_anaconda’.

Alternatively, you can navigate to the PmagPy home directory and type python ./pmag_gui.py. Within Pmag GUI, data can be converted from the format of a particular lab into MagIC format so that it can be displayed and analyzed within Demag GUI. The program can be started by clicking on the Demag GUI button in Pmag GUI.

If you want to launch Demag GUI directly (assuming PmagPy was properly installed using pip or has been added to your PATH), you can simply type demag_gui.py at the command line. Anaconda users will type ‘demag_gui_anaconda’ instead. Note: on OSX with wxpython 2.9, it is recommended to launch through Pmag GUI, as the drop-down boxes behave better when Demag GUI is launched this way.

[figure]

Demag GUI can also be launched through the command line by navigating to the directory containing demag_gui.py and running it with:

% python ./demag_gui.py

Adding and Editing Specimen Interpretations:

A new fit can be added by clicking the ‘add fit’ button or, if no fit has yet been created for the current specimen, by double clicking on two measurements in the list of measurements to the left or by double clicking on the data points on the Zijderveld plot. It is also possible to add interpretations en masse using the Interpretation Editor tool described below. Additionally, you can select the fit you would like to edit or view using the drop-down menu under the add fit button. Once you have selected a fit, its end points will turn to diamond shapes on all plots to distinguish them from the other data points.

[figure]

Once the desired fit is selected, its bounds can be edited using the drop-down boxes under the bounds header.

[figure]

Another way to select bounds is to double-click the list of measurement steps on the left. The steps included in the currently selected interpretation are highlighted in blue on the measurement list, and the measurements marked “bad” are highlighted in red. In the case of duplicate measurements, the first good measurement with the same treatment is used as a bound. All points between the selected bounds that are flagged good (i.e. not flagged bad and marked in red), including duplicate measurements, will be included in the interpreted fit.

[figure]

Finally, you can select the bounds of an interpretation directly off the Zijderveld plot by hovering your mouse over a measurement (should change to a hand shape) and double clicking.

When first created, the fit will be given a generic name such as Fit 1. The name of the fit can be changed from the default by typing into the drop-down box containing the fit name and then pressing enter. The default fit type is a least-squares line. You can choose different fits, such as a line anchored to the origin or a plane, using the drop-down menu under the label ‘interpretation type’. Plane fits can be plotted as poles, full planes, partial planes, best fit vectors, or best fit vectors and full plane (note: plane poles are displayed as squares and best fit vectors as sideways triangles on the high level mean plot). This display option can be changed in the second drop-down menu under interpretation type.

[figure]

The properties of the currently selected fit to the data can be seen in the upper center of the GUI in a box labeled ‘Interpretation Directions and Statistics’.

[figure]

Deleting Specimen Interpretations

If you would like to delete a single interpretation, select the one you wish to delete from the interpretation drop-down menu and click delete. If you wish to clear all interpretations you may go into the interpretation editor located under the tools menu, select the fits you wish to delete and click the “delete selected” button.

Changing Specimen, Coordinate System, and Zijderveld Direction:

You can switch the current specimen by clicking the next or previous button under the specimen box in the side bar (hotkeys: ctrl-right and ctrl-left, respectively). You can also select a specimen from the drop-down menu, or type the name of the specimen directly into the specimen box and hit enter to go directly to it. Finally, you can double click on any of the points on the higher level means plot to switch directly to that specimen and interpretation.

The choice between coordinate systems (i.e. specimen, geographic or tilt-corrected) is available on the left above the list of steps. The data list and the plots will update to reflect the chosen coordinate system.

You can alter the X axis of the Zijderveld plot using the Zijderveld plot interactions box to set X=North, East, or NRM dec.

[figure]

Flagging Bad Measurement Data

Due to flux jumps or other such errors, individual measurements sometimes need to be excluded from interpretation. Such measurements can be flagged as “bad” by right clicking them within the measurement list, after which the measurement will be highlighted in red. Additionally, you can double right click on the point you want to mark bad in the Zijderveld plot to toggle it. The measurement_flag in the magic_measurements file will be changed from “g” to “b”, and when a measurement is marked as bad the step will not be included in fits made to the data. Any measurement marked as bad will be colored red in the step list and will be shown as an empty rather than filled circle on the Zijderveld, equal area and M/M_0 plots. To change a bad measurement back to good, right click on it again. Upon doing so, the red highlighting will go away, the data will be shown colored in within the plots, and any fit that spans that data point will be recalculated to include it.

Acceptance criteria can be set using the menu option [Analysis] [Acceptance Criteria] [Change Acceptance Criteria]. These criteria will be written to a pmag_criteria.txt table and can then be used to exclude interpretations that fail checks against them during export.

Plot Interface

The four plots that take up the majority of the center of the GUI are where data and their interpretations are displayed. All plots are initially set to zoom mode, signified by a cross-shaped cursor when you mouse over them. To zoom, simply click and drag the rectangle to the desired area. You can switch to pan mode by right clicking on any one of the graphs; clicking and dragging will then pan around the plot. Finally, to return to the original zoom level and position, simply click the middle mouse button. Note: in the absence of a middle mouse button, pressing both right and left mouse buttons at the same time works on most laptops; on Macbooks, clicking with two fingers should work; and if using Apple’s Magic Mouse, we recommend you download the MagicPrefs program, which allows you to configure your mouse however you prefer.

On the Zijderveld plot you have the additional option to select the current interpretation’s bounds by double clicking on a measurement point. You can also double right click on a measurement point in the Zijderveld plot to mark it bad.

On the equal area plots, both the specimen and high level means, you can double click on any interpretation to switch to that specimen and interpretation immediately.

Saving Specimen Interpretations

Once you have picked out your interpretations, you can save the session data in two different ways: (1) as a .redo file which will allow you to have the fits preserved to be view again with Demag GUI or (2) as MagIC pmag_* tables to be uploaded to the MagIC database or otherwise processed. In addition, you may save image files of the plots.

The .redo File: You can create this file type using the menu option [File] [Save current interpretations to a redo file], by clicking the save button next to add fit, or by using the hotkey ctrl-s. The advantage of the .redo file type is that it is designed to save your place when analysing a large dataset. Loading a redo file will reload all interpretations previously created, along with any special colors assigned to them, and take you to the specimen you saved the redo file on, allowing you to pick up where you left off. Note: this file type does NOT load previous interpretations on start up; you must go to the menu option [File] [Import previous interpretations from a redo file] (hotkey: ctrl-r) to restore your previous session.

The Pmag Tables: By going to the menu [File] [Save MagIC pmag tables], you can export your interpretations made in Demag GUI to the MagIC pmag tables, which can then be used by other MagIC programs or uploaded to the MagIC database. You can export any or all of the three coordinate systems upon selecting this option, and you may choose to save pmag_samples, pmag_sites, and pmag_results tables in addition to the pmag_specimens table that is output. If you choose to output additional information, you will be prompted by a pop-up window for it. Note: this save format loads on start up of the GUI, immediately restoring your interpretations. Selecting this option will overwrite your demag_gui.redo file in the working directory.

Images of Plots: Select the menu option [File] [Save plot] [Save all plots] to save all plots, or you can save any of the plots individually. If you zoom or pan any of the plots, the shifted image will be saved rather than the originally plotted image, although the plot will redraw and reset to the original image in the GUI.

Flag Specimen Interpretations Good or Bad

You can flag the current specimen interpretation (marked by large diamonds on all plots) good or bad by using the menu option [Analysis] [Flag Interpretations]. The list of interpretations in the Interpretation Editor tool of Demag GUI can also be used to toggle interpretations good or bad, in the same way that measurements can be marked good or bad in the measurement list: by right clicking on the entry you want toggled. This will change the shape of the interpretation to a small diamond on all plots, remove it from use in any higher level means, and mark the specimen_flag entry in the pmag tables ‘b’ instead of ‘g’ to signify this.

Sample Orientation

You can check sample orientation by using the menu option [Analysis] [Sample Orientation] [Check Sample Orientations] (hotkey: ctrl-o). This function will set your mean options to Fisher of all components at the current site level and display the wrong arrow (up triangle), wrong compass (down triangle), and rotated sample for declination increments of 5 degrees (dotted circle). This allows you to check if the sample orientation is correct and thus can be used in analysis. If you determine the current sample orientation to be bad, you can mark it as such using the menu option [Analysis] [Sample Orientation] [Mark Sample Bad] (hotkey: ctrl-.). This will change the sample_orientation_flag in the er_samples file to ‘b’ rather than ‘g’ and will prevent you from marking the specimen interpretations good in that sample, so you do not use the improperly oriented data in your final results. If you later realize this was a mistake, you can mark the sample orientation good again using [Analysis] [Sample Orientation] [Mark Sample Good] (hotkey: ctrl-,). Finally, to turn off the check sample orientations display, simply select the [Check Sample Orientations] option again and it will be removed. Note: the current sample is specified as the sample of the current specimen.

[figure]

High Level Means Plot and Statistics

The set of drop-down boxes to the right of the interpretation data determine what level you want to analyse in the high level means plot; they are grouped into the Display Level and Mean Options boxes.

[figure]

The Display Level boxes consist of the upper drop-down menu, which allows you to select the level at which interpretations are displayed (all interpretations in the current sample, site, location, or study), and the lower drop-down menu, which lets you select what the current sample, site, location, or study is.

The top drop-down menu in the Mean Options box lets you choose what kind of mean you would like to take of the specimen components currently displayed. The lower drop-down menu lets you pick which specimen components to display, allowing you to display All components, No components, or any single component.

The mean statistics for the chosen high level mean are displayed in the lower right of the GUI and can be cycled through using the arrow buttons next to the statistics boxes in the case of multiple high level means.

It is possible to toggle on and off displaying any one of the means in the high level plot which can be useful in the case of a cluttered graph of all components. This can be done by going to the menu option [Analysis] [Toggle Mean Display] and selecting the name of the component you would like to toggle.

All interpretations marked bad will appear as small diamonds, regardless of type, on the high level mean plot. The figure below gives examples for a number of plane display options: bad interpretations (the symbols off to the side), best fit vectors to the means (sideways triangles), plane poles (squares), and the planes themselves.

[figure]

Interpretation Editor

In order to more easily view and edit specimen interpretation data, there is a specimen Interpretation Editor, which can be launched using the menu option [Tools] [Interpretation Editor] (hotkey: ctrl-e). Note: if you would like more help than is provided here, the Interpretation Editor has in-context help, same as the main GUI; see Additional Help for details.

List of Interpretations: This panel contains a list which details the fits made to the data and their parameters, from which you can select which interpretation to view by double clicking it. In the list, the currently selected interpretation is highlighted in blue, as shown in the image below. You can mark interpretations as bad, which removes them from any Fisher means or other high level means, by right clicking on their entry in the list. All interpretations marked bad are colored red in the list and marked as a small diamond on the plot. The specimen entry associated with the fit will be given a bad (‘b’) flag within the pmag_specimens table. You can search through interpretations using the search bar above the list. Finally, interpretations can be highlighted by clicking on the list while holding the shift or ctrl/command key, to select multiple interpretations.

Buttons and Boxes: Highlighting entries allows you to delete or alter the characteristics of multiple interpretations at once without having to select each one in turn. This mass alteration is done by using the Name/Color/Bounds boxes to input the changes and then clicking the “apply changes to highlighted fits” button. You can delete highlighted fits using the “delete highlighted fits” button. The “add fit to highlighted specimens” button in the Interpretation Editor adds a fit to all highlighted specimens in the list on the left. You can use the “add new fit to all specimens” button as a convenient option to add a new interpretation, with all the attributes described in the Name/Color/Bounds boxes, to every specimen with measurement data. This is a useful method for quickly analysing a new dataset by checking for components between common unblocking steps, e.g., giving every specimen a general magnetite interpretation inferring about where that should be (bounds=300C to 580C, name=MAG, color=violet).

Additional High Level Means Options: The Interpretation Editor also allows displaying site and sample means as the primary points on the high level mean plot by changing the bottom left display options drop-down box. It does not yet allow taking Fisher means of sample means or site means, so the mean type box will be forced to read “None” if this option is changed from specimens.

[figure]

VGP Viewer

Another tool offered by Demag GUI is the VGP (Virtual Geomagnetic Pole) Viewer, which allows you to view your VGPs before exporting to MagIC tables. This tool can be opened using the menu option [Tools] [View VGPs] (hotkey: ctrl-shift-v). The VGP Viewer requires latitude and longitude data for sites and locations in order to calculate VGPs from specimen interpretations; if these data are not already contained in the MagIC tables imported when the GUI started, you will be asked for them during calculation, so have them ready. The VGP Viewer allows you to select viewing sample, site, or location VGPs from the drop-down menu at the top. Plot interactions are the same here as in the main GUI: plots can be zoomed, panned, and points selected in the same manner. The list on the left shows all the data for the currently displayed VGPs.

[figure]

Current Warnings Box

The white box in the far top right of the GUI provides relevant warnings about aspects of the current specimen and interpretation, such as missing or duplicate data, and can be useful in debugging and analysis.

Additional Help

Finally, if you need more help working with Demag GUI, it offers in-context assistance with the menu option [Help] [Usage and Tips] (hotkey: ctrl-h), which will change your cursor and then let you click on whichever aspect of the GUI you want help with. Once you have clicked, a yellow pop-up box with information should appear; not all features have information, but most do.

[figure]

5.1.2 thellier_gui.py

[Essentials Chapter 10] and [MagIC] [Thellier_GUI_full_manual.pdf]

The program Thellier GUI (thellier_gui.py) combines functions from thellier_magic.py and new tools described by Shaar and Tauxe (2013) in a user-friendly graphical user interface (GUI).

As with Demag GUI, Thellier GUI can be called from the command line or from within Pmag GUI.

To launch Thellier GUI independently, find your command line and enter:

thellier_gui.py

If using Anaconda Python, you will instead use:

thellier_gui_anaconda

Once open, Thellier GUI loads files already prepared in a ThisProject directory, and the interpretations from Thellier GUI are part of the workflow of Pmag GUI. This section is a brief introduction to using Thellier GUI as a stand-alone application. Much more information is available in the Thellier GUI full manual.

A complete list of the definitions for the paleointensity statistics used by Thellier GUI is available as a supplement to the article by Paterson et al. (2014) and can be downloaded here:

http://onlinelibrary.wiley.com/store/10.1002/2013GC005135/asset/supinfo/ggge20412-sup-0001-suppinfoCORRECTED.pdf?v=1&s=e1c3ab0a86c942d1039f6d2e15496aa172dc86ec

After launching the program, a “choose project directory” dialog window will appear. Your ThisProject directory should include a file named magic_measurements.txt (created, for example, by Pmag GUI). If a file named rmag_anisotropy.txt is in the project directory, the program also reads in the anisotropy data. Reading and processing the measurement files may take several seconds, depending on the number of specimens.

Reading and compiling measurements data

When your ThisProject project directory is selected, the program reads all the measurement data, then checks, processes, and sorts them. If non-linear-TRM (NLT) data exist in magic_measurements.txt, the program tries to process them using Equations (1)-(3) in Shaar et al. (2010). The program reads magic_measurements.txt and processes the measurements for presentation as Arai and Zijderveld plots. We recommend that you check all the warnings and errors in Thellier_GUI.log before starting to interpret the data. For details about warnings and error messages during these steps, consult the tutorial document in the thellier_GUI folder in data_files. Also, consult the Preferences to change certain plotting options.

Main panel

This figure shows a snapshot of the main panel.

PIC

The top field in the panel includes the following buttons/controls (from left to right):

The center of the main panel has these elements:

The bottom of the main panel includes paleointensity statistics. The first line has the threshold values (empty if N/A). The second line is the specimen’s statistics. For details, see Appendix A1 in Shaar and Tauxe (2013).

5.1.3 MagIC GUI

This application provides a simple interface for transforming data from a paleomagnetic or rock magnetic study into the format necessary for contribution to the MagIC database.

5.2 Command line PmagPy programs

PmagPy scripts are invoked from the command line. The Python scripts must be placed in a directory that is in your “path”. To check whether this has been done properly, type dir_cart.py -h on the command line; you should get a help message. If you get a “command not found” message, you need to fix your path; check the “installing python” page on the software website. Another possible cause of failure is that the Python scripts are somehow no longer executable. To fix this, change directories into the directory with the scripts and type the command: chmod a+x *.py

For people who hate command line programs and prefer graphical user interfaces with menus, some of the key programs for interpreting paleomagnetic and rock magnetic data are packaged together in a program called Pmag GUI, which can be invoked by typing pmag_gui.py on the command line. The Pmag GUI program generates the desired commands for you behind the scenes, so you do not have to learn Unix or how to use the command line (except to call Pmag GUI itself). Nonetheless, some understanding of what is actually happening is helpful, because Pmag GUI is more limited than the full range of PmagPy programs. So, here is a brief introduction to how the PmagPy programs work.

All PmagPy programs print a help message if you type program_name.py -h on the command line. Many have an “interactive” option triggered by typing program_name.py -i. Many also allow reading from standard input and writing to standard output. The help message will explain how each particular program functions. Some command line options are common to many programs:

-h prints the help message  
-i invokes interactive entry of options  
-f FILE specifies an input file name  
-F FILE specifies an output file name

The PmagPy scripts call on two special modules, the pmag and the pmagplotlib modules. These contain most of the calculations and plotting functions.

The source code and the help messages for all programs in the PmagPy package are also available online at:

https://github.com/PmagPy/PmagPy

A link to these source code/help menu files is provided for each program listed below with the format [program_name_docs].

Examples of how to use command line PmagPy programs are given below. In all examples, the '$' prompt stands for whatever command line prompt you have. The example data files referred to in the following are located in the data_files directory bundled with the PmagPy software distribution.

5.2.1 aarm_magic.py

[Essentials Chapter 13] [MagIC Database] [aarm_magic docs]

Anisotropy of anhysteretic or other remanence can be converted to a tensor and used to correct natural remanence data for the effects of anisotropy of remanence acquisition. For example, directions may be deflected from the geomagnetic field direction, or intensities may be biased by strong anisotropies in the magnetic fabric of the specimen. By imparting an anhysteretic or thermal remanence in many specific orientations, the anisotropy of remanence acquisition can be characterized and used for correction. We do this for anisotropy of anhysteretic remanence (AARM) by imparting an ARM in 9, 12 or 15 positions. Each ARM must be preceded by an AF demagnetization step. The 15 positions are shown in the k15_magic.py example.

For the 9 position scheme, aarm_magic.py assumes that the AARMs are imparted in positions 1,2,3, 6,7,8, 11,12,13. Someone (a.k.a. Josh Feinberg) has kindly made the measurements and saved them in an SIO formatted measurement file named arm_magic_example.dat in the datafile directory called aarm_magic. Note the special format of these files: the treatment column (column #2) has the position number (1,2,3,6, etc.) followed by either a “00” for the obligatory zero field baseline step or a “10” for the in-field step. These could also be ‘0’ and ‘1’.

We need to first import these into the magic_measurements format and then calculate the anisotropy tensors. These can then be plotted or used to correct paleointensity or directional data for anisotropy of remanence.

So, first use the program sio_magic.py to import the AARM data into the MagIC format. The DC field was 50 μT, the peak AC field was 180 mT, the location was ‘Bushveld’ and the lab protocol was AF and Anisotropy. The naming convention used Option # 3 (see help menu).

Then use the program aarm_magic.py to calculate the best-fit tensor and write out the MagIC tables: rmag_anisotropy and rmag_results. These files can be used to correct remanence data in a pmag_specimens format table (e.g., intensity data) for the effects of remanent anisotropy (e.g., using the program thellier_magic_redo.py).

Here is a transcript of a session that works. Note that the sio_magic.py command is all on one line (that is what the trailing backslash means).

$ sio_magic.py -f arm_magic_example.dat -loc Bushveld -LP AF:ANI -F aarm_measurements.txt \  
      -ncn 3 -ac 180 -dc 50 -1 -1  
 
results put in  aarm_measurements.txt  
 
 
$ aarm_magic.py  
specimen tensor elements stored in  ./arm_anisotropy.txt  
specimen statistics and eigenparameters stored in  ./aarm_results.txt  

5.2.2 angle.py

[Essentials Appendix A.3.4] [angle docs]

Use the program angle.py to calculate the angle (α) between two directions, D = 350.2, I = 26.8 and D = 98.6, I = 67.3.

$ angle.py -i  
Declination 1: [cntrl-D  to quit] 350.2  
Inclination 1: 26.8  
Declination 2: 98.6  
Inclination 2: 67.3  
   72.1  
Declination 1: [cntrl-D  to quit] ^D  
Good bye

[NB: PC users will get a more angry sounding exit message]
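
Under the hood, the calculation is just the arc cosine of the dot product of the two unit vectors. Here is a minimal Python sketch (assuming only numpy; the helper names are ours, not the PmagPy source) that reproduces the answer above:

import numpy as np

def pmag_angle(d1, i1, d2, i2):
    def to_cart(dec, inc):
        dec, inc = np.radians(dec), np.radians(inc)
        return np.array([np.cos(inc) * np.cos(dec),
                         np.cos(inc) * np.sin(dec),
                         np.sin(inc)])
    # the angle between two unit vectors is the arc cosine of their dot product
    c = np.clip(np.dot(to_cart(d1, i1), to_cart(d2, i2)), -1.0, 1.0)
    return np.degrees(np.arccos(c))

print(round(pmag_angle(350.2, 26.8, 98.6, 67.3), 1))  # 72.1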

You can also use this program by reading in a filename with the '-f' option or from standard input (with <). Try this out with the test file in the angle directory (angle.dat). First examine the contents of the input file using “cat” (or “type” at a DOS prompt). Then use angle.py to calculate the angles. You can also save your output in a file angle.out with the '-F' option:

 $ cat angle.dat  
    11.2    32.9      6.4   -42.9  
   11.5    63.7      10.5   -55.4  
   11.9    31.4     358.1   -71.8  
  349.6    36.2     356.3   -45.0  
   60.3    63.5      58.9   -56.6  
  351.8    37.6      55.0   -45.8  
  345.5    53.1      44.8   -26.9  
   12.2    20.1      13.6   -54.0  
  352.1    37.6       5.1   -39.9  
  341.2    53.8      25.1   -61.1  
....  
 
 $ angle.py -f angle.dat  
    75.9  
  119.1  
  103.7  
   81.4  
  120.1  
  100.9  
   95.1  
   74.1  
   78.4  
  120.1  
......  
.

5.2.3 ani_depthplot.py

[Essentials Chapter 13]; [MagIC Database] [ani_depthplot docs]

Anisotropy data can be plotted versus depth. The program ani_depthplot.py uses MagIC formatted data tables of the rmag_anisotropy.txt and er_samples.txt types. rmag_anisotropy.txt stores the tensor elements and measurement meta-data while er_samples.txt stores the depths, location and other information. Bulk susceptibility measurements can also be plotted if they are available in a magic_measurements.txt formatted file.

In this example, we will use the data from Tauxe et al. (2012) measured on samples obtained during Expedition 318 of the Integrated Ocean Drilling Program. To get the entire dataset, go to the MagIC database at http://earthref.org/MAGIC/ and find the data using the search interface. As a short-cut, you can use the “permalink”:

http://earthref.org/MAGIC/m000629dt20120607193954.

Download the text file by clicking on the icon under the red arrow:

PIC

Unpack the data using the program download_magic.py; this will sort the data into directories for the different holes. Change directories into Location_2 (which contains the data for Hole U1359A), or use the data in the ani_depthplot directory of the example data files.

$ ani_depthplot.py

or for Anaconda users:

$ ani_depthplot_anaconda

will create the plot:

PIC

5.2.4 aniso_magic.py

[Essentials Chapter 13]; [MagIC Database] [aniso_magic docs]

Samples were collected from the eastern margin of a dike oriented with a bedding pole declination of 110 and dip of 2. The data have been imported into a rmag_anisotropy formatted file named dike_anisotropy.txt.

Make a plot of the data using aniso_magic.py. Use the site parametric bootstrap option and plot out the bootstrapped eigenvectors. Draw on the trace of the dike.

These things are done in this session:

$ aniso_magic.py -f dike_anisotropy.txt -gtc 110 2 -par -v -crd g  
Doing bootstrap - be patient  
Boostrap Statistics:  
 tau_i, V_i_D, V_i_I, V_i_zeta, V_i_zeta_D, V_i_zeta_I, V_i_eta, V_i_eta_D, V_i_eta_I  
0.34040287    29.6    14.5     6.9    168.4     67.4     32.7    295.9     14.2  
0.33536589   166.3    70.5    10.3     26.1     13.0     23.9    293.3     12.1  
0.32423124   296.2    12.8     5.2    180.5     62.4     10.4     32.0     24.0  
compare with [d]irection  
 plot [g]reat circle,  change [c]oord. system, change [e]llipse calculation,  
     s[a]ve plots, [q]uit  

which produced these plots:

PIC

The specimen eigenvectors are plotted in the left-hand diagram with the usual convention that squares are the V1 directions, triangles are the V2 directions and circles are the V3 directions. All directions are plotted on the lower hemisphere. The bootstrapped eigenvectors are shown in the middle diagram. Cumulative distributions of the bootstrapped eigenvalues are shown to the right with the 95% confidence bounds plotted as vertical lines. It appears that the magma was moving in the northern and slightly up direction along the dike.

There are more options to aniso_magic.py that come in handy. In particular, one often wishes to test if a particular fabric is isotropic (the three eigenvalues cannot be distinguished), or if a particular eigenvector is parallel to some direction. For example, undisturbed sedimentary fabrics are oblate (the maximum and intermediate directions cannot be distinguished from one another, but are distinct from the minimum) and the eigenvector associated with the minimum eigenvalue is vertical. These criteria can be tested using the distributions of bootstrapped eigenvalues and eigenvectors.

The following session illustrates how this is done, using the data in the test file sed_anisotropy.txt in the aniso_magic directory.

$ aniso_magic.py -f sed_anisotropy.txt -d 3 0 90 -v -crd g  
 
Doing bootstrap - be patient  
Boostrap Statistics:  
 tau_i, V_i_D, V_i_I, V_i_zeta, V_i_zeta_D, V_i_zeta_I, V_i_eta, V_i_eta_D, V_i_eta_I  
0.33673453   359.5     7.9     9.8    156.0     81.6      3.2    269.3      3.3  
0.33546972    89.5     0.0    37.5     11.5     53.7     80.3    178.0     35.5  
0.32779574   179.6    82.1     4.1      4.7      7.6      3.2    274.6      1.2  
compare with [d]irection  
 plot [g]reat circle,  change [c]oord. system, change [e]llipse calculation,  
      s[a]ve plots, [q]uit  
 

which makes these plots:

PIC

The top three plots are as in the dike example before, showing a clear triaxial fabric (all three eigenvalues and associated eigenvectors are distinct from one another). The lower three plots show the distributions of the three components of the chosen axis, V3, their 95% confidence bounds (dashed lines) and the components of the designated direction (solid line). This direction is also shown in the equal area projection above as a red pentagon. The minimum eigenvector is not vertical in this case.

5.2.5 apwp.py

[Essentials Chapter 16] [apwp docs]

The program apwp.py calculates the paleolatitude, declination, and inclination expected from a pole latitude and longitude, based on the paper by Besse and Courtillot (2002; see Essentials Chapter 16 for a complete discussion). Use it to calculate the expected direction for 100 million year old rocks at a locality in La Jolla Cove (Latitude: 33N, Longitude: 117W). Assume that we are on the North American Plate! (Note that there is no option for the Pacific plate in apwp.py, and that La Jolla was on the North American plate until a few million years ago (6?).)

$ apwp.py -i  
 
 Welcome to paleolatitude calculator - CTRL-D to quit  
 
pick a plate: NA, SA, AF, IN, EU, AU, ANT, GL  
 
Plate  
NA  
Site latitude  
33  
 Site longitude  
-117  
 Age  
100  
Age  Paleolat.  Dec.  Inc.  Pole_lat.  Pole_Long.  
   100.0    38.8   352.4    58.1    81.5   198.3  

Note that as with many PmagPy programs, the input information can be read from a file and the output can be put in a file. For example, we put the same information into a file, apwp_example.dat and use this syntax:

$  apwp.py -f apwp_example.dat  
   100.0    38.8   352.4    58.1    81.5   198.3
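
For readers who want the arithmetic, the expected direction follows from spherical trigonometry: the angular distance p from site to pole gives the paleolatitude (90 - p), the dipole formula gives the inclination, and the azimuth from the site toward the pole gives the declination. Here is a hedged numpy sketch (not the PmagPy code itself, and using the rounded pole printed above, so the last decimal place can differ slightly from the transcript):

import numpy as np

def expected_dir(slat, slon, plat, plon):
    slat, slon, plat, plon = np.radians([slat, slon, plat, plon])
    dphi = plon - slon
    # angular distance p from site to pole (spherical law of cosines)
    cosp = (np.sin(slat) * np.sin(plat) +
            np.cos(slat) * np.cos(plat) * np.cos(dphi))
    sinp = np.sqrt(1. - cosp ** 2)
    paleolat = 90. - np.degrees(np.arccos(cosp))
    # dipole formula: tan(I) = 2 tan(paleolat) = 2 cos(p) / sin(p)
    inc = np.degrees(np.arctan2(2. * cosp, sinp))
    # declination is the azimuth from the site toward the pole
    dec = np.degrees(np.arctan2(
        np.sin(dphi) * np.cos(plat),
        np.cos(slat) * np.sin(plat) -
        np.sin(slat) * np.cos(plat) * np.cos(dphi))) % 360.
    return paleolat, dec, inc

print(expected_dir(33., -117., 81.5, 198.3))  # ~ (38.8, 352.3, 58.1)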

5.2.6 atrm_magic.py

[Essentials Chapter 13] [MagIC Database] [atrm_magic docs]

Anisotropy of thermal remanence (ATRM) is similar to anisotropy of anhysteretic remanence (AARM) and the procedure for obtaining the tensor is also similar. Therefore, the program atrm_magic.py is quite similar to aarm_magic.py. However, the SIO lab procedures for the two experiments are somewhat different. In the ATRM experiment, there is a single, zero field step at the chosen temperature which is used as a baseline. We use only six positions (as opposed to nine for AARM) because of the additional risk of alteration at each temperature step. The positions are also different:

PIC

The file atrm_magic_example.dat in the atrm_magic directory is an SIO formatted data file containing ATRM measurement data acquired at a temperature of 520C. Note the special format of these files: the treatment column (column #2) has the temperature in centigrade followed by either a “00” for the obligatory zero field baseline step or a “10” for the first position, and so on. These could also be ‘0’ and ‘1’, etc.

Use the program sio_magic.py to import the ATRM data into the MagIC format. The DC field was 40 μT. The naming convention used option # 1 (see help menu). Then use the program atrm_magic.py to calculate the best-fit tensor and write out the MagIC tables: rmag_anisotropy and rmag_results formatted files. These files can be used to correct remanence data in a pmag_specimens format table (e.g., intensity data) for the effects of remanent anisotropy (e.g., using the program thellier_magic_redo.py).

Here is an example transcript:

$  sio_magic.py -f atrm_magic_example.dat -loc Africa -LP T:ANI -F atrm_measurements.txt  
      -ncn 1  -dc 40 -1 -1  
 
results put in  atrm_measurements.txt  
 
 
$  atrm_magic.py  
 
specimen tensor elements stored in  ./trm_anisotropy.txt  
specimen statistics and eigenparameters stored in  ./atrm_results.txt

5.2.7 azdip_magic.py

[Essentials Chapter 9] and [MagIC Database] [azdip_magic docs]

Many paleomagnetists save orientation information in files in this format: Sample, Azimuth, Plunge, Strike, Dip (AZDIP format), where the azimuth and plunge are the declination and inclination of the drill direction and the strike and dip are the attitude of the sampled unit (with dip to the right of strike). The MagIC database convention is to use the direction of the X coordinate of the specimen measurement system. To convert an AzDip formatted file (azdip_magic_example.dat) for samples taken from a location named “Northern Iceland” into the MagIC format and save the information in the MagIC samples file format, use the program azdip_magic.py:

$  azdip_magic.py -f azdip_magic_example.dat -ncn 1 -mcd FS-FD:SO-POM  
      -loc "Northern Iceland"  
 
Data saved in  samples.txt

Note that there are many options for relating sample names to site names; here we used the first convention, in which a single character at the end of the site name designates each sample (e.g., is132d is sample ‘d’ from site is132). We have also specified certain field sampling and orientation method codes (-mcd), here field sampling-field drilled (FS-FD) and sample orientation-Pomeroy (SO-POM). The location was “Northern Iceland”. See the help menu for more options.

Another way to do this is to use the orientation_magic.py program which allows much more information to be imported.

Note: azdip_magic.py uses MagIC 3 by default, but can also output MagIC 2 files with the -DM switch (azdip_magic.py -DM 2).

5.2.8 b_vdm.py

[Essentials Chapter 2] [b_vdm docs]

Use the program b_vdm.py to convert an estimated paleofield value of 33 μT obtained from a lava flow at 22N latitude to the equivalent Virtual Dipole Moment (VDM) in Am². Put the input information into a file called b_vdm_example.dat and read from it using standard input:

$  cat > b_vdm_example.dat  
33 22  
^D  
$  b_vdm.py < b_vdm_example.dat  
 7.159e+22
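
The conversion is the dipole field equation rearranged for the moment: VDM = (4πr³/μ0) B / sqrt(1 + 3 sin²λ). A minimal sketch, assuming numpy (the function name is ours, not PmagPy's):

import numpy as np

def b_to_vdm(B_muT, lat):
    r = 6.371e6               # Earth's radius (m)
    mu0 = 4e-7 * np.pi        # permeability of free space
    B = B_muT * 1e-6          # field in tesla
    # dipole: B = (mu0 m / 4 pi r^3) sqrt(1 + 3 sin^2(lat)); solve for m
    return (4. * np.pi * r ** 3 * B /
            (mu0 * np.sqrt(1. + 3. * np.sin(np.radians(lat)) ** 2)))

print('%8.3e' % b_to_vdm(33., 22.))  # 7.159e+22, as in the transcript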

5.2.9 basemap_magic.py

[MagIC Database] and high resolution instructions [basemap_magic docs]

NB: this program is no longer maintained - use plot_map_pts.py for greater functionality.

Python's matplotlib has a map plotting toolkit, and PmagPy includes a utility for making simple base maps for projects. Site location data imported, for example using orientation_magic.py, into an er_sites formatted text file can be plotted using basemap_magic.py. There are many options, so check the help message for more details. Note that if you want to use high resolution data files or the etopo20 meshgrid (-etp option), you must install the high resolution continental outlines; you can use install_etopo.py for that. As an example, use the program basemap_magic.py to make a simple base map with the site locations in a MagIC er_sites.txt formatted file named basemap_example.txt.

$  basemap_magic.py -f basemap_example.txt

which makes this plot:

PIC

Use the buttons at the bottom of the plot to resize or save the plot in the desired format.

5.2.10 biplot_magic.py

[Essentials Chapter 8] and [MagIC Database] [biplot_magic docs]

It is often useful to plot measurements from one experiment against another. For example, rock magnetic studies of sediments often plot the IRM against the ARM or magnetic susceptibility. All of these types of measurements can be imported into a single magic_measurements formatted file, using MagIC method codes and other clues (lab fields, etc.) to differentiate one from another. Data were obtained from a Paleogene core from 28S for a relative paleointensity study. IRM, ARM, magnetic susceptibility and remanence data were uploaded to the MagIC database. The magic_measurements formatted file for this study is saved in biplot_magic_example.txt.

Use the program biplot_magic.py to make a biplot of magnetic susceptibility against ARM. Note that the program makes use of the MagIC method codes which are LT-IRM for IRM, LT-AF-I for ARM (AF demagnetization, in a field), and LP-X for magnetic susceptibility.

First, to find out which data are available, run the program like this:

$  biplot_magic.py -f biplot_magic_example.txt  
   which responds with this:  
   [’LT-AF-Z’, ’LT-AF-I’, ’LT-IRM’, ’LP-X’]  
   

These are the method codes for AF demagnetization of NRM, ARM, IRM and susceptibility measurements respectively. So to make a plot of susceptibility against ARM, we would run the program again:

$  biplot_magic.py -f biplot_magic_example.txt -x LP-X -y LT-AF-I  
LP-X  selected for X axis  
LT-AF-I  selected for Y axis  
All  
measurement_magn_mass  being used for plotting Y  
measurement_chi_mass  being used for plotting X.  
 
S[a]ve plots, [q]uit,  Return for next plot

which makes the plot:

PIC

5.2.11 bootams.py

[Essentials Chapter 13] [bootams docs]

The program bootams.py calculates bootstrap statistics for anisotropy tensor data in the form of:

x11 x22 x33 x12 x23 x13

It does this by selecting para-data sets (pseudo-samples drawn with replacement) and calculating the Hext average eigenparameters. There is an optional parametric bootstrap whereby the σ for the data set as a whole is used to draw new para-data sets. The bootstrapped eigenparameters are assumed to be Kent distributed, and the program calculates Kent error ellipses for each set of eigenvectors. It also estimates the standard deviations of the bootstrapped eigenvalues.
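
As a sketch of the idea (not the PmagPy implementation, which also fits Kent ellipses to the bootstrapped eigenvectors), here is a bare-bones non-parametric bootstrap of the eigenvalues, assuming numpy and the .s element order given above:

import numpy as np

def to_matrix(s):
    # .s order: x11 x22 x33 x12 x23 x13
    x11, x22, x33, x12, x23, x13 = s
    return np.array([[x11, x12, x13],
                     [x12, x22, x23],
                     [x13, x23, x33]])

def boot_eigenvalues(data, nb=1000, seed=0):
    """Bootstrap the eigenvalues of mean tensors from an (N, 6) array."""
    rng = np.random.default_rng(seed)
    taus = []
    for _ in range(nb):
        # draw a para-data set: N rows selected with replacement
        sample = data[rng.integers(0, len(data), len(data))]
        evals = np.linalg.eigvalsh(to_matrix(sample.mean(axis=0)))
        taus.append(evals[::-1])          # order as tau_1 >= tau_2 >= tau_3
    taus = np.array(taus)
    return taus.mean(axis=0), taus.std(axis=0)

# usage, assuming a whitespace-delimited file of six-element rows:
# means, sigmas = boot_eigenvalues(np.loadtxt('bootams_example.dat'))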

Use this to calculate the bootstrapped error statistics for the data in the file bootams_example.dat:

$  bootams.py -par -f bootams_example.dat  
Doing bootstrap - be patient  
 
tau tau_sigma V_dec V_inc V_eta V_eta_dec V_eta_inc V_zeta V_zeta_dec V_zeta_inc  
 
0.33505 0.00021     5.3    14.7    11.7   267.1    28.0    27.0   122.6    56.8  
0.33334 0.00023   124.5    61.7    19.6   244.0    14.4    23.5   340.1    22.1  
0.33161 0.00026   268.8    23.6    10.4     8.6    21.8    20.4   137.0    57.3  

Note that every time bootams gets called, the output will be slightly different because this depends on calls to random number generators. If the answers are different by a lot, then the number of bootstrap calculations is too low. The number of bootstraps can be changed with the -nb option.

5.2.12 cart_dir.py

[Essentials Chapter 2] [cart_dir docs]

Use the program cart_dir.py to convert these cartesian coordinates to geomagnetic elements:




x1        x2        x3
0.3971   -0.1445    0.9063
-0.5722   0.0400   -0.8192



To use the interactive option:

$ cart_dir.py -i  
X: [cntrl-D  to quit] 0.3971  
Y: -0.1445  
Z: 0.9063  
  340.0    65.0  1.000e+00  
X: [cntrl-D  to quit] -.5722  
Y: 0.0400  
Z: -0.8192  
  176.0   -55.0  1.000e+00

To read from a file:

$ cart_dir.py -f cart_dir_example.dat  
  340.0    65.0  1.000e+00  
  176.0   -55.0  1.000e+00
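
The conversion itself is one line of trigonometry per element. A minimal numpy sketch of the Cartesian-to-direction case (the function name is ours):

import numpy as np

def cart_to_dir(x, y, z):
    R = np.sqrt(x * x + y * y + z * z)           # intensity
    dec = np.degrees(np.arctan2(y, x)) % 360.    # declination in 0-360
    inc = np.degrees(np.arcsin(z / R))           # inclination
    return dec, inc, R

print(cart_to_dir(0.3971, -0.1445, 0.9063))      # ~ (340.0, 65.0, 1.0)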

5.2.13 chartmaker.py

[Essentials Chapter 10] [chartmaker docs]

Paleointensity experiments are quite complex and it is easy to make a mistake in the laboratory. The SIO lab uses a simple chart that helps the experimenter keep track of in-field and zero field steps and makes sure that the field gets verified before each run. You can make a chart for an infield-zerofield, zerofield-infield (IZZI) experiment using the program chartmaker.py. Make such a chart using 50C steps up to 500C followed by 10C steps up to 600C.

$ chartmaker.py  
    Welcome to the thellier-thellier experiment automatic chart maker.  
    Please select desired step interval and upper bound for which it is valid.  
    e.g.,  
    50  
    500  
    10  
    600  
 
    a blank entry signals the end of data entry.  
    which would generate steps with 50 degree intervals up to 500,  
     followed by 10 degree intervals up to 600.  
 
    chart is stored in:  chart.txt  
 
 Enter desired treatment step interval: <return> to quit 50  
 Enter upper bound for this interval: 500  
 Enter desired treatment step interval: <return> to quit 10  
 Enter upper bound for this interval: 600  
 Enter desired treatment step interval: <return> to quit  
output stored in: chart.txt  
 
$ cat chart.txt  
file:_________________    field:___________uT  
 
 
                                 date | run# | zone I | zone II | zone III | start | sp | cool |  
________________________________________________________  
       0.0                |           |           |               |                |         |      |         |  
________________________________________________________  
   Z   100.0           |           |           |               |                |         |      |         |  
________________________________________________________  
   I   100.1           |           |           |               |                |         |      |         |  
________________________________________________________  
   T   100.3           |           |           |               |                |         |      |         |  
________________________________________________________  
   I   150.1           |           |           |               |                |         |      |         |  
.  
.  
.  

The chart allows you to fill in the file name in which the data were stored and the field value intended for the in-field steps. The designations ‘Z’, ‘I’, ‘T’, and ‘P’ are for zero-field, in-field, pTRM tail check and pTRM check steps respectively. There are fields for the date of the runs, the fields measured in different zones in the oven prior to the start of the experiment, and the start and stop times. The numbers, e.g., 100.1, are the treatment temperatures (100) followed by the code for each experiment type. These get entered in the treatment fields in the SIO formatted magnetometer files (see sio_magic.py).

5.2.14 chi_magic.py

[Essentials Chapter 8] & [MagIC Database] [chi_magic docs]

It is sometimes useful to measure susceptibility as a function of temperature, applied field and frequency. Here we use a data set that came from the Tiva Canyon Tuff sequence (see Carter-Stiglitz, 2006). Use the program chi_magic.py to plot the data in the magic_measurements formatted file: chi_magic_example.dat.

$ chi_magic.py -f chi_magic_example.dat  
IRM-Kappa-2352 1 out of  2  
IRM-OldBlue-1892 2 out of  2  
Skipping susceptibitily - AC field plot as a function of temperature  
enter s[a]ve to save files,  [return] to quit  
 

produced this plot:

PIC

You can see the dependence on temperature, frequency and applied field. These data support the suggestion that there is a strong superparamagnetic component in these specimens.

5.2.15 combine_magic.py

[MagIC Database] [combine_magic docs]

MagIC tables have many columns, only some of which are used in a particular instance, so combining files of the same type must be done carefully to ensure that the right data come under the right headings. The program combine_magic.py can be used to combine any number of MagIC files of a given type. For an example of how to use this program, see agm_magic.py.

5.2.16 common_mean.py

[Essentials Chapter 12] [common_mean docs]

Most paleomagnetists use some form of Fisher statistics to decide if two directions are statistically distinct or not (see Essentials Chapter 11 for a discussion of those techniques). But often directional data are not Fisher distributed, and the parametric approach will give misleading answers. In these cases, one can use a bootstrap approach, described in detail in [Essentials Chapter 12]. Here we use the program common_mean.py for a bootstrap test for a common mean, to check whether two declination, inclination data sets share a common mean at the 95% level of confidence. The data sets are common_mean_ex_file1.dat and common_mean_ex_file2.dat. But first, let’s look at the data in equal area projection using the program eqarea.py.

The session:

$ eqarea.py -f common_mean_ex_file1.dat -sav  
1  saved in  common_mean_ex_file1.dat.svg  
 
$  eqarea.py -f common_mean_ex_file2.dat -sav  
1  saved in  common_mean_ex_file2.dat.svg

generates two .svg formatted files that look like these:

PIC

Now let’s look at the common mean problem using common_mean.py.

$ common_mean.py -f common_mean_ex_file1.dat -f2 common_mean_ex_file2.dat  
Doing first set of directions, please be patient..  
Doing second set of directions, please be patient..  
S[a]ve plots, <Return> to quit

The three plots are:

PIC

These suggest that the two data sets share a common mean.

Now compare the data in common_mean_ex_file1.dat with the expected direction at the 5N latitude at which these data were collected (Dec=0, Inc=9.9).

To do this, follow this transcript:

$ common_mean.py -f common_mean_ex_file1.dat -dir 0 9.9  
Doing first set of directions, please be patient..  
S[a]ve plots, <Return> to quit

PIC

Apparently the data (cumulative distribution functions) are entirely consistent with the expected direction (the dashed lines are the Cartesian coordinates of that direction).

5.2.17 cont_rot.py

[Essentials Chapter 16 and Essentials Appendix A.3.5.] [cont_rot docs]

Use the program cont_rot.py to make an orthographic projection with latitude = -20 and longitude = 0 at the center of the African and South American continents reconstructed to 180 Ma using the Torsvik et al. (2008) poles of finite rotation. Do this by first holding Africa fixed. Move the output plot to fixed_africa.pdf. Then make the plot for Africa adjusted to the paleomagnetic reference frame. Make the continental outlines in black lines and set the resolution to 'low'.

$ cont_rot.py -con af:sam -prj ortho -eye -20 0 -sym k-  1 -age 180 -res l  
 S[a]ve to save plot, Return to quit:  a  
1  saved in  Cont_rot.pdf  
$ mv Cont_rot.pdf fixed_africa.pdf  
$ cont_rot.py -con af:sam -prj ortho -eye -20 0 -sym  k- 1 -age 180 \  
     -res l -sac  
 S[a]ve to save plot, Return to quit:  a  
1  saved in  Cont_rot.pdf

These commands generated the following plots (first on left, second on right):

PIC

5.2.18 convert2unix.py

[convert2unix docs]

This is a handy little script that turns Windows or Mac file formatted files (not Word or other proprietary formats) into Unix file format. It does the change in place, overwriting the original file.

5.2.19 convert_samples.py

[MagIC Database] [convert_samples docs]

If one of the MagIC related programs in PmagPy created a very lean looking er_samples.txt file (for example using azdip_magic.py) and you need to add more information (say, latitude, longitude, lithology, etc.), you can convert the er_samples.txt file into an orient.txt file format, edit it in, for example Excel, and then re-import it back into the er_samples.txt file format. Try this on the er_samples formatted file in the convert_samples directory, convert_samples_example.dat.

$ convert_samples.py -f convert_samples_example.dat  
Data saved in:  orient_Northern_Iceland.txt

5.2.20 core_depthplot.py

[Essentials Chapter 15] [core_depthplot docs]

Use the program core_depthplot.py to plot various measurement data versus sample depth. The data must be in the MagIC data formats. The program will plot whole core data, discrete sample at a bulk demagnetization step, data from vector demagnetization experiments, and so on. There are many options, so check the help menu before you begin.

We can try this out on some data from DSDP Hole 522, measured by Tauxe and Hartl (1997). These can be downloaded and unpacked (see download_magic.py for details), or you can try it out on the data files in the directory core_depthplot. You must specify a lab protocol (LP) for plotting. In this example, we will plot the alternating field (AF) data after the 15 mT step. The magnetizations will be plotted on a log scale and, as this is a record of the Oligocene, we will plot the Oligocene time scale using the calibration of Gradstein et al. (2004), commonly referred to as “GTS04”. We are only interested in the data between 50 and 150 meters (the -d option sets this) and we will suppress the declinations (-D).

$ core_depthplot.py -fsa samples.txt -LP AF 15 -log -d 50 150 -ts gts04 23 34 -D

Anaconda Python users who installed with pip must use:

$ core_depthplot_anaconda -fsa samples.txt -LP AF 15 -log -d 50 150 -ts gts04 23 34 -D

In either case, the program will produce the plot:

PIC

Note: to run core_depthplot.py with old MagIC 2.5 files, make sure to use the -DM switch:

$ core_depthplot.py -fsa er_samples.txt -LP AF 15 -log -d 50 150 -ts gts04 23 34 -D -DM 2

5.2.21 curie.py

[Essentials Chapter 6] [curie docs]

Use the program curie.py to interpret Curie temperature data in the example file curie_example.dat. Use a smoothing window of 10.

$ curie.py -f curie_example.dat  
second deriative maximum is at T=552

which generates these plots:

PIC

5.2.22 customize_criteria.py

[MagIC Database] [customize_criteria docs]

NB: The best place to customize your selection criteria is within Demag GUI or Thellier GUI. But if you want to do it on the command line, then here’s how.

The MagIC database allows documentation of which criteria were used in selecting data on a specimen, sample or site level and to easily apply those criteria (and re-apply them as they change) in preparing the different tables. These choices are stored in the pmag_criteria table in each MagIC project directory (see Pmag GUI documentation).

Certain PmagPy programs use the data file pmag_criteria.txt to select data (for example thellier_magic.py and specimens_results_magic.py). To customize these criteria for your own data sets, you can use the program customize_criteria.py. This program is also called by Pmag GUI under the Utilities menu. Try it out on pmag_criteria.txt. This is a “full vector” set of criteria, meaning that it has both direction and intensity flags set. Change the sample_alpha95 cutoff to 5 from whatever it is now set to. Save the output to a new file named new_criteria.txt.

$ customize_criteria.py -f pmag_criteria.txt -F new_criteria.txt  
Acceptance criteria read in from  pmag_criteria.txt  
 [0] Use no acceptance criteria?  
 [1] Use default criteria  
 [2] customize criteria  
 2  
Enter new threshold value.  
 Return to keep default.  
 Leave blank to not use as a criterion  
 
sample_alpha95 10  
new value: 5  
sample_int_n 2  
new value: 2  
sample_int_sigma 5e-6  
new value:  
sample_int_sigma_perc 15  
new value:  
.......  
 
Criteria saved in pmag_criteria.txt  
 
 Pmag Criteria stored in  pmag_criteria.txt  

Note that the default place for the PmagPy programs to look for criteria is pmag_criteria.txt, so you should probably rename the new file pmag_criteria.txt for it to take effect as your new default.

5.2.23 dayplot_magic.py

[Essentials Chapter 5] [dayplot_magic docs]

Use the program dayplot_magic.py to make Day (Day et al., 1977), or Squareness-Coercivity and Squareness-Coercivity of Remanence plots (e.g., Tauxe et al., 2002) from the rmag_hysteresis formatted data in dayplot_magic_example.dat.

The session:

  $ dayplot_magic.py -f dayplot_magic_example.dat  
 
 S[a]ve to save plots, return to quit:

gives the plots:

PIC

5.2.24 demag_gui.py

[Essentials Chapter 7] [demag_gui docs]

The program demag_gui.py is used by Pmag GUI for interpreting demagnetization experimental data. See Demag GUI for guidance.

5.2.25 di_eq.py

[Essentials Appendix B] [di_eq docs]

Paleomagnetic data are frequently plotted in equal area projection. PmagPy has several plotting programs which do this (e.g., eqarea.py), but occasionally it is handy to be able to convert the directions to X,Y coordinates directly, without plotting them at all. The program di_eq.py does this. Here is an example transcript of a session using the data file di_eq_example.dat:

$ di_eq.py -f di_eq_example.dat  
-0.239410 -0.893491  
0.436413 0.712161  
0.063844 0.760300  
0.321447 0.686216  
0.322720 0.670562  
0.407412 0.540654  
.  
.  
.

5.2.26 di_geo.py

[Essentials Chapter 9] and Changing coordinate systems [di_geo docs]

Use the program di_geo.py to convert D = 8.1, I = 45.2 from specimen coordinates to geographic adjusted coordinates. The orientation of the laboratory arrow on the specimen was: azimuth = 347; plunge = 27. di_geo.py works in the usual three ways (interactive data entry, command line file specification, or standard input). So for a quick answer for a single specimen, you can use the interactive mode:

$ di_geo.py -i  
Declination: <cntrl-D> to quit  8.1  
Inclination: 45.2  
Azimuth: 347  
Plunge: 27  
    5.3    71.6

which spits out our answer of Declination = 5.3 and inclination = 71.6.
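
For the curious, the rotation mirrors the logic of PmagPy's pmag.dogeo function: the direction is cast into Cartesian coordinates and multiplied by a matrix whose columns are the geographic direction cosines of the three specimen axes, built from the lab arrow azimuth and plunge. A sketch, assuming numpy:

import numpy as np

def dir2cart(dec, inc):
    dec, inc = np.radians(dec), np.radians(inc)
    return np.array([np.cos(inc) * np.cos(dec),
                     np.cos(inc) * np.sin(dec),
                     np.sin(inc)])

def to_geographic(dec, inc, az, pl):
    X = dir2cart(dec, inc)
    # columns: geographic direction cosines of the specimen x, y, z axes
    A = np.column_stack([dir2cart(az, pl),
                         dir2cart(az + 90., 0.),
                         dir2cart(az - 180., 90. - pl)])
    xg, yg, zg = A.dot(X)
    return np.degrees(np.arctan2(yg, xg)) % 360., np.degrees(np.arcsin(zg))

print(to_geographic(8.1, 45.2, 347., 27.))  # ~ (5.3, 71.6)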

For more data, it is handy to use the file entry options. There are a bunch of declination, inclination, azimuth and plunge data in the file di_geo_example.dat in the di_geo directory. First look at the data in specimen coordinates in equal area projection, using the program eqarea.py. Note that this program only pays attention to the first two columns so it will ignore the orientation information.

$ eqarea.py -f di_geo_example.dat

which should look like this:

PIC

The data are highly scattered and we hope that the geographic coordinate system looks better! To find out try:

$ di_geo.py -f di_geo_example.dat > di_geo.out; eqarea.py -f di_geo.out  

which looks like this:

PIC

These data are clearly much better grouped.

5.2.27 di_rot.py

[Essentials Chapter 11] [di_rot docs]

Generate a Fisher distributed set of data from a population with a mean direction of D = 0, I = 42 using the program fishrot.py. Calculate the mean direction of the data set using gofish.py. Now use the program di_rot.py to rotate the set of directions to the mean direction. Look at the data before and after rotation using eqarea.py.

$ fishrot.py -I 42 >fishrot.out  
$ gofish.py <fishrot.out  
    1.7    42.4    100    95.3720     21.4     3.1  
$ di_rot.py -f fishrot.out -F dirot.out -D 1.7 -I 42.4  
$ eqarea.py -f fishrot.out  
 $ eqarea.py -f dirot.out

which generates plots like these:

PIC

Note that every instance of fishrot.py draws a different sample from a Fisher distribution, so your plots and average values will differ in detail every time you try this (and that’s part of the fun of statistical simulation).

5.2.28 di_tilt.py

[Essentials Chapter 9] and Changing coordinate systems [di_tilt docs]

Use the program di_tilt.py to rotate a direction of Declination = 5.3 and Inclination = 71.6 to “stratigraphic” coordinates. The strike was 135 and the dip was 21. The convention in this program is to use the dip direction, which is to the “right” of this strike.

Here is a session with di_tilt.py using the interactive option:

$ di_tilt.py -i  
Declination: <cntl-D> to quit 5.3  
Inclination: 71.6  
Dip direction: 225  
Dip: 21  
  285.7    76.6  
Declination: <cntl-D> to quit ^D  
 Good-bye

Try the same thing on the data file saved as di_tilt_example.dat using the command line -f switch:

$ di_tilt.py -f di_tilt_example.dat

5.2.29 di_vgp.py

[Essentials Chapter 2] [di_vgp docs]

Use the program di_vgp.py to convert the following directions and site locations to virtual geomagnetic poles:





D      I      λs (N)    ϕs (E)
11     63     55        13
154    -58    45.5      -73




Here is a transcript of a typical session using the command line option for file name entry:

$ di_vgp.py -f di_vgp_example.dat  
  154.7    77.3  
    6.6   -69.6
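
The calculation uses the dipole formula to get the magnetic colatitude p from the inclination (tan I = 2/tan p) and then standard spherical trigonometry to locate the pole. A numpy sketch of the standard formulas (a simplified stand-in for the PmagPy code) that reproduces the transcript:

import numpy as np

def di_to_vgp(dec, inc, slat, slon):
    dec, inc, slat, slon = np.radians([dec, inc, slat, slon])
    p = np.arctan2(2., np.tan(inc))      # magnetic colatitude: tan I = 2/tan p
    plat = np.arcsin(np.sin(slat) * np.cos(p) +
                     np.cos(slat) * np.sin(p) * np.cos(dec))
    beta = np.arcsin(np.sin(p) * np.sin(dec) / np.cos(plat))
    if np.cos(p) >= np.sin(slat) * np.sin(plat):   # pick the longitude branch
        plon = slon + beta
    else:
        plon = slon + np.pi - beta
    return np.degrees(plon) % 360., np.degrees(plat)

print(di_to_vgp(11., 63., 55., 13.))      # ~ (154.7, 77.3)
print(di_to_vgp(154., -58., 45.5, -73.))  # ~ (6.6, -69.6)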

5.2.30 dipole_pinc.py

[Essentials Chapter 2] [dipole_pinc docs]

Calculate the expected inclination at a paleolatitude of 24S.

$ dipole_pinc.py -i  
Paleolat for converting to inclination: <cntl-D> to quit -24  
  -41.7  
Paleolat for converting to inclination: <cntl-D> to quit ^D  
 Good-bye  

5.2.31 dipole_plat.py

[Essentials Chapter 2] [dipole_plat docs]

Calculate the paleolatitude for an average inclination of 23.

$ dipole_plat.py -i  
Inclination for converting to paleolatitude: <cntl-D> to quit 23  
   12.0  
Inclination for converting to paleolatitude: <cntl-D> to quit ^D  
 Good-bye
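
Both of these programs are thin wrappers around the dipole formula tan I = 2 tan λ. A minimal sketch of the two conversions, assuming numpy (the function names are ours):

import numpy as np

def plat_to_inc(plat):
    """Dipole formula: expected inclination from (paleo)latitude."""
    return np.degrees(np.arctan(2. * np.tan(np.radians(plat))))

def inc_to_plat(inc):
    """The inverse: paleolatitude from an average inclination."""
    return np.degrees(np.arctan(0.5 * np.tan(np.radians(inc))))

print(round(plat_to_inc(-24.), 1))  # -41.7, as in dipole_pinc.py above
print(round(inc_to_plat(23.), 1))   # 12.0, as in dipole_plat.py above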

5.2.32 dir_cart.py

[Essentials Chapter 2] [dir_cart docs]

Use the program dir_cart.py to convert the following data from declination D, inclination I and intensity M to x1, x2, x3:




D      I      M (μAm²)
20     46     1.3
175    -24    4.2



You can enter the D, I, M data into a file and then run the program reading from standard input, typing what follows the prompt ($); the rest is the computer’s response:

$ cat > dir_cart_example.dat  
 20 46 1.3  
175 -24 4.2  
^D  
$ dir_cart.py <dir_cart_example.dat  
8.4859e-01 3.0886e-01 9.3514e-01  
-3.8223e+00 3.3441e-01 -1.7083e+00

Or you could use dir_cart.py interactively as in:

 $ dir_cart.py -i  
 
Declination: [cntrl-D  to quit]  
 Good-bye  
 
$ dir_cart.py -i  
Declination: [cntrl-D  to quit] 20  
Inclination: 46  
Intensity [return for unity]: 1.3  
8.4859e-01 3.0886e-01 9.3514e-01  
Declination: [cntrl-D  to quit] 175  
Inclination: -24  
Intensity [return for unity]: 4.2  
-3.8223e+00 3.3441e-01 -1.7083e+00  
Declination: [cntrl-D  to quit] ^D  
 Good-bye
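
The forward conversion is simply the direction cosines scaled by the intensity. A minimal numpy sketch (function name ours):

import numpy as np

def dir_to_cart(dec, inc, M=1.):
    dec, inc = np.radians(dec), np.radians(inc)
    return (M * np.cos(inc) * np.cos(dec),   # x1 (north)
            M * np.cos(inc) * np.sin(dec),   # x2 (east)
            M * np.sin(inc))                 # x3 (down)

print(dir_to_cart(20., 46., 1.3))   # ~ (0.84859, 0.30886, 0.93514)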

5.2.33 dmag_magic.py

[Essentials Chapter 9] and [MagIC Database] [dmag_magic docs]

Use dmag_magic.py to plot out the decay of all alternating field demagnetization experiments in the magic_measurements formatted file dmag_magic_example.dat. These data are from Lawrence et al. (2009). Repeat for all thermal measurements, but exclude the data acquired during paleointensity experiments (the -XLP PI option). Try this at the location level and then at the site level.

Here is a transcript of a session:

$  dmag_magic.py -f dmag_magic_example.dat -LT AF  
5921  records read from  dmag_magic_example.dat  
McMurdo plotting by:  er_location_name  
 S[a]ve to save plot, [q]uit,  Return to continue:  a  
1  saved in  McMurdo_LT-AF-Z.svg  
$ dmag_magic.py -f dmag_magic_example.dat -LT T -XLP PI  
5921  records read from  dmag_magic_example.dat  
McMurdo plotting by:  er_location_name  
 S[a]ve to save plot, [q]uit,  Return to continue:  a  
1  saved in  McMurdo_LT-T-Z.svg  
$ dmag_magic.py -f dmag_magic_example.dat -LT AF -obj sit  
5921  records read from  dmag_magic_example.dat  
mc20 plotting by:  er_site_name  
 S[a]ve to save plot, [q]uit,  Return to continue:  
mc200 plotting by:  er_site_name  
 S[a]ve to save plot, [q]uit,  Return to continue:  a  
1  saved in  mc200_LT-AF-Z.svg  

which produced these plots:

PIC

5.2.34 download_magic.py

[MagIC] [download_magic docs]

This program unpacks .txt files downloaded from the MagIC database into individual directories for each location, into which the individual files for each table (e.g., er_locations.txt, magic_measurements.txt, pmag_results.txt and so on) get placed. As an example, go to the MagIC database at http://earthref.org/MAGIC/search. Entering “Tauxe and 2004” into the Reference Text Search field will show you several references. Look for the one for Tauxe, L., Luskin, C., Selkin, P., Gans, P. and Calvert, A. (2004). Download the text file under the “Data” column and save it to your desktop. Make a folder called MagIC_download, move the downloaded .txt file into it, and then use the program download_magic.py to unpack the .txt file (zmab0083201tmp03.txt).

$ cd MagIC_download  
$ download_magic.py -f zmab0083201tmp03.txt  
working on:  ’er_locations’  
er_locations  data put in  ./er_locations.txt  
working on:  ’er_sites’  
er_sites  data put in  ./er_sites.txt  
working on:  ’er_samples’  
er_samples  data put in  ./er_samples.txt  
working on:  ’er_specimens’  
.  
.  
.  
location_1:  Snake River  
unpacking:  ./er_locations.txt  
1  read in  
1  stored in  ./Location_1/er_locations.txt  
unpacking:  ./er_sites.txt  
27  read in  
27  stored in  ./Location_1/er_sites.txt  
unpacking:  ./er_samples.txt  
271  read in  
271  stored in  ./Location_1/er_samples.txt  
unpacking:  ./er_specimens.txt  
.  
.  
.

You can change directories into each Location directory (in this case only one) and examine the data using the PmagPy GUI programs (e.g., pmag_gui.py).

5.2.35 eigs_s.py

[Essentials Chapter 13] [eigs_s docs]

Print out the eigenparameters in the file eigs_s_example.dat and then convert them to tensor data in the .s format (x11, x22, x33, x12, x23, x13).

This session uses the Unix utility cat to print the data. [You could use the MS-DOS command type in a Windows command line window.] Then it prints the tensor data to the screen.

$ cat eigs_s_example.dat  
0.33127  239.53   44.70 0.33351  126.62   21.47 0.33521  19.03   37.54  
0.33177  281.12    6.18 0.33218  169.79   73.43 0.33603   12.82   15.32  
...  
$ eigs_s.py -f eigs_s_example.dat  
0.33416328 0.33280227 0.33303446 -0.00016631 0.00123163 0.00135521  
0.33555713 0.33197427 0.33246869 0.00085685 0.00025266 0.00098151  
0.33585301 0.33140355 0.33274350 0.00132308 0.00117787 0.00000455  
...
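
The conversion is an eigen-decomposition run in reverse: the tensor is reassembled as s = Σ τi v̂i v̂iᵀ from the three eigenvalue/eigenvector triplets. A sketch, assuming numpy (helper names ours):

import numpy as np

def dir2cart(dec, inc):
    dec, inc = np.radians(dec), np.radians(inc)
    return np.array([np.cos(inc) * np.cos(dec),
                     np.cos(inc) * np.sin(dec),
                     np.sin(inc)])

def eigs_to_s(row):
    """(tau1, dec1, inc1, ..., tau3, dec3, inc3) -> .s tensor elements."""
    T = np.zeros((3, 3))
    for i in range(3):
        tau, dec, inc = row[3 * i:3 * i + 3]
        v = dir2cart(dec, inc)
        T += tau * np.outer(v, v)   # rebuild the tensor from its eigenparameters
    # .s order: x11 x22 x33 x12 x23 x13
    return T[0, 0], T[1, 1], T[2, 2], T[0, 1], T[1, 2], T[0, 2]

print(eigs_to_s([0.33127, 239.53, 44.70,
                 0.33351, 126.62, 21.47,
                 0.33521, 19.03, 37.54]))  # ~ first row of the output above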

5.2.36 eq_di.py

[Essentials Appendix B] [eq_di docs]

Data are frequently published as equal area projections and not listed in data tables. These data can be digitized as x,y data (assuming the outer rim is unity) and converted to approximate directions with the program eq_di.py. To use this program, install a graph digitizer (GraphClick from http://www.arizona-software.ch/graphclick/ works on Macs).

Digitize the data from the equal area projection saved in the file eqarea.png in the eq_di directory. You should only work on one hemisphere at a time (upper or lower) and save each hemisphere in its own file. Then you can convert the X,Y data to approximate dec and inc data - the quality of the data depends on your care in digitizing and the quality of the figure that you are digitizing.

Try out eq_di.py on your data file, or use eq_di_example.dat, which contains the digitized data from the lower hemisphere; check your work with eqarea.py. You should retrieve the lower hemisphere points from the eqarea.py example.

$ eq_di.py -f eq_di_example.dat >tmp  
$ eqarea.py -f tmp

NB: To indicate that your data are UPPER hemisphere (negative inclinations), use the -up switch.

You can verify the process by comparing the plot generated for these data using eqarea.py with the original png file.

5.2.37 eqarea.py

[Essentials Chapter 2] and [Essentials Appendix B.1] [eqarea docs]

Use the program fishrot.py to generate a Fisher distributed set of data drawn from a distribution with a mean declination of 42 and a mean inclination of 60. Save this to a file called fishrot.out. Use eqarea.py to plot an equal area projection of the data.

$  fishrot.py -D 42 -I 60 > fishrot.out  
$ eqarea.py -f fishrot.out

which produces the plot:

PIC

5.2.38 eqarea_ell.py

[Essentials Chapters 11] and [Essentials Chapter 12] [eqarea_ell docs]

Use the program tk03.py to generate a set of simulated data for a latitude of 42N, including reversals. Then use the program eqarea_ell.py to plot an equal area projection of the directions in tk03.out along with confidence ellipses. Here is an example for Bingham ellipses.

$ tk03.py -lat 42 -rev >tk03.out  
$ eqarea_ell.py -f tk03.out -ell B  
     Zdec    21.0  
     Edec   105.7  
     Eta     4.5  
     n        20  
     Einc    10.0  
     Zinc   -27.6  
     Zeta     4.5  
     dec   357.8  
     inc    60.3  
 S[a]ve to save plot, [q]uit, Return to continue:  a  
1  saved in  tk03.out_eq.svg

which produces a plot like this:

PIC

Other ellipses are Kent, Fisher and bootstrapped ellipses. Check the documentation for details.

5.2.39 eqarea_magic.py

[MagIC Database] [eqarea_magic docs]

Follow the instructions for downloading and unpacking a data file from the MagIC database or use the file in the download_magic directory already downloaded from the MagIC website. Plot the directional data for the study from the pmag_results.txt file along with the bootstrapped confidence ellipse.

$ eqarea_magic.py -obj loc -crd g -f pmag_results.txt -ell Be  
24  records read from  ./pmag_results.txt  
All  
sr01 GM-ARAR-AP:LP-DC5:LP-DIR-AF:LP-DIR-T:LP-PI-ALT-PTRM:LP-PI-TRM-ZI   330.1    64.9  
sr03 GM-ARAR-AP:LP-DC5:LP-DIR-AF:LP-DIR-T   151.8   -57.5  
sr04 GM-ARAR-AP:LP-DC5:LP-DIR-AF:LP-DIR-T    16.5    54.6  
.  
.  
.  
sr31 GM-ARAR-AP:LP-DC5:LP-DIR-AF:LP-DIR-T    23.2    54.0  
sr34 GM-ARAR-AP:LP-DC5:LP-DIR-AF:LP-DIR-T:LP-PI-ALT-PTRM:LP-PI-TRM-ZI   205.8   -49.4  
sr36 GM-ARAR-AP:LP-DC5:LP-DIR-AF:LP-DIR-T   197.6   -65.5  
sr39 GM-ARAR-AP:LP-DC5:LP-DIR-AF:LP-DIR-T:LP-PI-ALT-PTRM:LP-PI-TRM-ZI   188.1   -47.2  
sr40 GM-ARAR-AP:LP-DC5:LP-DIR-AF:LP-DIR-T   192.9   -60.7  
Normal Pole GM-ARAR-AP:LP-DC5:LP-DIR-AF:LP-DIR-T     6.1    59.6  
Reverse pole GM-ARAR-AP:LP-DC5:LP-DIR-AF:LP-DIR-T   185.1   -58.8  
Grand Mean pole GM-ARAR-AP:LP-DC5:LP-DIR-AF:LP-DIR-T     5.7    59.3  
mode  1  
     Zdec   111.1  
     Edec   206.0  
     Eta     7.8  
     n        1  
     Einc    28.9  
     Zinc     8.7  
     Zeta     3.3  
     dec     6.0  
     inc    59.6  
mode  2  
     Zdec   248.0  
     Edec   150.1  
     Eta     4.3  
     n        1  
     Einc    26.4  
     Zinc    15.4  
     Zeta     8.0  
     dec   185.1  
     inc   -58.8  
 S[a]ve to save plot, [q]uit, Return to continue:  a  
1  saved in  LO:_Snake River_SI:__SA:__SP:__CO:_gu_TY:_eqarea_.svg

makes this plot:

PIC

The information printed to the window is the pmag_result_name in the data table, the method codes (here the geochronology method, the type of demagnetization code and the types of demagnetization experiments), and each site mean declination and inclination. The information following “mode 1” is the bootstrapped ellipse parameters.

Some data are study averages and some are individual sites.

Use magic_select.py to select only the individual site data, then plot them with eqarea_magic.py:

 $ magic_select.py -f pmag_results.txt -key data_type i ’T’ -F site_results.txt  
$ eqarea_magic.py -f site_results.txt -ell Bv  
21  records read from  ./site_results.txt  
All  
sr01 GM-ARAR-AP:LP-DC5:LP-DIR-AF:LP-DIR-T:LP-PI-ALT-PTRM:LP-PI-TRM-ZI   330.1    64.9  
sr03 GM-ARAR-AP:LP-DC5:LP-DIR-AF:LP-DIR-T   151.8   -57.5  
sr04 GM-ARAR-AP:LP-DC5:LP-DIR-AF:LP-DIR-T    16.5    54.6  
sr09 GM-ARAR-AP:LP-DC5:LP-DIR-AF:LP-DIR-T    14.6    77.9  
.  
.  
.  
sr36 GM-ARAR-AP:LP-DC5:LP-DIR-AF:LP-DIR-T   197.6   -65.5  
sr39 GM-ARAR-AP:LP-DC5:LP-DIR-AF:LP-DIR-T:LP-PI-ALT-PTRM:LP-PI-TRM-ZI   188.1   -47.2  
sr40 GM-ARAR-AP:LP-DC5:LP-DIR-AF:LP-DIR-T   192.9   -60.7  
 S[a]ve to save plot, [q]uit, Return to continue:  a  
1  saved in  LO:_Snake River_SI:__SA:__SP:__CO:_g_TY:_eqarea_.svg  
2  saved in  LO:_Snake River_SI:__SA:__SP:__CO:_g_TY:_bdirs_.svg

which produces plots like:

PIC

5.2.40 find_EI.py

[Essentials Chapter 14][find_EI docs]

A data file was prepared using tk03.py to simulate directions at a latitude of 42. Use the program dipole_pinc.py to find what the expected inclination at this latitude is!

$ dipole_pinc.py -i  
Paleolat for converting to inclination: <cntl-D> to quit 42  
   61.0  
^D  
Paleolat for converting to inclination: <cntl-D> to quit ^D  
 Good-bye

The data were “flattened” with the program squish.py, which uses the formula tan(Io) = f tan(If) to simulate inclination error, and saved in the data file find_EI_example.dat in the find_EI directory.

$ tk03.py -lat 42 > tk03.out  
$ squish.py -flt .4 -f tk03.out -F find_EI_example.dat
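
For reference, the flattening and its inverse are one-liners. A sketch assuming numpy (squish.py itself has more options):

import numpy as np

def squish(inc, f):
    """Flatten an inclination: tan(Io) = f * tan(If)."""
    return np.degrees(np.arctan(f * np.tan(np.radians(inc))))

def unsquish(inc, f):
    """Invert the flattening for a trial factor f."""
    return np.degrees(np.arctan(np.tan(np.radians(inc)) / f))

print(round(squish(61., 0.4), 1))  # 35.8: the dipole inclination for 42N, flattened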

Use the program find_EI.py to find the optimum flattening factor f which, when used to “unflatten” the data yields inclination and elongation (ratio of major and minor eigenvalues of orientation matrix, see the section on eigenvalues in the textbook) most consistent with the TK03.GAD paleosecular variation model of Tauxe and Kent (2004).

$  find_EI.py -f find_EI_example.dat  
Bootstrapping.... be patient  
25  out of  1000  
50  out of  1000  
75  out of  1000  
100  out of  1000  
.  
.  
.  
Io Inc  I_lower, I_upper, Elon, E_lower, E_upper  
   38.9  =>     58.8 _    48.4 ^    67.0:  1.4679 _ 1.2912 ^ 1.7337  
S[a]ve plots - <return> to quit:  a  
2  saved in  findEI_ei.svg  
3  saved in  findEI_cdf.svg  
1  saved in  findEI_eq.svg  
4  saved in  findEI_v2.svg  
 

which produces these plots:

PIC

In this example, the original expected inclination at paleolatitude of 42 (61) is recovered within the 95% confidence bounds.

Note that the correction implemented by find_EI.py is, by default, meant to be a “study” level correction in which the distribution of data is determined primarily by secular variation (the TK03.GAD model). Alternatively, if you wish to correct “flattened” data to a Fisher distribution (a “site level” correction; see section 4.1 of Tauxe and Kent, 2004) you can specify this with the -sc flag like so:

$  find_EI.py -f find_EI_example.dat -sc

Note that many directions (100) are needed for this correction-by-site to be reliable.

5.2.41 fisher.py

[Essentials Chapter 11] [fisher docs]

Draw a set of 10 directions from a Fisher distribution with a κ of 30 using fisher.py:

$ fisher.py -k 30 -n 10  
  233.7    81.4  
  357.0    76.4  
  272.5    62.8  
  137.0    70.0  
   83.7    71.2  
...  
  

You could plot the output with, e.g., eqarea.py.

Note that every instance of this program draws a different distribution, so yours will look different in detail.
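
One standard recipe for drawing a Fisher deviate inverts the Fisher cumulative distribution for the angle away from the mean and draws the declination uniformly; this mirrors the fshdev routine in PmagPy's pmag module. A sketch, assuming numpy:

import numpy as np

def fshdev(kappa, rng=None):
    """Draw one (dec, inc) from a Fisher distribution about the vertical."""
    rng = rng or np.random.default_rng()
    R1, R2 = rng.random(), rng.random()
    L = np.exp(-2. * kappa)
    # invert the Fisher CDF for the angle theta away from the mean
    fac = np.sqrt(-np.log(R1 * (1. - L) + L) / (2. * kappa))
    inc = 90. - np.degrees(2. * np.arcsin(fac))
    dec = np.degrees(2. * np.pi * R2)
    return dec, inc

for _ in range(3):
    print('%7.1f %7.1f' % fshdev(30.))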

5.2.42 fishqq.py

[Essentials Chapter 11] [fishqq docs]

Test whether a set of 100 data points generated with fisher.py are in fact Fisher distributed by using a Quantile-Quantile plot:

$ fisher.py -k 30 -n 100 >fishqq_example.txt  
$ fishqq.py -f fishqq_example.txt

produces these plots:

PIC

which support a Fisher distribution for these data.

5.2.43 fishrot.py

[Essentials Chapter 11] [fishrot docs]

Draw a set of 5 directions drawn from a Fisher distribution with a true mean declination of 33, a true mean inclination of 41, and a κ of 50:

$ fishrot.py -n 5 -D 33 -I 41 -k 50  
   35.8    32.8  
   36.2    30.2  
   37.5    41.8  
   28.6    23.9  
   26.1    32.8  
   

5.2.44 foldtest.py

[Essentials Chapter 12] [foldtest docs]

Use foldtest.py to perform the Tauxe and Watson (1994) foldtest on the data in foldtest_example.dat.

$ foldtest.py -f foldtest_example.dat  
doing  1000  iterations...please be patient.....  
0  
50  
100  
.  
.  
.  
 
82 - 117 Percent Unfolding  
range of all bootstrap samples:  70  -  137  
S[a]ve all figures, <Return> to quit   a  
3  saved in  fold_ta.svg  
2  saved in  fold_st.svg  
1  saved in  fold_ge.svg  
 

which gives the plots:

PIC

Apparently these directions were acquired prior to folding, because the 95% confidence bounds on the degree of untilting required to maximize the concentration of the data (the maximum in the principal eigenvalue τ1 of the orientation matrix; see the section on eigenvalues in the textbook) include 100%.

5.2.45 foldtest_magic.py

[Essentials Chapter 12] and [MagIC Database] [foldtest_magic docs]

This program performs the same test as foldtest.py. The only difference is that it reads MagIC formatted input files and allows the application of selection criteria as specified in the pmag_criteria.txt formatted file.

To begin, download the file from this reference: Halls et al. (1974; doi: 10.1139/e74-113) at http://dx.doi.org/10.7288/V4/MAGIC/13267 and unpack it with download_magic.py. Then run foldtest_magic.py for the fold test.

$ download_magic.py -f magic_contribution_11087.txt  
$ foldtest_magic.py -f pmag_sites.txt -fsi er_sites.txt  
doing  1000  iterations...please be patient.....  
0  
50  
....  
900  
950  
96 - 149 Percent Unfolding  
S[a]ve all figures, <Return> to quit  
 a  
3  saved in  foldtest_ta.svg  
2  saved in  foldtest_st.svg  
1  saved in  foldtest_ge.svg  

which gives the plots:

PIC

5.2.46 gaussian.py

[Essentials Chapter 11] [gaussian docs]

Use gaussian.py to generate a set of 100 normally distributed data points drawn from a population with a mean of 10.0 and standard deviation of 3. Save the data to a file named gauss.out.

$ gaussian.py -s 3 -n 100 -m 10. -F gauss.out

You can check the sample mean and standard deviation with stats.py or make a histogram of the data with histplot.py.

5.2.47 gobing.py

[Essentials Chapter 12] [gobing docs]

Use the dataset generated in the eqarea_ell.py example. Calculate Bingham parameters using gobing.py instead of within the plotting program:

$ gobing.py -f tk03.out  
  357.8    60.3     4.5   105.7    10.0     4.5    21.0   -27.6 20

which according to the help message from gobing.py is:

mean dec, mean inc, Eta, Deta, Ieta, Zeta, Zdec, Zinc, N

5.2.48 gofish.py

[Essentials Chapter 11] [gofish docs]

Draw a set of 10 directions from a Fisher distribution with a true mean declination of 15, a true mean inclination of 41, and a κ of 50 and save it to a file, then use gofish.py to calculate the Fisher parameters:

$  fishrot.py -n 10 -D 15 -I 41 -k 50 >fishrot.out  
$ gofish.py -f  fishrot.out  
   10.8    39.6    10     9.8484     59.4     6.3    10.5

which according to the help message from gofish.py -h is: mean dec, mean inc, N, R, k, a95, csd. Your results will vary because every instance of fishrot.py draws a different sample from the Fisher distribution.
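These statistics are easy to reproduce directly; here is a minimal numpy sketch written from the standard Fisher (1953) formulas (not taken from the PmagPy source):

import numpy as np

def fisher_stats(decs, incs):
    d, i = np.radians(decs), np.radians(incs)
    # unit vectors: x north, y east, z down
    xyz = np.column_stack([np.cos(i) * np.cos(d), np.cos(i) * np.sin(d), np.sin(i)])
    n = float(len(xyz))
    resultant = xyz.sum(axis=0)
    R = np.sqrt((resultant ** 2).sum())       # resultant vector length
    x, y, z = resultant / R
    dec = np.degrees(np.arctan2(y, x)) % 360.0
    inc = np.degrees(np.arcsin(z))
    k = (n - 1.0) / (n - R)                   # precision parameter
    a95 = np.degrees(np.arccos(1.0 - ((n - R) / R) * (20.0 ** (1.0 / (n - 1.0)) - 1.0)))
    csd = 81.0 / np.sqrt(k)                   # circular standard deviation
    return dec, inc, n, R, k, a95, csd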

5.2.49 gokent.py

[Essentials Chapter 12] [gokent docs]

Draw a set of 20 data points from a TK03.GAD distribution predicted for a latitude of 42N (see Chapter 14), without reversals. Calculate Kent parameters using gokent.py:

$ tk03.py -n 20 -lat 42 >tk03.out; gokent.py -f  tk03.out  
   359.2    55.0     9.3   147.7    30.8     7.8   246.8    14.9 20

which according to the help message from gokent.py is: mean dec, mean inc, Eta, Deta, Ieta, Zeta, Zdec, Zinc, N

5.2.50 goprinc.py

[Essentials Chapter 12] [goprinc docs]

Draw a set of 20 data points from a TK03.GAD distribution predicted for a latitude of 42N (see Chapter 14), including reversals. Calculate the eigenparameters of the orientation matrix (the principal components) using goprinc.py:

$ tk03.py -n 20 -lat 42 -rev >tk03.out; goprinc.py -f tk03.out  
0.93863    11.0    58.6 0.04258   226.4    26.5 0.01879   128.3    15.6 20

which according to the help message from goprinc.py is: τ1, V1D, V1I, τ2, V2D, V2I, τ3, V3D, V3I, N.
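The eigenparameters are those of the orientation matrix T = (1/N) Σ xi xi^T of the unit vectors; a minimal numpy sketch (note that the sign of each eigenvector is arbitrary):

import numpy as np

def eigenparams(decs, incs):
    d, i = np.radians(decs), np.radians(incs)
    xyz = np.column_stack([np.cos(i) * np.cos(d), np.cos(i) * np.sin(d), np.sin(i)])
    T = np.dot(xyz.T, xyz) / float(len(xyz))   # orientation matrix
    tau, V = np.linalg.eigh(T)                 # eigenvalues in ascending order
    for t, v in zip(tau[::-1], V.T[::-1]):     # report largest first
        dec = np.degrees(np.arctan2(v[1], v[0])) % 360.0
        inc = np.degrees(np.arcsin(v[2]))
        print('%8.5f %7.1f %7.1f' % (t, dec, inc))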

5.2.51 grab_magic_key.py

[MagIC Database] [grab_magic_key docs]

grab_magic_key.py is a utility that will print out any column (-key option) from any [MagIC Database] formatted file. For example, we could print out all the site latitudes from the er_sites.txt file downloaded in the download_magic.py example:

$ grab_magic_key.py -f er_sites.txt -key site_lat  
42.60264  
42.60264  
42.60352  
42.60104  
...

You could save the data in a file with the output redirect function (>) and plot them with, say, plot_cdf.py.

$ grab_magic_key.py -f er_sites.txt -key site_lat > lats  
$ plot_cdf.py -f lats

which produces the fascinating (NOT!) plot:

PIC

5.2.52 histplot.py

[histplot docs]

Make a histogram of the data generated with the gaussian.py program.

$ histplot.py -f gauss.out

which makes a plot similar to:

PIC

5.2.53 hysteresis_magic.py

[Essentials Chapter 5], [Essentials Chapter 7], [Essentials Appendix C.1] & [MagIC Database] [hysteresis_magic docs]

Plot the data in hysteresis_magic_example.dat (from Ben-Yosef et al., 2008), which were imported into MagIC using agm_magic.py. Use the program hysteresis_magic.py to plot the data.

$ hysteresis_magic.py -f hysteresis_magic_example.dat  
 
IS06a-1 1 out of  8  
S[a]ve plots, [s]pecimen name, [q]uit, <return> to continue  
 a  
1  saved in  _IS06a-1_hyst.svg  
3  saved in  _IS06a-1_DdeltaM.svg  
2  saved in  _IS06a-1_deltaM.svg  
5  saved in  _IS06a-1_irm.svg

which makes the plots:

PIC

5.2.54 igrf.py

[Essentials Chapter 2] [igrf docs]

Use the program igrf.py to estimate the field in mid-1995 (decimal year 1995.5) in Amsterdam, The Netherlands (52.5 N, 5 E).

$ igrf.py -i  
Decimal year: <cntrl-D to quit> 1995.5  
Elevation in km [0]  
Latitude (positive north) 52.5  
Longitude (positive east) 5  
  357.9    67.4    48534  
 
Decimal year: <cntrl-D to quit> ^D  
 Good-bye

Now read from the file igrf_example.dat, which requests field information for a location near San Diego for the past 3000 years. For dates from 1900 to the present, the program evaluates the field vector using the IGRF/DGRF coefficients of the IGRF-11 model available from: http://www.ngdc.noaa.gov/IAGA/vmod/igrf.html. For the period from 1000 BCE to 1800 CE, the program uses the CALS3k.4b model of Korte and Constable (2011), available at: http://earthref.org/ERDA/1142/. Prior to that and back to 8000 BCE, the program uses the CALS10k.1b coefficients of Korte et al. (2011), available at: http://earthref.org/ERDA/1403/

$ igrf.py -f igrf_example.dat -F igrf.out

You can also make a plot of secular variation of field elements by specifying the age range and location on the command line. Try this for the location of San Diego for the age range -3000 to 1950 in increments of 50 years:

$ igrf.py -ages -3000 1950 50 -loc 33 -117 -mod pfm9k -plt  
     S[a]ve to save figure, <Return>  to quit

and get this plot:

PIC

5.2.55 install_etopo.py

Some mapping utilities use the Etopo package for topography and these data sets do not come standard with the Python installation. To install these properly, simply type:

$ install_etopo.py

5.2.56 incfish.py

[Essentials Chapter 11] [incfish docs]

Use the program incfish.py to calculate the average inclination for inclination-only data simulated by fishrot.py for an average inclination of 60. If you save the declination, inclination pairs, you can compare the incfish.py answer with the Fisher mean. The datafile incfish_example_di.dat has the declination, inclination pairs and incfish_example_inc.dat has just the inclinations.

$ fishrot.py -I 60 -k 10 > incfish_example_di.dat  
$ awk '{print $2}' incfish_example_di.dat > incfish_example_inc.dat  
$ incfish.py -f incfish_example_inc.dat  
   57.1    61.0  100     92.9    13.9     1.0  
$ gofish.py -f incfish_example_di.dat  
    1.8    62.9    100    91.2727     11.3     4.4    24.0

The output for incfish.py is: [gaussian] mean inc, Fisher inc, N,R,k,α95. You can see that the incfish.py result is much closer to the Fisherian result than the gaussian mean is.

5.2.57 irmaq_magic.py

MagIC [irmaq_magic docs]

Someone (Saiko Sugisaki) measured a number of samples from IODP Expedition 318 Hole U1359A for IRM acquisition curves. She used the ASC impulse magnetizer’s coils # 2 and 3 and saved the data in the SIO format. Convert these to the MagIC measurements format, combine them in a single magic_measurements.txt file with combine_magic.py and plot the data with irmaq_magic.py.

With MagIC data model 2.5:

$ sio_magic2.py -f U1359A_IRM_coil2.txt -LP I -V 2 -F U1359A_IRM_coil2.magic \  
   -loc U1359A -spc 0 -ncn 5  
$ sio_magic2.py -f U1359A_IRM_coil3.txt -LP I -V 3 -F U1359A_IRM_coil3.magic \  
    -loc U1359A -spc 0 -ncn 5  
$ combine_magic.py -F magic_measurements.txt -f U1359A_IRM_coil2.magic \  
    U1359A_IRM_coil3.magic -DM 2  
$ irmaq_magic.py -DM 2  
559  records read from  magic_measurements.txt  
U1359A  
 S[a]ve to save plot, [q]uit,  Return to continue:  a  
1  saved in  U1359A_LP-IRM.svg

which produces the plot:

PIC

To run irmaq_magic.py with MagIC data model 3:

$ sio_magic.py -f U1359A_IRM_coil2.txt -LP I -V 2 \  
    -F U1359A_IRM_coil2.magic -loc U1359A -spc 0 -ncn 5 \  
    -Fsp coil2_specimens.txt -Fsa coil2_samples.txt -Fsi coil2_sites.txt -Flo coil2_locations.txt  
 
$ sio_magic.py -f U1359A_IRM_coil3.txt -LP I -V 3 \  
    -F U1359A_IRM_coil3.magic -loc U1359A -spc 0 -ncn 5 \  
    -Fsp coil3_specimens.txt -Fsa coil3_samples.txt -Fsi coil3_sites.txt -Flo coil3_locations.txt  
 
$ combine_magic.py -F measurements.txt \  
    -f U1359A_IRM_coil2.magic \  
    U1359A_IRM_coil3.magic -dm 3  
 
$ combine_magic.py -F specimens.txt \  
    -f coil2_specimens.txt coil3_specimens.txt  
 
$ combine_magic.py -F samples.txt \  
    -f coil2_samples.txt coil3_samples.txt  
 
$ combine_magic.py -F sites.txt \  
    -f coil2_sites.txt coil3_sites.txt  
 
$ combine_magic.py -F locations.txt \  
    -f coil2_locations.txt coil3_locations.txt  
 
$ irmaq_magic.py -fmt png -sav -DM 3 -obj loc  

The plotting results will be the same with MagIC 2 or MagIC 3!

5.2.58 k15_magic.py

[Essentials Chapter 13]; [MagIC Database] [k15_magic docs]

Someone took a set of samples from a dike margin in the Troodos Ophiolite and measured their anisotropy of magnetic susceptibility on a Kappabridge KLY 2.0 instrument in the SIO laboratory:

tr245f     80.00   -46.0 204 25  
   995.     999.     993.     995.    1000.  
  1004.     999.    1001.    1004.     999.  
   998.     997.    1002.     998.     997.  
tr245g     52.00   -23.0 204 25  
  1076.    1066.    1072.    1077.    1067.  
  1076.    1068.    1072.    1076.    1067.  
  1068.    1075.    1071.    1068.    1076.  
tr245h    123.00   -29.0 204 25  
  1219.    1231.    1218.    1220.    1232.  
  1230.    1231.    1234.    1230.    1231.  
  1221.    1221.    1225.    1221.    1221.  
tr245i1    68.00   -26.0 204 25  
  1031.    1027.    1025.    1032.    1028.  
  1035.    1031.    1035.    1035.    1032.  
  1030.    1027.    1032.    1030.    1027.  
  

The first line of each set of four has the specimen name, azimuth, plunge, and bedding strike and dip; the next three lines are sets of five measurements in the 15 positions recommended by Jelinek (1977):

PIC

The 15 measurements for each specimen, along with orientation information and the specimen name were saved in the file k15_example.dat. Convert these to the MagIC format using the program k15_magic.py:

$ k15_magic.py -spc 0 -f k15_example.dat -loc "Troodos Ophiolite"  
Data saved to:  ./er_samples.txt ./rmag_anisotropy.txt ./rmag_results.txt ./k15_measurements.txt  
 

You can plot the output of this example (default file rmag_anisotropy.txt) using the program aniso_magic.py.

5.2.59 k15_s.py

[Essentials Chapter 13]; [MagIC Database] [k15_s docs]

Use k15_s.py to calculate the best-fit tensor elements and residual error for the data in the file k15_example.dat (same file as for k15_magic.py). These are: the specimen name, azimuth and plunge, and the strike and dip, followed by the 15 measurements made using the Jelinek (1977) scheme shown in the k15_magic.py example. Calculate the .s data in specimen, geographic and tilt adjusted coordinates:

$ k15_s.py -f k15_example.dat  
0.33146986 0.33413991 0.33439023 0.00075095 -0.00083439 -0.00016688 0.00008618  
0.33335925 0.33335925 0.33328149 -0.00155521 -0.00132193 0.00116641 0.00017193  
0.33097634 0.33573565 0.33328801 0.00163177 0.00013598 0.00000000 0.00018131  
...  
$ k15_s.py -f k15_example.dat -crd g  
0.33412680 0.33282733 0.33304587 -0.00015289 0.00124840 0.00135721 0.00008618  
0.33556300 0.33198264 0.33245432 0.00087259 0.00024141 0.00096166 0.00017193  
0.33584908 0.33140627 0.33274469 0.00131844 0.00118816 0.00002987 0.00018131  
...  
$ k15_s.py -f k15_example.dat -crd t  
0.33455712 0.33192658 0.33351630 -0.00043563 0.00092770 0.00105006 0.00008618  
0.33585501 0.33191562 0.33222935 0.00055960 -0.00005314 0.00064730 0.00017193  
0.33586666 0.33084926 0.33328408 0.00142269 0.00013235 0.00009201 0.00018131  
...

5.2.60 kly4s_magic.py

[Essentials Chapter 13]; [MagIC Database] [KLY4S_magic docs]

The program AMSSpin available for downloading from http://earthref.org/ERDA/940/ generates data for the Kappabridge KLY4S spinning magnetic susceptibility instrument as described by Gee et al. (2008).

The files have the format:

318-U1359B-011H-1-W-75  0.3415264 0.3403534 0.3181202 -0.0006608 0.0000948 0.0023186  2.284E-4 1.26728E-4 1.31105E-8 -21.81   300.0   10.00 -5.13E-6 11/04/11 12:54 Saiko  
318-U1359B-011H-2-W-76  0.3369955 0.3361130 0.3268916 0.0000845 -0.0005236 0.0003290  3.500E-4 7.85130E-5 3.16381E-8 -21.81   300.0   10.00 -4.11E-6 11/04/11 12:57 Saiko  
318-U1359B-011H-3-W-78  0.3373485 0.3364144 0.3262371 -0.0002281 0.0005881 -0.0011413  3.523E-4 1.04700E-4 8.77716E-9 -21.81   300.0   10.00 -4.11E-6 11/04/11 13:00 Saiko  
318-U1359B-011H-4-W-96  0.3343428 0.3323376 0.3333195 0.0007210 -0.0005503 0.0011256  
 

where the columns are:

Specimen S_1 S_2 S_3 S_4 S_5 S_6 χb(μSI) date time user

Output files are in the format of the file KLY4S_magic_example.dat (found in the Measurement_Import/KLY4S_magic folder). One option is for orientation information to be output as an azdip formatted file (see azdip_magic.py). Try it out on some data from IODP Expedition 318 (Hole U1359B) published by Tauxe et al. (2012). Note that the sample naming convention is #5 (-ncn 5) and the sample name is the same as the site (-spc 0). Use kly4s_magic.py as follows:

$ kly4s_magic.py -f KLY4S_magic_example.dat -spc 0 -ncn 5

This command will create the files needed by the MagIC database, and the data can be plotted using aniso_magic.py. If you were to import the sample files from the LIMS database for these samples, you could plot them versus depth, or as equal area projections, using ani_depthplot.py and aniso_magic.py, respectively.

5.2.61 lnp_magic.py

[Essentials Chapter 11], [Essentials Appendix C2.2] & [MagIC Database] [lnp_magic docs]

This program will take pmag_specimen formatted MagIC files (for example, generated by zeq_magic.py) and plot data by site, combining best-fit lines and best-fit planes using the method described in Essentials Appendix C2.2. Try this out on the data from the San Francisco Volcanics, notorious for lightning strikes, published by Tauxe et al., 2003. These can be downloaded from the MagIC database from http://earthref.org/MAGIC/m000629dt20061213090720/ and unpacked with download_magic.py.

$ download_magic.py -f zmab0001193tmp02.txt  
working on:  ’er_locations’  
er_locations  data put in  ./er_locations.txt  
working on:  ’er_sites’  
er_sites  data put in  ./er_sites.txt  
working on:  ’er_samples’  
er_samples  data put in  ./er_samples.txt  
...  
$ lnp_magic.py -f pmag_specimens.txt -crd g  
sv01  
Site lines planes  kappa   a95   dec   inc  
sv01 0  5     286      6.6    179.0    -54.3  4.9948  
tilt correction:  0  
s[a]ve plot, [q]uit, <return> to continue:  
 a  
1  saved in  sv01_g_eqarea.svg  

which generated this figure:

PIC

5.2.62 lowrie.py

[Essentials Chapter 8] [lowrie docs]

Someone (Saiko Sugisaki) subjected a number of specimens from IODP Expedition 318 Hole U1359A to a 3-D IRM experiment and saved the data in the SIO format. Use lowrie.py to make plots of blocking temperature for the three coercivity fractions.

 
$ lowrie.py -f lowrie_example.dat  
318-U1359A-002H-1-W-109  
S[a]ve figure? [q]uit, <return> to continue   a  
1  saved in  lowrie:_318-U1359A-002H-1-W-109_.svg  
318-U1359A-002H-4-W-65  
S[a]ve figure? [q]uit, <return> to continue   q

which produces plots like this:

PIC

5.2.63 lowrie_magic.py

[Essentials Chapter 8] and [MagIC Database] [lowrie_magic docs]

This program works exactly like lowrie.py but operates on magic_measurements.txt formatted files. Use sio_magic.py to import the lowrie_example.dat data file into the MagIC format. Then use lowrie_magic.py to plot the data:

$ sio_magic.py -f lowrie_example.dat -LP I3d -loc U1359A -F lowrie_magic_example.dat  
averaging: records  11 12  
318-U1359A-004H-2-W-18 started with  15  ending with  14  
Averaged replicate measurements  
averaging: records  13 14  
318-U1359A-004H-2-W-44 started with  15  ending with  14  
Averaged replicate measurements  
averaging: records  14 16  
318-U1359A-012H-5-W-70 started with  16  ending with  14  
Averaged replicate measurements  
averaging: records  14 15  
$ lowrie_magic.py -f lowrie_magic_example.dat  
318-U1359A-002H-1-W-109  
S[a]ve figure? [q]uit, <return> to continue

5.2.64 magic_select.py

[MagIC Database] [magic_select docs]

This program takes any MagIC formatted file and selects lines that match the search criteria, saving the output as a new MagIC formatted file of the same type. Selection criteria can match whole or partial strings, avoid whole or partial strings, or evaluate minimum or maximum values. The syntax is to specify the input file with the -f option and the key you wish to search on with -key, followed by the string you wish to use as a criterion and one of: 'T' to match exactly, 'F' to avoid entirely, 'has' to match partially, 'not' to avoid any partial match, or 'min', 'max' and 'eval' to evaluate the numerical value. The -F option sets the output file.

For example, you could pick out all the records that match a certain site name string. You could select out records with specified method codes. You could get all records with inclinations that are positive.

Use magic_select.py to pick out all the best-fit lines based on AF demagnetization data from the pmag_specimens.txt file unpacked from the MagIC database in the download_magic.py example.

$ magic_select.py -f pmag_specimens.txt -key magic_method_codes LP-DIR-AF has -F AF_specimens.txt  
76  records written to  ./AF_specimens.txt  
  $ magic_select.py -f AF_specimens.txt -key magic_method_codes DE-BFL has -F AF_BFL_specimens.txt  
69  records written to  ./AF_BFL_specimens.txt  
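Under the hood, each selection mode is a simple predicate applied to one column of every record. Here is a rough Python sketch of that matching logic (a hypothetical helper, not the magic_select.py source; the numerical min/max semantics are an assumption):

def match(value, criterion, mode):
    # compare one record's column value against the criterion (sketch)
    if mode == 'T':
        return value == criterion            # match exactly
    if mode == 'F':
        return value != criterion            # avoid entirely
    if mode == 'has':
        return criterion in value            # partial match
    if mode == 'not':
        return criterion not in value        # avoid any partial match
    if mode == 'min':
        return float(value) >= float(criterion)
    if mode == 'max':
        return float(value) <= float(criterion)
    raise ValueError('unknown mode: %s' % mode)

# e.g. keep the AF demagnetization records:
# af_recs = [r for r in records if match(r.get('magic_method_codes', ''), 'LP-DIR-AF', 'has')]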
 
 

5.2.65 make_magic_plots.py

[MagIC Database] [make_magic_plots docs]

This program will inspect the files in the current directory and autogenerate some standard plots.

5.2.66 Measurement Import Scripts

_2g_bin_magic.py

[_2g_bin_magic.py docs]

This program imports the binary format generated by the 2G proprietary software. It will place the orientation information in the er_samples.txt file and er_sites.txt file by default. Naming and orientation conventions can be specified as in the sections on Naming conventions and Orientation conventions. You can also specify sampling method codes that pertain to ALL samples (see section on field information.)

agm_magic.py

[Essentials Chapter 5] & [MagIC Database] [agm_magic docs]

This program imports Micromag hysteresis files into magic_measurements formatted files. Because this program imports data into the MagIC database, specimens also need to have sample/site/location information, which can be provided on the command line. If this information is not available, for example if this is a synthetic specimen, specify -syn SYN_NAME for synthetic on the command line instead of the -spn SPEC_NAME switch.

Someone named Lima Tango has measured a synthetic specimen named myspec for hysteresis and saved the data in a file named agm_magic_example.agm. The backfield IRM curve for the same specimen was saved in data_files/Measurement_Import/agm_magic_example.irm. Use the program agm_magic.py to import the data into measurements formatted output files. These were measured using cgs units, so be sure to set the units switch properly. Combine the two output files together using combine_magic.py. The data can be plotted using hysteresis_magic.py or quick_hyst.py; hysteresis_magic.py will also calculate various hysteresis parameters and put them in the relevant magic tables for you.

$ agm_magic.py -spn myspec --usr "Lima Tango" -f agm_magic_example.agm -u cgs  
 
results put in  ./agm_measurements.txt  
$ agm_magic.py -spn myspec --usr "Lima Tango" -f agm_magic_example.irm -u cgs -bak  
 
results put in  ./irm_measurements.txt  
$ combine_magic.py -F magic_measurements.txt -f agm_measurements.txt irm_measurements.txt  

Note: agm_magic.py does not currently support MagIC 3, but will soon.

cit_magic.py

[cit_magic.py docs]

Craig Jones' PaleoMag software package (http://cires.colorado.edu/people/jones.craig/PMag3.html) imports various file formats, including the 'CIT' format developed for the Caltech lab and now utilized in the magnetometer control software that ships with 2G magnetometers with vertical sample changer systems. The documentation for the CIT sample format is here: http://cires.colorado.edu/people/jones.craig/PMag_Formats.html#SAM_format. Demagnetization data for each specimen are in their own file in a directory with all the data for a site or study. These files are strictly formatted, with fields determined by the character number in the line. A datafile generated in the UC Berkeley paleomagnetism lab, for a specimen published in Fairchild et al. (2016) (available in the MagIC database: https://earthref.org/MagIC/11292/), has this format:

PI47- 9a mag compass orientation (IGRF corrected)  
      0 328.1  21.0  0.0  0.0  1.0  
NRM    103.2  66.0 103.2  66.0 8.81E-05 001.7 205.8  48.5 0.074307 0.216786 0.124038 hargrave 2015-09-16 18:32:38  
TT  50 103.2  66.5 103.2  66.5 8.74E-05 001.4 205.5  49.1 0.104540 0.153198 0.116564 hargrave 2015-09-19 18:43:09  
TT  75 106.2  65.9 106.2  65.9 8.53E-05 001.6 207.6  49.0 0.073179 0.193176 0.109758 hargrave 2015-09-21 22:26:07  
TT 100 107.6  64.5 107.6  64.5 8.18E-05 001.9 209.2  47.9 0.050586 0.250049 0.109796 hargrave 2015-09-24 10:00:09  
TT 125 103.9  63.0 103.9  63.0 7.49E-05 001.1 207.9  45.9 0.048136 0.090764 0.100753 hargrave 2015-09-24 15:53:38  
TT 150 107.5  60.3 107.5  60.3 7.01E-05 001.4 211.6  44.1 0.048329 0.118106 0.116416 hargrave 2015-09-25 12:36:42  
TT 175 114.0  57.5 114.0  57.5 6.55E-05 001.4 217.4  42.8 0.094454 0.066698 0.103792 hargrave 2015-09-26 14:01:11  
TT 200 110.5  57.5 110.5  57.5 6.33E-05 001.5 215.0  42.1 0.117744 0.073840 0.087541 hargrave 2015-09-26 19:58:01  
TT 225 114.7  55.8 114.7  55.8 6.05E-05 002.9 218.7  41.4 0.163297 0.240928 0.100647 hargrave 2015-09-28 19:49:38  
TT 250 110.5  56.1 110.5  56.1 5.88E-05 001.3 215.7  40.8 0.085600 0.041973 0.101365 hargrave 2015-10-01 12:08:19  
TT 275 114.0  54.3 114.0  54.3 5.35E-05 002.9 219.0  39.8 0.121358 0.235405 0.081647 hargrave 2015-10-05 19:00:28  
TT 300 113.7  53.9 113.7  53.9 4.88E-05 001.2 219.0  39.4 0.044699 0.063687 0.074005 hargrave 2015-10-06 02:35:35

There must be a file with the suffix ‘.sam’ in the same directory as the specimen data files which gives details about the specimens and a list of the specimen files in the directory. The .sam file has the form:

PI47-  
 48.7 -87.0   0.0  
PI47-1a  
PI47-2a  
PI47-3a  
PI47-4a  
PI47-5a  
PI47-6a  
PI47-7a  
PI47-8a  
PI47-9a

The first line is a comment (in this case the site name), the second is the latitude and longitude followed by a declination correction. In these data, the declination correction was applied to the specimen orientations so the value of the declination correction is set to be 0.

For detailed description of the file format, check the PaleoMag website.
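Reading a .sam file yourself is straightforward; here is a minimal Python sketch under the layout just described (a hypothetical helper, not part of PmagPy):

def read_sam(path):
    with open(path) as f:
        lines = [line.strip() for line in f if line.strip()]
    comment = lines[0]                      # first line: comment, here the site name
    lat, lon, decl_corr = [float(x) for x in lines[1].split()[:3]]
    specimen_files = lines[2:]              # one specimen data file per line
    return comment, lat, lon, decl_corr, specimen_files

# comment, lat, lon, dcorr, specs = read_sam('PI47-.sam')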

Use the program cit_magic.py to import the data files from the example data files in the CIT_magic directory of the Measurement_Import directory of the data_files directory. The location name was "Slate Islands", the naming convention was #2, the specimen name is specified with 1 character (-spc 1), we don't wish to average replicate measurements (-A), and the samples were collected by drilling and oriented with a magnetic compass (-mcd "FS-FD:SO-MAG").

$ cit_magic.py -f PI47-.sam -loc "Slate Islands" -spc 1 -ncn 2 -mcd "FS-FD:SO-MAG" -A  
 
PI47-  
 
9  records written to file  ./er_specimens.txt  
specimens stored in  ./er_specimens.txt  
9  records written to file  ./er_samples.txt  
samples stored in  ./er_samples.txt  
1  records written to file  ./er_sites.txt  
sites stored in  ./er_sites.txt  
266  records written to file  ./magic_measurements.txt  
data stored in  ./magic_measurements.txt

These data can now be viewed and interpreted using, for example zeq_magic.py.

generic_magic.py

[generic_magic.py docs]

If you have a data file format that is not supported, you can relabel column headers to fit the generic format in generic_magic.py:

specimen  treatment  treatment_type  moment    dec_s  inc_s
sr01e2    0          A               2.93E-02  200.6  27.9
sr01e2    10         A               2.83E-02  201    27.7
sr01e2    20         A               2.61E-02  200.5  27.4
sr01e2    30         A               2.37E-02  199.9  27.7
sr01e2    60         A               1.85E-02  202.5  27.9
.
.

For more options, here is the help message for generic_magic.py :

        A generic file is a tab-delimited file. Each columns should have a header.  
        The file must include the follwing headers (the order of the columns is not important):  
 
            specimen  
                string specifying specimen name  
 
            treatment:  
                a number with one or two decimal point (X.Y)  
                coding for thermal demagnetization:  
                    0.0 or 0 is NRM.  
                    X is temperature in celcsius  
                    Y is always 0  
                coding for AF demagnetization:  
                    0.0 or 0 is NRM.  
                    X is AF peak field in mT  
                    Y is always 0  
                coding for Thellier-type experiment:  
                    0.0 or 0 is NRM  
                    X is temperature in celcsius  
                    Y=0: zerofield  
                    Y=1: infield  
                    Y=2: pTRM check  
                    Y=3: pTRM tail check  
                    Y=4: Additivity check  
                    # Ron, Add also 5 for Thellier protocol  
 
            treatment_type:  
                N: NRM  
                A: AF  
                T: Thermal  
 
            moment:  
                magnetic moment in emu !!  
 
        In addition, at least one of the following headers are required:  
 
            dec_s:  
                declination in specimen coordinate system (0 to 360)  
            inc_s:  
                inclination in specimen coordinate system (-90 to 90)  
 
            dec_g:  
                declination in geographic coordinate system (0 to 360)  
            inc_g:  
                inclination in geographic coordinate system (-90 to 90)  
 
            dec_t:  
                declination in tilt-corrected coordinate system (0 to 360)  
            inc_t:  
                inclination in tilt-corrected coordinate system (-90 to 90)  
 

This program is called within Pmag GUI but could be used as a stand-alone program if you wish.
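For instance, decoding the X.Y treatment codes described in the help message might look like the following sketch (an illustration of the convention above, not PmagPy code):

def decode_treatment(treatment):
    # split an X.Y code into a level (temperature in C or AF peak field in mT)
    # and a step label
    x, _, y = str(treatment).partition('.')
    level = float(x)
    labels = {'0': 'zero field', '1': 'infield (pTRM acquisition)',
              '2': 'pTRM check', '3': 'pTRM tail check', '4': 'additivity check'}
    if level == 0:
        return level, 'NRM'
    return level, labels.get(y[:1] or '0', 'unknown')

print(decode_treatment('500.1'))   # (500.0, 'infield (pTRM acquisition)')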

huji_magic.py

[huji_magic.py docs]

This program was developed for the late Prof. Hagai Ron at the Hebrew University, Jerusalem for files with formats like this:

mk118 07R-08329 5/10/2007 7:35 N 0 10.4 73.9 10.4 73.9 2.23E-03  
mk118 07R-08330 5/10/2007 7:37 A 5 359.2 60 359.2 60 2.19E-03  
mk118 07R-08331 5/10/2007 7:38 A 10 358.4 48 358.4 48 1.50E-03  
mk118 07R-08332 5/10/2007 7:40 A 15 357.7 43.5 357.7 43.5 8.34E-04  
mk118 07R-08333 5/10/2007 7:42 A 20 358.4 42.5 358.4 42.5 4.89E-04  
mk118 07R-08334 5/10/2007 7:44 A 25 358.8 40.6 358.8 40.6 3.06E-04  
mk118 07R-08335 5/10/2007 7:46 A 30 356.6 41.1 356.6 41.1 2.16E-04  
mk118 07R-08336 5/10/2007 7:48 A 40 357 39.3 357 39.3 1.39E-04  
mk118 07R-08337 5/10/2007 7:50 A 50 357 40 357 40 1.03E-04  
mk118 07R-08338 5/10/2007 7:53 A 60 351.7 46.9 351.7 46.9 9.64E-05  
mk118 07R-08339 5/10/2007 7:55 A 70 350.7 48.7 350.7 48.7 8.89E-05  
mk118 07R-08340 5/10/2007 7:58 A 80 353.8 38.8 353.8 38.8 6.77E-05

huji_magic.py works in a similar fashion to sio_magic.py.

ldeo_magic.py

[MagIC Database] [ldeo_magic docs]

This program works in much the same fashion as sio_magic.py but the file formats are different. Here is a typical example:

isaf2.fix  
LAT:   .00  LON:    .00  
    ID     TREAT  I  CD    J    CDECL CINCL  GDECL GINCL  BDECL BINCL  SUSC  M/V  
________________________________________________________________________________  
is031c2       .0  SD  0 461.600 163.9  17.5  337.1  74.5  319.1  74.4    .0   .0

The first line is the file name and the second has the latitude and longitude information. The columns are:

        ID: specimen name  
        TREAT:  treatment step  
        I:  Instrument  
        CD:  Circular standard deviation  
        J: intensity.  assumed to be total moment in 10^-4 (emu)  
        CDECL:  Declination in specimen coordinate system  
        CINCL:  Inclination in specimen coordinate system  
        GDECL:  Declination in geographic coordinate system  
        GINCL:  Inclination in geographic coordinate system  
        BDECL:  Declination in bedding adjusted coordinate system  
        BINCL:  Inclination in bedding adjusted coordinate system  
        SUSC:  magnetic susceptibility (in micro SI)  
        M/V: mass or volume for normalizing (0 won't normalize)

Use the program to import the file ldeo_magic_example.dat into the MagIC format. It was an AF demagnetization experiment.

$ ldeo_magic.py -f ldeo_magic_example.dat -LP AF  
results put in  magic_measurements.txt

iodp_csv_magic.py

Just for fun, download the whole core data from IODP Expedition 318, Site U1359, Hole A (Section Half set to 'A') from the IODP LIMS database using WebTabular. These data were used in Tauxe et al. (2012) after some editing (removal of core ends and disturbed intervals). Convert them into the MagIC format using iodp_csv_magic.py:

$ iodp_csv_magic.py -f SRM_318_U1359_A_A.csv  
processing:  SRM_318_U1359_A_A.csv  
specimens stored in  ./er_specimens.txt  
samples stored in  ./er_samples.txt  
......  
Averaged replicate measurements  
data stored in  ./magic_measurements.txt

This takes a while so be patient....

You can also download the core summary data from the LIMS database and convert it to the MagIC format using Pmag GUI. Then you can plot the data and the core tops using core_depthplot.py.

pmd_magic.py

[pmd_magic.py docs]

This program imports .PMD formatted magnetometer files (used, for example, in the PaleoMac software of Cogné, 2003) into the MagIC format. (See http://www.ipgp.fr/~cogne/pub/paleomac/PMhome.html for the PaleoMac home page.) The version of these files that pmd_magic.py expects (the UCSC version) contains demagnetization data for a single specimen and has a format like this:

Strat post corrected - Summit Springs, -  
ss0101a   a= 93.9   b= 61.0   s=  0.0   d=  0.0   v=11.0E-6m3  06/17/2003 12:00  
STEP  Xc [Am2]  Yc [Am2]  Zc [Am2]  MAG[A/m]   Dg    Ig    Ds    Is  a95  
H000 -7.46E-06 -1.61E-05 +5.31E-05 +5.09E+00  73.3  35.2  73.3  35.2  0.5  cryoSlug  
H002 -7.57E-06 -1.53E-05 +5.03E-05 +4.83E+00  73.2  35.7  73.2  35.7  0.7  cryoSlug  
H005 -7.35E-06 -1.16E-05 +3.62E-05 +3.52E+00  71.5  38.3  71.5  38.3  0.7  cryoSlug  
H008 -7.35E-06 -6.27E-06 +1.92E-05 +1.95E+00  68.5  47.1  68.5  47.1  0.7  cryoSlug  
H012 -6.84E-06 -3.32E-06 +1.20E-05 +1.29E+00  69.1  56.1  69.1  56.1  0.6  cryoSlug  
H018 -6.00E-06 -2.26E-06 +8.96E-06 +1.00E+00  69.3  60.5  69.3  60.5  0.6  cryoSlug  
H025 -5.16E-06 -1.74E-06 +7.18E-06 +8.19E-01  69.2  62.5  69.2  62.5  0.5  cryoSlug  
H050 -3.12E-06 -9.23E-07 +4.03E-06 +4.71E-01  69.3  64.6  69.3  64.6  0.5  cryoSlug  
H090 -1.80E-06 -5.32E-07 +2.23E-06 +2.65E-01  67.6  65.6  67.6  65.6  0.5  cryoSlug  
H180 -9.49E-07 -2.94E-07 +1.21E-06 +1.42E-01  67.7  64.8  67.7  64.8  0.5  cryoSlug

The first line is a comment line. The second line has the specimen name, the core azimuth (a=) and plunge (b=), which are assumed to be the lab arrow azimuth and plunge (orientation scheme #4). The third line is a header explaining the columns in the file.

Use pmd_magic.py to convert the file ss0101a.pmd in the 'PMD' directory of the 'PMD_magic' folder of the Measurement_import directory in the example data_files directory. These were taken at a location named 'Summit Springs' and have a naming convention of the type XXXX[YYY], where YYY is the sample designation with Z characters from site XXX, or naming convention #4-2. A single character distinguishes the specimen from the sample (-spc 1). All samples were oriented with a magnetic compass.

$ pmd_magic.py -f ss0101a.pmd -loc "Summit Springs" -ncn 4-2 -spc 1 -mcd SO-MAG -F ss0101a.magic  
 results put in  ./magic_measurements.txt  
sample orientations put in  ./er_samples.txt

Because each file must be imported separately, you should use a different name for the output file for each input file (otherwise you will overwrite the default each time) and set the switch for sample file to append for subsequent imports:

$ pmd_magic.py -f ss0102a.pmd -loc "Summit Springs" -ncn 4-2 -spc 1 -mcd SO-MAG -F ss0102a.magic -Fsa er_samples.txt  
sample information will be appended to  ./er_samples.txt  
results put in  ./magic_measurements.txt  
sample orientations put in  ./er_samples.txt

After you finish importing all the data, combine the individual files together with combine_magic.py and look at them with, for example, zeq_magic.py.

sio_magic.py

[sio_magic docs]

The program sio_magic.py converts SIO format magnetometer files to the MagIC common measurements format. It handles various experiment types, so read the help message. The SIO format is a space delimited file:

sc12b1      0.00        0.3      2.878E-2      185.0      9.3  
sc12b1      5.00        0.3      2.864E-2      187.7      9.6  
sc12b1      10.00      0.3      2.833E-2      185.5      10.0  
sc12b1      15.00      0.2      2.771E-2      188.2      10.0  
sc12b1      20.00      0.4      2.644E-2      187.6      10.1  
sc12b1      30.00      0.2      2.182E-2      186.0      9.9  
sc12b1      40.00      0.4      1.799E-2      185.3      9.9  
sc12b1      50.00      0.2      1.570E-2      187.9      9.6  
sc12b1      60.00      0.2      1.423E-2      185.6      10.0  
sc12b1      80.00      0.1      1.275E-2      187.4      9.5

The columns are:

Specimen treatment intensity declination inclination optional_string

The treatment field is the temperature (in centigrade), the AF field (in mT), the impulse field strength, etc. For special experiments like IRM acquisition, the coil number of the popular ASC impulse magnetizer can be specified if the treatment steps are in volts. The position for anisotropy experiments, or whether the treatment is "in-field" or in zero field, also requires special formatting. The units of the intensity field are in cgs and the directions are relative to the 'lab arrow' on the specimen. Here are some examples of commonly used specimens and conversions from field arrow to lab arrow.

PIC

As an example, we use data from Sbarbori et al. (2009) measured on a set of samples from the location "Socorro", including AF, thermal, and Thellier experimental data. These were saved in sio_af_example.dat, sio_thermal_example.dat, and sio_thellier_example.dat respectively. The lab field for the Thellier experiment was 25 μT and was applied along the specimen's Z axis (phi=0, theta=90). Convert the example files into magic_measurement formatted files with names like af_measurements.txt, etc. Then combine them together with combine_magic.py:

 
Note:  all of these should actually be on ONE line:  
 
$ sio_magic.py -f sio_af_example.dat -F af_measurements.txt  
    -LP AF -spc 1 -loc Socorro  
 
averaging: records  11 13  
averaging: records  14 16  
averaging: records  17 19  
averaging: records  20 22  
sc12b1 started with  22  ending with  14  
Averaged replicate measurements  
results put in  af_measurements.txt  
 
 
    $ sio_magic.py -f sio_thermal_example.dat -F thermal_measurements.txt  \  
    -LP T -spc 1 -loc Socorro  
averaging: records  6 7  
sc12b2 started with  23  ending with  22  
Averaged replicate measurements  
results put in  thermal_measurements.txt  
 
 
$ sio_magic.py -f sio_thellier_example.dat -F  thellier_measurements.txt  \  
      -LP T -spc 1 -loc Socorro -dc 25 0 90  
averaging: records  10 11  
sc12b2 started with  56  ending with  55  
Averaged replicate measurements  
results put in  thellier_measurements.txt  
 
 
$ combine_magic.py -F magic_measurements.txt -f  thellier_measurements.txt   \  
       thermal_measurements.txt  af_measurements.txt  
 
File  ./thermal_measurements.txt  read in with  22  records  
File  ./thellier_measurements.txt  read in with  55  records  
File  ./af_measurements.txt  read in with  14  records  
All records stored in  ./magic_measurements.txt  
 
 

The data in these files can be plotted and interpreted with dmag_magic.py, zeq_magic.py, or thellier_magic.py depending on the experiment.

Note that there are more examples of data file formats and import schemes in the sections on anisotropy of anhysteretic and thermal remanences.

sufar4_asc_magic.py

[Essentials Chapter 13]; [MagIC Database] [sufar4_asc_magic docs]

The Agico Kappabridge instrument comes with the SUFAR program which makes the measurements and saves the data in a txt file like that in sufar4_asc_magic_example.txt in the sufar4_asc_magic directory. These data were measured on a KLY4S instrument with a spinning mode. Import them into the MagIC format:

$ sufar4_asc_magic.py -f sufar4-asc_magic_example.txt -loc U1356A -spc 0 -ncn 5  
anisotropy tensors put in  ./rmag_anisotropy.txt  
bulk measurements put in  ./magic_measurements.txt  
specimen info put in  ./er_specimens.txt  
sample info put in  ./er_samples.txt  
site info put in  ./er_sites.txt  
 
         You can now import your data into the Magic Console and complete data entry,  
         for example the site locations, lithologies, etc. plotting can be done  
          with aniso_magic.py

tdt_magic.py

[Essentials Chapter 10] and [MagIC] [tdt_magic docs]

This program imports the default data format for the ThellierTool Program of Leonhardt et al. (2004). After importing, the data can be viewed with thellier_magic.py.

The file format of the ThellierTool .tdt format is:

Thellier-tdt  
50.12 0 0 0 0  
2396D 0.00 1821.50 4.6 49.4  
2396D 200.00 1467.5 7.4 44.5  
2396D 200.11 1808.3 4.3 34.4  
2396D 200.13 1421.89 6.1 46  
2396D 230.00 1317.69 6.9 45.3  
2396D 230.11 1711.2 4.3 32.9  
2396D 260.00 1150.89 8.4 44.7  
2396D 200.12 1475.92 4.3 34.7  
...

The first two lines are headers. The first column of the second line is the applied field in μT; the rest of this line is the azimuth and plunge of the fiducial line and the dip direction and dip of the bedding plane. The data columns are: specimen, treatment, intensity, declination and inclination. The intensity is the magnetization in 10^-3 A/m. The treatment is of the form XXX.YY, where XXX is the temperature and YY is 00, 11, 12, 13, 14 (or 0, 1, 2, 3, 4): 0 is the NRM step, 11/1 is the pTRM acquisition step, 12/2 is the pTRM check step, 13/3 is the pTRM tail check and 14/4 is the additivity check.

Use the program tdt_magic.py to import the ThellierTool tdt formatted file tdt_magic_example.dat into the MagIC format. The field was 50.12 μT applied along the 0,0 direction.

$ tdt_magic.py -f tdt_magic_example.dat -loc TEST -dc 50.12 0 0  
results put in  magic_measurements.txt

5.2.67 measurements_normalize.py

[MagIC Database] [measurements_normalize docs]

This program takes specimen weights or volumes from an er_specimens.txt formatted file and normalizes the magnetic moment data to make magnetizations. Weights must be in kilograms and volumes in m3.
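The normalization itself is just a division, but the units matter; a one-function sketch with hypothetical variable names:

def normalize(moment_Am2, weight_kg=None, volume_m3=None):
    # volume-normalized magnetization is in A/m; mass-normalized is in Am^2/kg
    if volume_m3:
        return moment_Am2 / volume_m3
    return moment_Am2 / weight_kg

# e.g. a moment of 3.5e-3 Am^2 from a 0.010 kg specimen gives 0.35 Am^2/kg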

Use the program measurements_normalize.py to generate weight normalized magnetizations for the data in the file magic_measurements.txt in the measurements_normalize directory. Use the specimen weights in the file specimen_weights.txt.

$ magic_select.py -f magic_measurements.txt \  
     -key magic_method_codes LP-IRM has -F irm_measurements.txt  
205  records written to  ./irm_measurements.txt  
$ measurements_normalize.py -f irm_measurements.txt \  
      -fsp specimen_weights.txt -F irm_norm_measurements.txt  
Data saved in  ./irm_norm_measurements.txt

5.2.68 mk_redo.py

MagIC [mk_redo docs]

The programs zeq_magic.py and thellier_magic.py, as well as the two GUIs Demag GUI and Thellier GUI, make pmag_specimen formatted files which can be used for further data reduction, either by plotting or by contributing to site means, etc. Sometimes it is useful to redo the calculation, using anisotropy corrections or a change in coordinate systems, etc. The re-doing of these specimen level calculations is handled by, for example, zeq_magic_redo.py or thellier_magic_redo.py. These programs use magic_measurements formatted files and perform calculations as dictated by a "redo" file, which has the specimen name, bounds for the calculation and, in the case of demagnetization data interpretation, the type of calculation desired (best-fit lines with the MagIC method code DE-BFL, best-fit planes with DE-BFP, etc.).

Make “redo” files from the existing pmag_specimen formatted file in the data files downloaded from the MagIC website as in download_magic.py and examine them as follows:

$ mk_redo.py  
$ cat zeq_redo  
sr01a1 DE-BFL 473 823 A  
sr01a2 DE-BFL 473 848 A  
sr01c2 DE-BFL 473 823 A  
sr01d1 DE-BFL 373 798 A  
.......  
$ cat thellier_redo  
sr01a1 573 823  
sr01a2 573 848  
sr01c2 673 823  
sr01d1 373 823  
 
.....

Note that the temperature steps are in kelvin and the AF demagnetization steps are in tesla, as required by the MagIC database.

5.2.69 nrm_specimens_magic.py

MagIC [nrm_specimens_magic docs]

After making NRM measurements, it is frequently useful to look at the directions in equal area projection to get a "quick look" at the results before proceeding to stepwise demagnetization. The data in the magic_measurements files are usually in specimen coordinates, not geographic, so we need a way to rotate the data into geographic and/or stratigraphic coordinates and save them in a pmag_specimens formatted file for plotting with eqarea_magic.py. The program nrm_specimens_magic.py will do this for you.

Get into the directory you made for the download_magic.py example.
Use nrm_specimens_magic.py to convert the NRM measurements in magic_measurements.txt to geographic coordinates saved in a file named nrm_specimens.txt. The orientation data are in the file er_samples.txt. Then plot the specimen directions for the entire study using eqarea_magic.py:

$ nrm_specimens_magic.py -crd g  
er_samples.txt  read in with  271  records  
Data saved in  nrm_specimens.txt  
 
$ eqarea_magic.py -f nrm_specimens.txt  
177  records read from  ./nrm_specimens.txt  
All  
sr01a1 SO-SUN   324.1    66.0  
sr01a2 SO-SUN   325.3    62.1  
sr01c2 SO-SUN   344.7    63.8  
sr01d1 SO-CMD-NORTH   327.8    64.5  
sr01e2 SO-SUN   333.7    67.0  
...  
 
 

The first command created a file nrm_specimens.txt and the second created an equal area projection of the NRM directions in geographic coordinates which should look like this:

PIC

5.2.70 orientation_magic.py

[Essentials Chapter 9] and [Preparing for MagIC] [orientation_magic docs]

Import the file orient_example.txt into the er_samples.txt and er_sites.txt files using orientation_magic.py. Click here for details about the orient.txt file format. It has field information for a few sites. The samples were oriented with a Pomeroy orientation device (the default) and it is desirable to calculate the magnetic declination from the IGRF at the time of sampling (also the default). Sample names follow the rule that the sample is designated by a letter at the end of the site name (convention #1, which is the default). So we do this by:

$ orientation_magic.py -f orient_example.txt  
saving data...  
Data saved in  ./er_samples.txt  and  ./er_sites.txt  

5.2.71 parse_measurements.py

This program reads in the magic_measurements.txt file and creates an er_specimens.txt file. The specimen volumes and/or weights can then be put in columns labeled specimen_volume and specimen_weight respectively. Volumes must be in m3 and weights in kg. (Yes you can do the math...).

Try this out on the magic_measurements.txt file created in the irmaq_magic.py example. Pretend you have a bunch of specimen weights you want to use to normalize the NRM with the program measurements_normalize.py.

$ parse_measurements.py  
site record in er_sites table not found for:  318-U1359A-002H-1-W-109  
site record in er_sites table not found for:  318-U1359A-002H-2-W-135  
site record in er_sites table not found for:  318-U1359A-002H-3-W-45  
site record in er_sites table not found for:  318-U1359A-002H-4-W-65  
.....

The program appears a bit flustered because you have no er_sites.txt file in this directory. If you DID, you would overwrite whatever site name was in that file onto the specimen table. This allows you to carry the changes in that table through to the specimen table (see orientation_magic.py.)

5.2.72 pca.py

[Essentials Chapter 11] [pca docs]

This program calculates best-fit lines, planes or Fisher averages through selected treatment steps. The file format is a simple space delimited file with specimen name, treatment step, intensity, declination and inclination. Calculate the best-fit line through the first ten treatment steps in the data file zeq_example.dat:

 $ pca.py -dir L 1 10 -f zeq_example.dat  
eba24a DE-BFL  
0 0.00 339.9 57.9 9.2830e-05  
1 2.50 325.7 49.1 7.5820e-05  
...  
eba24a DE-BFL 10    2.50  70.00    8.8   334.9    51.5

According to the help message, this is: specimen name, calculation type, N, beg, end, MAD, declination and inclination. The calculation type is the MagIC method code for best-fit lines (see Essentials Appendix ??.)
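The calculation behind DE-BFL is principal component analysis in the manner of Kirschvink (1980). Here is a minimal numpy sketch for a free-floating (unanchored) best-fit line, written from the textbook description rather than the PmagPy source:

import numpy as np

def pca_line(decs, incs, ints):
    d, i = np.radians(decs), np.radians(incs)
    m = np.asarray(ints)
    X = np.column_stack([m * np.cos(i) * np.cos(d),
                         m * np.cos(i) * np.sin(d),
                         m * np.sin(i)])
    X = X - X.mean(axis=0)                   # center the demagnetization steps
    tau, V = np.linalg.eigh(np.dot(X.T, X))  # eigenvalues in ascending order
    mad = np.degrees(np.arctan(np.sqrt((tau[0] + tau[1]) / tau[2])))
    v = V[:, 2]                              # principal direction
    if np.dot(v, X[0] - X[-1]) < 0:          # point from the last step toward the NRM
        v = -v
    dec = np.degrees(np.arctan2(v[1], v[0])) % 360.0
    inc = np.degrees(np.arcsin(v[2]))
    return mad, dec, inc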

5.2.73 plotxy.py

[plotxy docs]

This program makes a simple XY plot from any arbitrary input file. You can specify which columns are X and Y, bounds on the columns, symbol color and size, axis labels and other options.

$ plotxy.py -f plotXY_example.txt

5.2.74 plot_cdf.py

[plot_cdf docs]

This program plots the cumulative distribution function of a single column of input data. As an example, use a normally distributed set of 1000 data points generated by gaussian.py with the defaults of zero mean and a standard deviation of 1.

$ gaussian.py -n 1000 > gaussian.out  
  $ plot_cdf.py -f gaussian.out  
S[a]ve  plot, <Return> to quit a  
1  saved in  CDF_.svg

which should have generated a plot something like this:

PIC
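If you prefer to build the same figure directly in Python, an empirical CDF is just the sorted data plotted against i/N; a matplotlib sketch:

import numpy as np
import matplotlib.pyplot as plt

data = np.sort(np.loadtxt('gaussian.out'))
frac = np.arange(1, len(data) + 1) / float(len(data))  # cumulative fraction
plt.step(data, frac)
plt.xlabel('x')
plt.ylabel('Cumulative fraction')
plt.show()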

5.2.75 plot_magic_keys.py

MagIC [plot_magic_keys docs]

This program will plot any column in a specified MagIC formatted file against any other column in the same file.

5.2.76 plot_map_pts.py

high resolution instructions [plot_map_pts docs]

NOTE: This program only works if you have installed basemap (the full version of Canopy Python, as opposed to the free version); it may not work with free versions. plot_map_pts.py will generate a simple map of the data points in a file (lon lat) on the desired projection. If you want to use high resolution or the etopo20 meshgrid (-etp option), you must install the etopo20 data files, which you can do with install_etopo.py. There are many options, so check the documentation (-h option) for details.

Draw a set of 200 uniformly distributed points on the globe with the program uniform.py. Plot these on an orthographic projection with the viewing point at a longitude of 0 and a latitude of 30N. If you have installed the high resolution data sets, use the -etp option to plot the topographic mesh. Plot the points as large (size = 10) white dots. Also note that for some reason the .svg output does not play nicely with Illustrator.

$ uniform.py -n 200 >uniform.out  
$ plot_map_pts.py -f uniform.out -prj ortho -eye 30 0 -etp -sym wo 10  
please wait to draw points  
 S[a]ve to save plot, Return to quit:  a  
1  saved in  Map_PTS.pdf

which should produce a plot similar to this:

PIC

5.2.77 plotdi_a.py

[Essentials Chapter 11] [plotdi_a docs]

Place the following declination, inclination, α95 data in a space delimited file called plotdi_a_example.dat:

Dec    Inc    α95
39.1   37.5   5.0
30.3   36.2   15
29.9   45.6   7
34.6   28.4   3

Make a plot of these data using plotdi_a.py:

$ plotdi_a.py -f plotdi_a_example.dat  
 S[a]ve to save plot, [q]uit, Return to continue:  a  
1  saved in  eq.svg

which makes the plot:

PIC

5.2.78 pmag_results_extract.py

[MagIC] [pmag_results_extract docs]

This program extracts a tab delimited txt file from a pmag_results formatted file. This allows you to publish data tables that have identical data to the data uploaded into the MagIC database. Try this out in the directory created for the download_magic.py example:

$ pmag_results_extract.py -f pmag_results.txt  
data saved in:  ./Directions.txt ./Intensities.txt ./SiteNfo.txt

This creates tab delimited files that can be incorporated into a paper, for example. You can also export the information in LaTeX format if you prefer.

5.2.79 pt_rot.py

[Essentials Chapter 16] [pt_rot docs]

This program reads in a file with location, age and destination plate and rotates the data into the destination plate coordinates using the rotations and methods in Essentials Appendix A.3.5. Alternatively, you can supply your own rotation parameters with the -ff option.
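A finite rotation is just a rotation of the point's unit vector about an Euler pole. If you want to experiment with your own parameters, here is a minimal numpy sketch using the Rodrigues rotation formula (pole longitude/latitude and angle omega in degrees; a sketch, not the PmagPy implementation):

import numpy as np

def unit_vector(lon, lat):
    lon, lat = np.radians(lon), np.radians(lat)
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

def finite_rotation(lon, lat, pole_lon, pole_lat, omega):
    p = unit_vector(lon, lat)
    k = unit_vector(pole_lon, pole_lat)        # Euler pole = rotation axis
    w = np.radians(omega)
    # Rodrigues formula: rotate p by w about k
    r = (p * np.cos(w) + np.cross(k, p) * np.sin(w)
         + k * np.dot(k, p) * (1.0 - np.cos(w)))
    return (np.degrees(np.arctan2(r[1], r[0])) % 360.0,   # rotated longitude
            np.degrees(np.arcsin(r[2])))                  # rotated latitude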

First, save the location of Cincinnati (39.1 latitude, -84.7 longitude) in a file called pt_rot.input using either the UNIX cat function or your favorite text editor. Save the data as a lon/lat pair separated by a space. Then use the program pt_rot.py to rotate Cincinnati into (South) African (saf) coordinates at 80 Ma and plot both points using plot_mapPTS.py. Save the lon, lat, plate, age, destination_plate information in a file called pt_rot_example.dat, as follows:

$ awk '{print $1,$2}' pt_rot.input > lon_lat  
$ plot_mapPTS.py -prj moll -f lon_lat -sym g^ 10 -R -B -eye 0 0  
please wait to draw points  
 S[a]ve to save plot, Return to quit:  a  
1  saved in  Map_PTS.pdf  
$ pt_rot.py -f pt_rot.input pt_rot_example.dat  
299.763594614 34.6088989286  
$ pt_rot.py -f pt_rot_example.dat >pt_rot.out  
$ plot_mapPTS.py -f pt_rot.out -sym ro 10  
please wait to draw points  
 S[a]ve to save plot, Return to quit:  a  
1  saved in  Map_PTS.pdf

The two plots will look like these:

PIC

Now, use the program to rotate a selection of North American poles from 180-200 Ma (in the file nam_180-200.txt in the pt_rot directory) to Pangea A coordinates (finite rotation pole and angle in nam_panA.frp). Note that plot_mapPTS.py reads in longitude/latitude pairs, while pt_rot.py reads in latitude/longitude pairs.

$ plot_mapPTS.py -f nam_180-200.txt -prj ortho -eye 60 90 -sym c^ 10  
please wait to draw points  
 S[a]ve to save plot, Return to quit:  a  
1  saved in  Map_PTS.pdf  
 
$ pt_rot.py -ff nam_180-200.txt nam_panA.frp  
165.005966833 75.1055653383  
272.368103052 83.1542552681  
281.233199772 63.3738811186  
260.36973769 70.7986591555  
250.91392318 70.3067026787  
251.438099128 65.9369002914  
255.371917029 63.4115373574  
204.92422073 71.7944043036  
.....  
$ pt_rot.py -ff nam_180-200.txt nam_panA.frp > pt_rot_panA.out  
$ plot_mapPTS.py -f pt_rot_panA.out -prj ortho -eye 60 90 -sym ro 10  
please wait to draw points  
 S[a]ve to save plot, Return to quit:  a  
1  saved in  Map_PTS.pdf

These plots should look like this:

PIC

5.2.80 qqplot.py

[Essentials Appendix B.1.5] [qqplot docs]

Makes a quantile-quantile plot of the input data file against a normal distribution. The plot shows the mean, standard deviation and the D statistic, as well as the Dc statistic expected from a normal distribution. Use qqplot.py to test whether data generated with gaussian.py are in fact normally distributed. (They will be 95% of the time!)

$ gaussian.py  -F  gauss.out  
$ qqplot.py -f gauss.out  
mean,sigma, d, Dc  
0.02069909 0.849042146783 0.0528626896977 0.0886  
 S[a]ve to save plot, [q]uit without saving:  a  
1  saved in  qq.svg

which generates this plot:

PIC
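The D statistic is the largest gap between the empirical CDF and a normal CDF with the sample mean and standard deviation; a scipy sketch (the critical value Dc comes from tables and is not reproduced here):

import numpy as np
from scipy import stats

x = np.sort(np.loadtxt('gauss.out'))
n = len(x)
model = stats.norm.cdf(x, loc=x.mean(), scale=x.std(ddof=1))  # fitted normal CDF
D = max((np.arange(1, n + 1) / float(n) - model).max(),
        (model - np.arange(0, n) / float(n)).max())           # Kolmogorov D statistic
print('%f %f %f' % (x.mean(), x.std(ddof=1), D))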

5.2.81 quick_hyst.py

[Essentials Chapter 5] and [MagIC Database] [quick_hyst docs]

hysteresis_magic.py makes plots of hysteresis loops and calculates the main hysteresis parameters. For a quick look with no interpretation, you can use quick_hyst.py. Try it out on the data file hysteresis_magic_example.dat in the hysteresis_magic directory.

$ quick_hyst.py -f hysteresis_magic_example.dat -fmt svg  
IS06a-1 1 out of  8  
working on t:  273  
S[a]ve plots, [s]pecimen name, [q]uit, <return> to continue  
 a  
1  saved in  LO:__SI:_IS06_SA:_IS06a_SP:_IS06a-1_TY:_hyst_.svg  
IS06a-1 1 out of  8  
working on t:  273  
S[a]ve plots, [s]pecimen name, [q]uit, <return> to continue  
 q  
Good bye

which makes a plot like this:

PIC

5.2.82 revtest.py

[Essentials Chapter 12] [revtest docs]

Use the Tauxe et al. (2010) reversals test implemented in revtest.py to test whether the two modes in the data set revtest_example.txt are antipodal or not:

$ revtest.py -f revtest_example.txt  
doing first mode, be patient  
doing second mode, be patient  
s[a]ve plots, [q]uit: a  
2  saved in  REV_Y.svg  
1  saved in  REV_X.svg  
3  saved in  REV_Z.svg  

which produces these plots:

PIC

Because the 95% confidence bounds for each component overlap each other, the two directions are not significantly different.

5.2.83 revtest_magic.py

[Essentials Chapter 12] and MagIC [revtest_magic docs]

Same as revtest.py but for pmag_sites MagIC formatted files. Try it out on the data file revtest_sites.txt. Then try using customize_criteria.py to change or create a pmag_criteria.txt file that fits your needs and redo the reversals test using only the selected sites.

$ revtest_magic.py -f revtest_magic_example.txt  
doing first mode, be patient  
doing second mode, be patient  
s[a]ve plots, [q]uit: q  
$ revtest_magic.py -f revtest_sites.txt -exc

5.2.84 revtest_mm1990.py

[revtest_mm1990 docs]

This program tests whether two sets of observations (one with normal and one with reverse polarity) could have been drawn from distributions that are 180° apart, using the McFadden and McElhinny (1990) implementation of the Watson (1983) V statistic test for a common mean. The implementation of the V statistic test in this program is the same as in watsons_v.py. Use revtest_mm1990.py to perform a reversal test on data from the Ao et al. (2013) study of Early Pleistocene fluvio-lacustrine sediments of the Nihewan Basin of North China.

Let's plot the combined data set Ao_etal2013_combined.txt using eqarea.py:

$ eqarea.py -f Ao_etal2013_combined.txt

PIC

Conduct a reversal test between the normal directions Ao_etal2013_norm.txt and reversed directions Ao_etal2013_rev.txt:

$ revtest_mm1990.py -f Ao_etal2013_norm.txt -f2 Ao_etal2013_rev.txt  
be patient, your computer is doing 5000 simulations...  
 
Results of Watson V test:  
 
Watson’s V:           2.7  
Critical value of V:  6.0  
‘‘Pass‘‘: Since V is less than Vcrit, the null hypothesis that the  
 two populations are drawn from distributions that share a  
 common mean direction (antipodal to one another) cannot be rejected.  
 
M&M1990 classification:  
 
Angle between data set means: 2.9  
Critical angle for M&M1990:   4.3  
The McFadden and McElhinny (1990) classification for this test is: ‘A’

As with watsons_v.py, this program produces a plot that displays the cumulative distribution (CDF) of the Watson V values resulting from the Monte Carlo simulation. In the plot below, the red line is the CDF, the green solid line is the value of the V statistic calculated for the two populations and the dashed blue line is the critical value for the V statistic. In this example, the data are consistent with the two directional groups sharing a common mean as evidenced by V (green solid line) being a lower value than the critical value of V (dashed blue line).

PIC

5.2.85 s_eigs.py

[Essentials Chapter 13] [s_eigs docs]

Convert the .s format data in s_eigs_example.dat to eigenvalues and eigenvectors:

$ s_eigs.py -f s_eigs_example.dat  
0.33127186 239.53  44.70 0.33351338 126.62  21.47 0.33521473  19.03  37.54  
0.33177859 281.12   6.18 0.33218277 169.79  73.43 0.33603862  12.82  15.32  
0.33046979 283.57  27.30 0.33328307 118.37  61.91 0.33624712  16.75   6.13  
...
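This is just an eigen-decomposition of the symmetric tensor rebuilt from the six .s elements; a numpy sketch, assuming the usual element order x11, x22, x33, x12, x23, x13 (check the .s documentation if in doubt):

import numpy as np

def s_to_eigs(s):
    x11, x22, x33, x12, x23, x13 = s
    A = np.array([[x11, x12, x13],
                  [x12, x22, x23],
                  [x13, x23, x33]])
    tau, V = np.linalg.eigh(A)               # eigenvalues in ascending order
    for t, v in zip(tau, V.T):
        dec = np.degrees(np.arctan2(v[1], v[0])) % 360.0
        inc = np.degrees(np.arcsin(v[2]))
        print('%10.8f %7.2f %7.2f' % (t, dec, inc))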

5.2.86 s_geo.py

[Essentials Chapter 13] [s_geo docs]

Rotate the .s data into geographic coordinates using s_geo.py. The input format is the 6 tensor elements of s, followed by the azimuth and plunge of the X axis.

$ s_geo.py -f s_geo_example.dat  
0.33412680 0.33282733 0.33304587 -0.00015289 0.00124843 0.00135721  
0.33556300 0.33198264 0.33245432 0.00087259 0.00024141 0.00096166  
0.33584908 0.33140627 0.33274469 0.00131845 0.00118816 0.00002986  
...

5.2.87 s_hext.py

[Essentials Chapter 13] [s_hext docs]

Take the output from the s_geo.py example and calculate Hext statistics:

$ s_geo.py -f s_geo_example.dat | s_hext.py  
F =  5.79 F12 =  3.55 F23 =  3.66  
N =  8  sigma =  0.000641809950  
0.33505     5.3    14.7    25.5   124.5    61.7    13.3   268.8    23.6  
0.33334   124.5    61.7    25.1   268.8    23.6    25.5     5.3    14.7  
0.33161   268.8    23.6    13.3     5.3    14.7    25.1   124.5    61.7  

5.2.88 s_tilt.py

[Essentials Chapter 13] [s_tilt docs]

Rotate the .s data saved in s_tilt_example.dat into stratigraphic coordinates:

$ s_tilt.py -f s_tilt_example.dat  
0.33455709 0.33192658 0.33351630 -0.00043562 0.00092778 0.00105006  
0.33585501 0.33191565 0.33222935 0.00055959 -0.00005316 0.00064731  
0.33586669 0.33084923 0.33328408 0.00142267 0.00013233 0.00009202  
0.33488664 0.33138493 0.33372843 -0.00056597 -0.00039086 0.00004873  
.  
.

5.2.89 s_magic.py

[MagIC] [MagIC Database] [s_magic docs]

Import the .s formatted output from the s_tilt.py example into an rmag_anisotropy formatted file. Files in the rmag_anisotropy format can be plotted with aniso_magic.py. To see how this works, use the program s_magic.py as follows:

$ s_tilt.py -f s_tilt_example.dat >s_magic_example.dat  
$ s_magic.py -f s_magic_example.dat  
data saved in  ./rmag_anisotropy.txt

This creates the output file rmag_anisotropy.txt by default, which can be plotted with the program aniso_magic.py.

5.2.90 scalc.py

[Essentials Chapter 14] [scalc docs]

Calculate the S scatter statistic for a set of VGPs saved in scalc_example.txt. Repeat using the variable cutoff of Vandamme (1994). Then get the bootstrap bounds on the calculation.

$ scalc.py -f scalc_example.txt  
99    19.5     90.0  
$ scalc.py -f scalc_example.txt -v  
89    15.2     32.3  
$ scalc.py -f scalc_example.txt -v -b  
89    15.2    13.3     16.8    32.3

Using no cutoff, the VGP scatter was 19.5. The Vandamme co-latitude cutoff was 32.3, which threw out 10 points and gave a scatter of 15.2.
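
The statistic itself is simple: S squared is the sum of the squared angular distances between each VGP and the reference pole, divided by N-1. A minimal numpy sketch with hypothetical VGP latitudes, using the spin axis as the reference pole so that each distance is just the VGP co-latitude:

import numpy as np

# hypothetical VGP latitudes standing in for scalc_example.txt
vgp_lats = np.array([75., 82., 68., 88., 79.])
deltas = 90. - vgp_lats  # angular distance of each VGP from the reference pole
S = np.sqrt(np.sum(deltas**2) / (len(deltas) - 1))
print(f'S = {S:.1f}')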

5.2.91 scalc_magic.py

[Essentials Chapter 14] [scalc_magic docs]

This is the same as scalc.py but works on pmag_results formatted files. Try it out on the pmag_results.txt file in the directory created for the download_magic.py example. Use a VGP co-latitude cutoff of 30.

$ scalc_magic.py -f pmag_results.txt  -c 30  
13    17.8     30.0  

5.2.92 site_edit_magic.py

[site_edit_magic docs]

There are frequently outlier directions when looking at data at the site level. Some paleomagnetists throw out the entire site, while others arbitrarily discard individual samples, assuming that the orientations were ‘bad’. Lawrence et al. (2009) suggested a different approach in which common causes of misorientation are specifically tested. For example, if the arrow indicating drill direction (see orientation conventions) was drawn the wrong way on the sample in the field, the direction would be off in a predictable way. Similarly (and more commonly), if an extraneous mark on the sample was used instead of the correct brass mark, the directions will fall along a small circle that passes through the correct direction. The program site_edit_magic.py plots data by site on an equal area projection, allows the user to select a particular specimen direction, and plots the various pathological behaviors expected from common misorientations. If a stray direction can reasonably be assigned to a particular cause, then that sample orientation can be marked as ‘bad’ with a note as to the reason for exclusion; the directions from that sample are then excluded from the site mean, and the reason is documented in the MagIC database.

Try this out using the data downloaded in the download_magic.py example.

$ site_edit_magic.py  
....  
sr11  
specimen, dec, inc, n_meas/MAD,| method codes  
sr11a1:   340.6    74.1 8 / 0.9 | LP-DIR-AF:DE-BFL  
sr11b1:   353.9    70.3 10 / 2.1 | LP-DIR-AF:DE-BFL  
sr11c1:   342.6    69.4 8 / 1.3 | LP-DIR-AF:DE-BFL  
sr11e2:   341.5    73.4 8 / 0.9 | LP-DIR-AF:DE-BFL  
sr11f1:   136.6     9.4 14 / 3.5 | LP-DIR-T:DE-BFP  
sr11g2:   345.5    77.0 8 / 0.8 | LP-DIR-AF:DE-BFL  
sr11i1:    77.1     0.8 12 / 3.5 | LP-DIR-AF:DE-BFP  
sr11j3:    97.4   -13.2 8 / 11.8 | LP-DIR-AF:DE-BFL  
 
 Site lines planes  kappa   a95   dec   inc  
sr11 6  2       5     28.4     18.0     73.6  6.7445  
s[a]ve plot, [q]uit, [e]dit specimens, [p]revious site, <return> to continue:  
 e  
Enter name of specimen to check: sr11j3  
Triangle: wrong arrow for drill direction.  
Delta: wrong end of compass.  
Small circle:  wrong mark on sample. [cyan upper hemisphere]  
Mark this sample as bad? y/[n]  n  
Mark another sample, this site? y/[n]  n  
.....  
sr20  
no orientation data for  sr20j  
specimen, dec, inc, n_meas/MAD,| method codes  
sr20c1:   339.1    56.2 9 / 2.1 | LP-DIR-T:DE-BFL  
sr20d2:   341.9    60.3 9 / 1.7 | LP-DIR-AF:DE-BFL  
sr20e1:   332.1    55.3 10 / 1.0 | LP-DIR-AF:DE-BFL  
sr20f1:   337.1    54.0 9 / 3.7 | LP-DIR-T:DE-BFL  
sr20g1:   340.1    60.2 9 / 1.6 | LP-DIR-T:DE-BFL  
sr20i1:   330.4    31.9 11 / 0.6 | LP-DIR-AF:DE-BFL  
sr20i2:   330.7    31.1 9 / 2.3 | LP-DIR-T:DE-BFL  
 
 Site lines planes  kappa   a95   dec   inc  
sr20 7  0      38      9.9    335.1     50.0  6.8434  
s[a]ve plot, [q]uit, [e]dit specimens, [p]revious site, <return> to continue:  
 e  
Enter name of specimen to check: sr20i1  
Triangle: wrong arrow for drill direction.  
Delta: wrong end of compass.  
Small circle:  wrong mark on sample. [cyan upper hemisphere]  
Mark this sample as bad? y/[n]  y  
Reason: [1] broke, [2] wrong drill direction, [3] wrong compass direction,  
[4] bad mark, [5] displaced block [6] other 2  
Mark another sample, this site? y/[n]  n

Compare the behaviors of sample sr11j3 at site sr11 and sample sr20i1 at site sr20:

PIC

The latter plausibly fits the “wrong mark” hypothesis, while the former does not fit any of the common pathologies. The sample orientation record in the er_samples.txt file now carries a note that it is ‘bad’ and will be ignored by the PmagPy programs when calculating site means.

5.2.93 specimens_results_magic.py

[MagIC] [specimens_results_magic docs]

Starting from the magnetometer output files, paleomagnetists interpret the data in terms of vectors, using software like zeq_magic.py or thellier_magic.py. The interpretations from these programs are stored in pmag_specimen formatted files. Once a pmag_specimens format file has been created, the data are usually averaged by sample and/or by site, converted into V[A]DMs and/or VGPs, and put in a pmag_results formatted file along with whatever location and age information is available. Data must be selected or rejected according to some criteria at each level (for example, the specimen MAD must be less than some value or the site κ must be greater than some value). This onerous task can be accomplished using the program specimens_results_magic.py. This program has many optional features, so the reader is encouraged to look at the documentation.

Test it out by using the data files created in the site_edit_magic.py example. Block the interpretation of some of the samples owing to ‘bad’ orientation. Change the selection criteria using customize_criteria.py. Then generate a new pmag_specimen.txt file using mk_redo.py, zeq_magic_redo.py and combine_magic.py. Then do the averaging by site with specimens_results_magic.py. The age bounds of the data are 0-5 Ma (-age 0 5 Ma). Use the existing (or modified) criteria (-exc option). There are some paleointensity data, so use the current site latitude for the VADM calculation (-lat option). Calculate the data in the geographic coordinate system (-crd g). Finally, extract the data into tab delimited tables using pmag_results_extract.py.

$ mk_redo.py  
$ zeq_magic_redo.py -crd g  
Processing  173  specimens - please wait  
Recalculated specimen data stored in  ./zeq_specimens.txt  
$ combine_magic.py -F pmag_specimens.txt -f zeq_specimens.txt thellier_specimens.txt  
File  ./zeq_specimens.txt  read in with  173  records  
File  ./thellier_specimens.txt  read in with  50  records  
All records stored in  ./pmag_specimens.txt  
$ specimens_results_magic.py -age 0 5 Ma -exc -lat -crd g  
Acceptance criteria read in from  pmag_criteria.txt  
 
 Pmag Criteria stored in  pmag_criteria.txt  
 
 sites written to  pmag_sites.txt  
 results written to  pmag_results.txt  
$ pmag_results_extract.py -fa er_ages.txt  
data saved in:  ./Directions.txt ./Intensities.txt ./SiteNfo.txt

Note that this procedure is more or less automated in Pmag GUI.

5.2.94 stats.py

[Essentials Chapter 11] [stats docs]

Calculates Gaussian statistics for sets of data. Calculate the mean of the data generated in the gaussian.py example and saved in gauss.out:

$ stats.py -f gauss.out  
100 10.12243251 1012.243251 2.79670530387 27.6287868663

which according to the help message is:

      N, mean, sum, sigma, (%) , stderr, 95% conf.  
      where sigma is the standard deviation  
      where % is sigma as percentage of the mean  
      stderr is the standard error and  
      95% conf.=  1.96*sigma/sqrt(N)
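
These are all one-liners in numpy; here is a quick cross-check of the quantities in the help message (the normal draw below is just a stand-in for gauss.out):

import numpy as np

# a normal draw stands in for the contents of gauss.out
data = np.random.default_rng(0).normal(10., 3., 100)

N, mean, total = len(data), data.mean(), data.sum()
sigma = data.std(ddof=1)            # sample standard deviation
pct = 100. * sigma / mean           # sigma as a percentage of the mean
stderr = sigma / np.sqrt(N)         # standard error of the mean
conf95 = 1.96 * sigma / np.sqrt(N)  # 95% confidence bound
print(N, mean, total, sigma, pct, stderr, conf95)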

5.2.95 strip_magic.py

[Essentials Chapter 15] and [MagIC]

Follow the instructions for download_magic.py, but search for Tauxe and Hartl and 1997. Download the smartbook (text file) and unpack it into a new directory using the download_magic.py command with the file zmab0094214tmp03.txt as the input file (in the Tauxe and Hartl directory). First run strip_magic.py to see what is available for plotting, then plot the intensity data versus depth (pos). Then plot the VGP latitudes versus age:

$ download_magic.py -f zmab0094214tmp03.txt  
working on:  ’er_locations’  
er_locations  data put in  ./er_locations.txt  
working on:  ’er_sites’  
er_sites  data put in  ./er_sites.txt  
working on:  ’er_samples’  
..........  
$ strip_magic.py  
available objects for plotting:  [’all’]  
available X plots:  [’age’, ’pos’]  
available Y plots:  [’lon’, ’int’, ’lat’]  
available method codes:  [’LP-PI-IRM’, ’LP-PI-REL’]  
$ strip_magic.py -x pos -y int -mcd LP-PI-IRM  
 S[a]ve to save plot, [q]uit without saving:  a  
1  saved in  strat.svg  
$ strip_magic.py -x age -y lat  
 S[a]ve to save plot, [q]uit without saving:  a  
1  saved in  strat.svg

The last command made this plot:

PIC

5.2.96 sundec.py

[Essentials Chapter 9] [sundec docs]

Use the program sundec.py to calculate the azimuth of the drill direction. You are located at 35 N and 33 E. The local time is three hours ahead of Universal Time. The shadow angle for the drilling direction was 68, measured at 16:09 on May 23, 1994.

The program sundec.py works either by interactive data entry or by reading from a file.

Save the following in a file called sundec_example.dat:

3 35 33 1994 5 23 16 9 68

which is:

ΔGMT lat lon year mon day hh mm shadow_angle

We can analyze this file with either:

$ sundec.py -f sundec_example.dat  
  154.2

or

$ sundec.py < sundec_example.dat  
  154.2

or by manual input:

$ sundec.py -i  
Time difference between Greenwich Mean Time (hrs to ADD to  
      GMT for local time):  
<cntl-D> to quit 3  
Year:  <cntl-D to quit> 1994  
Month:  5  
Day:  23  
hour:  16  
minute:  9  
Latitude of sampling site (negative in southern hemisphere): 35  
Longitude of sampling site (negative for western hemisphere): 33  
Shadow angle: 68  
  154.2  
Time difference between Greenwich Mean Time (hrs to ADD to  
      GMT for local time):  
<cntl-D> to quit ^D  
 Good-bye  

In any case, the azimuth is 154.2.

5.2.97 thellier_magic.py

[Essentials Chapter 10] and [MagIC] [thellier_magic docs]

NB: This program is no longer maintained. You should try thellier_gui.py instead.

To see how thellier_magic.py works, download the text file from a recent paper by Shaar et al. (2011) from the MagIC database:

http://earthref.org/MAGIC/doi/10.1016/j.epsl.2010.11.013 http://dx.doi.org/10.7288/V4/MAGIC/12116

Unpack the data with download_magic.py. The original interpretations are stored in a file called pmag_specimens.txt, but thellier_magic.py looks for a specimen file called thellier_specimens.txt. To create such a file, use the program mk_redo.py followed by thellier_magic_redo.py. Then you can take a look at the data using thellier_magic.py and change some of the interpretations, for example adding one for the specimen s2s0-01, which was rejected by Shaar et al. (2011). This specimen has both non-linear TRM acquisition and ATRM anisotropy data, so the corrections will have to be applied using thellier_magic_redo.py after exiting the thellier_magic.py program.

Here is a transcript of a session that does all this:

$ download_magic.py -f zmab0100159tmp01.txt  
working on:  ’er_locations’  
er_locations  data put in  ./er_locations.txt  
working on:  ’er_sites’  
er_sites  data put in  ./er_sites.txt  
working on:  ’er_samples’  
er_samples  data put in  ./er_samples.txt  
....  
$ mk_redo.py  
$ thellier_magic_redo.py  
Processing  112  specimens - please wait  
uncorrected thellier data saved in:  ./thellier_specimens.txt  
 
$ thellier_magic.py  
.....  
16     550   323.6    -4.9 4.370e-08     4.2  
17     560   322.1    -2.5 3.240e-08     4.9  
Looking up saved interpretation....  
Saved interpretation:  
specimen     Tmin  Tmax  N  lab_field  B_anc  b  q  f(coe)  Fvds  beta  MAD  Dang  Drats  Nptrm  Grade  R  MD%  sigma  Gmax  
 
s1p1-01    0  560 18 75.0 79.4 -1.059  54.2 0.919 0.897 0.015      2.0     1.5     1.7 8 A  0.998 -1 0.016     4.9  
 
pTRM direction=    210.8     85.0  MAD:     3.7  
Saved interpretation:  
specimen     Tmin  Tmax  N  lab_field  B_anc  b  q  f(coe)  Fvds  beta  MAD  Dang  Drats  Nptrm  Grade  R  MD%  sigma  Gmax  
 
s1p1-01    0  560 18 75.0 79.4 -1.059  54.2 0.919 0.897 0.015      2.0     1.5     1.7 8 A  0.998 -1 0.016     4.9  
 
pTRM direction=    210.8     85.0  MAD:     3.7  
 
      s[a]ve plot, set [b]ounds for calculation, [d]elete current interpretation,  
       [p]revious, [s]ample, [q]uit:  
 
Return for next specimen  
s  
Enter desired specimen name (or first part there of): s2s0-01  
s2s0-01 214 of  269  
index step Dec   Inc  Int       Gamma  
0     0   194.5   -25.2 3.380e-07  
1     100   194.1   -25.2 3.360e-07     0.0  
2     200   192.1   -25.4 3.330e-07    11.5  
3     250   194.8   -25.5 3.290e-07    76.7  
4     300   193.6   -25.3 3.070e-07    11.7  
5     350   191.6   -26.7 1.460e-07     8.1  
6     375   188.4   -33.6 3.420e-08     5.5  
7     405    36.9   -27.6 6.800e-09     6.1  
8     430    29.8   -12.9 2.930e-09     6.2  
Looking up saved interpretation....  
    None found :(  
 
               s[a]ve plot, set [b]ounds for calculation, [d]elete current interpretation, [p]revious, [s]ample, [q]uit:  
 
Return for next specimen  
b  
 
Enter index of first point for calculation:  [ 0 ]  
return to keep default  0  
Enter index  of last point for calculation:  [ 8 ]  
return to keep default  8  
Optimization terminated successfully.  
         Current function value: 0.000000  
         Iterations: 82  
         Function evaluations: 157  
Optimization terminated successfully.  
         Current function value: 0.000000  
         Iterations: 46  
         Function evaluations: 89  
Non-linear TRM corrected intensity=  79.7699845088  
Save this interpretation? [y]/n  
y  
s2s0-02 215 of  269  
index step Dec   Inc  Int       Gamma  
0     0   295.5    -9.9 6.560e-07  
1     100   304.9   -10.1 6.480e-07    88.7  
2     200   298.8   -10.5 6.420e-07    84.8  
3     250   299.8   -10.5 6.330e-07    87.9  
4     300   299.2   -10.5 5.860e-07    27.0  
5     350   295.1    -9.5 2.270e-07     4.5  
6     375   280.1   -21.7 4.940e-08     2.0  
7     405   169.3   -16.3 1.450e-08     2.1  
8     430   151.9   -15.5 1.070e-08     2.6  
Looking up saved interpretation....  
    None found :(  
 
               s[a]ve plot, set [b]ounds for calculation, [d]elete current interpretation, [p]revious, [s]ample, [q]uit:  
 
Return for next specimen  
q

You should have gotten plots that look like these:

PIC

When you exit the program, you will have modified the thellier_specimens.txt file to include the new interpretation.

5.2.98 thellier_magic_redo.py

[Essentials Chapter 10] and [MagIC] [thellier_magic_redo docs]

This program is no longer maintained. This function is embedded in thellier_gui.py when you import a specimen file.

This program allows the recalculation of paleointensity experimental data derived from Thellier-type experiments. It allows correction for remanence anisotropy from AARM or ATRM ellipsoids (stored in rmag_anisotropy format files; see [Essentials Chapter 13] and the aarm_magic.py and atrm_magic.py documentation) and non-linear TRM corrections, if TRM acquisition data are available (see also trmaq_magic.py).

To apply the corrections to the data you re-interpreted in the thellier_magic.py example and create a new pmag_specimens.txt formatted file, use the program thellier_magic_redo.py. First, create a “redo” file using mk_redo.py, then re-do the calculations using thellier_magic_redo.py, including the anisotropy and non-linear TRM corrections. Finally, combine all the specimen files into one using combine_magic.py.

$ mk_redo.py -f thellier_specimens.txt  
$ thellier_magic_redo.py -ANI -NLT  
Processing  112  specimens - please wait  
WARNING: NO FTEST ON ANISOTROPY PERFORMED BECAUSE OF MISSING SIGMA - DOING CORRECTION ANYWAY  
....  
anisotropy corrected data saved in:  ./AC_specimens.txt  
$ combine_magic.py -F pmag_specimens.txt -f thellier_specimens.txt AC_specimens.txt NLT_specimens.txt  
File  ./thellier_specimens.txt  read in with  112  records  
File  ./AC_specimens.txt  read in with  112  records  
non-linear TRM corrected data saved in:  ./NLT_specimens.txt  
All records stored in  ./pmag_specimens.txt

5.2.99 tk03.py

[Essentials Chapter 16] [tk03 docs]

Sometimes it is useful to generate a distribution of synthetic geomagnetic field vectors that you might expect to find from paleosecular variation of the geomagnetic field. The program tk03.py generates distributions of field vectors from the PSV model of Tauxe and Kent (2004). Use this program to generate a set of vectors for a latitude of 30 and plot them with eqarea.py.

$ tk03.py -lat 30 >tk03.out  
$ eqarea.py -f tk03.out

which should give you a plot something like this:

PIC

5.2.100 uniform.py

[uniform docs]

It is at times handy to be able to generate a uniformly distributed set of directions (or geographic locations). This is done using a technique described by Fisher et al. (1987).
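
In essence, declinations are drawn uniformly from [0, 360) and inclinations are drawn such that sin(I) is uniform on [-1, 1], which gives equal probability per unit area on the sphere. A minimal numpy sketch of this technique:

import numpy as np

rng = np.random.default_rng()
n = 50
decs = rng.uniform(0., 360., n)                        # uniform in declination
incs = np.degrees(np.arcsin(rng.uniform(-1., 1., n)))  # sin(I) uniform
for d, i in zip(decs, incs):
    print(f'{d:7.1f} {i:7.1f}')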

Draw 50 directions from a uniform distribution and save to a file called uniform.out. Then plot the directions with eqarea.py:

$  uniform.py -F uniform.out -n 50  
$ eqarea.py -f uniform.out

which should yield a plot something (but not exactly) like this:

PIC

5.2.101 update_measurements.py

[MagIC Database] [update_measurements docs]

This program is no longer maintained. This function is embedded in Pmag GUI.

In the SIO laboratory, our default specimen naming scheme has a logical relationship to the sample and site names. Sometimes there is no simple relationship and the MagIC import procedure can’t parse the sample name into a site name, or the original concept of a site gets changed, for example if several lava flows turn out to be the same one. In any case, there must be a way to change how data get averaged after the measurement data have been converted to a magic_measurements formatted file. To do this, the sample/site relationship can be established in the er_samples.txt file, for example via the procedure described in the orientation_magic.py documentation. After the correct relationships are in the er_samples.txt file, they must be propagated throughout the rest of the MagIC tables, starting with the magic_measurements.txt file. The program update_measurements.py will do this at the measurements level; specimens_results_magic.py will then propagate the changes through to the pmag_results.txt file.

5.2.102 upload_magic.py

[MagIC Database] [upload_magic docs]

This function is embedded in Pmag GUI.

This program takes all the MagIC formatted files and puts them into a single file which can be imported into the MagIC console software for uploading into the MagIC database. As an example, we can “repackage” the file downloaded in the download_magic.py example. You could re-interpret those data or fix records with errors, then re-upload the corrected file.

In the upload_magic directory in the example data_files directory, someone has entered all the site latitudes as longitudes and vice versa (surprisingly common!). You can fix this by swapping the headers sample_lat/sample_lon in the er_samples.txt file and site_lat/site_lon in the er_sites.txt file, then running specimens_results_magic.py to propagate the changes through to the results table. Be sure to set the criteria switch (-exc), the latitude switch (-lat), the geographic coordinate switch (-crd g) and the age switch (-age 0 5 Ma). Then re-package the files for uploading.

AFTER fixing the er_samples.txt and er_sites.txt files, do the following:

$ specimens_results_magic.py -exc -lat -age 0 5 Ma -crd g  
Acceptance criteria read in from  pmag_criteria.txt  
 
 Pmag Criteria stored in  pmag_criteria.txt  
 
 sites written to  pmag_sites.txt  
 results written to  pmag_results.txt  
 
 
$ upload_magic.py  
Removing:  [’citation_label’, ’compilation’, ’calculation_type’, ’average_n_lines’,  
 ’average_n_planes’, ’specimen_grade’, ’site_vgp_lat’, ’site_vgp_lon’, ’direction_type’,  
 ’specimen_Z’, ’magic_instrument_codes’, ’cooling_rate_corr’, ’cooling_rate_mcd’]  
./er_expeditions.txt is bad or non-existent - skipping  
file  ./er_locations.txt  successfully read in  
er_locations written to  ./upload.txt  
file  ./er_samples.txt  successfully read in  
only first orientation record from er_samples.txt read in  
er_samples written to  ./upload.txt  
file  ./er_specimens.txt  successfully read in  
only measurements that are used for interpretations  
.....  
now converting to dos file ’upload_dos.txt’  
Finished preparing upload file

5.2.103 vdm_b.py

[Essentials Chapter 2] [vdm_b docs]

Use this program to try to recover the original field intensity b from a Virtual Dipole Moment of 71.59 ZAm2 at a latitude of 22.

$ cat>vdm_b_example.dat  
71.59e21 22  
^D  
$ vdm_b.py -f vdm_b_example.dat  
 3.300e-05

Compare this answer with the original in b_vdm.py, noting that the original input was in millitesla, while this is in tesla.
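
The conversion is just the dipole formula of Essentials Chapter 2, B = μ0 VDM sqrt(1 + 3 sin²λ)/(4πr³), with r the radius of the Earth. A quick numpy check of the example above:

import numpy as np

mu0 = 4e-7 * np.pi        # T m/A
r = 6.371e6               # radius of the Earth in m
vdm, lat = 71.59e21, 22.  # Am^2 and degrees, from the example above

B = mu0 * vdm * np.sqrt(1. + 3. * np.sin(np.radians(lat))**2) / (4. * np.pi * r**3)
print(f'{B:.3e}')  # 3.300e-05 (tesla)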

5.2.104 vector_mean.py

[Essentials Chapter 2] [vector_mean docs]

Create a set of vector data with tk03.py for a latitude of 30N. Calculate the vector mean of these data.

$ tk03.py -lat 30 >vector_mean_example.dat  
$ vector_mean.py -f vector_mean_example.dat  
    1.3    49.6    2.289e+06 100
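
Unlike a Fisher mean of unit vectors, a vector mean weights each direction by its intensity: convert to Cartesian coordinates, sum, and convert the resultant back to declination, inclination and magnitude. A minimal sketch with three hypothetical vectors:

import numpy as np

def vector_mean(decs, incs, ints):
    d, i = np.radians(decs), np.radians(incs)
    x = ints * np.cos(i) * np.cos(d)
    y = ints * np.cos(i) * np.sin(d)
    z = ints * np.sin(i)
    X, Y, Z = x.sum(), y.sum(), z.sum()
    R = np.sqrt(X**2 + Y**2 + Z**2)  # magnitude of the resultant vector
    dec = np.degrees(np.arctan2(Y, X)) % 360.
    inc = np.degrees(np.arcsin(Z / R))
    return dec, inc, R

# three hypothetical (dec, inc, intensity) vectors
print(vector_mean(np.array([1., 358., 4.]),
                  np.array([50., 47., 52.]),
                  np.array([2.1e6, 2.4e6, 2.2e6])))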

5.2.105 vgp_di.py

[Essentials Chapter 2] [vgp_di docs]

Use the program vgp_di.py to convert the following:





λp    ϕp    λs    ϕs
68    191   33    243




Put the data into a file vgp_di_example.dat, for example using cat on a *NIX operating system. Here is a transcript of one way to use the program, which spits out declination and inclination:

$ vgp_di.py -f vgp_di_example.dat  
  335.6    62.9

Just for good measure, you could try it out on the data in the di_vgp.py example. See if you get the right answer!

5.2.106 vgpmap_magic.py

[Essentials Chapter 2], [MagIC Database] and high resolution instructions [vgpmap_magic docs]

Make a plot of the VGPs calculated for the dataset downloaded in the download_magic.py example. Install the high resolution etopo20 data files using install_etopo.py, then plot the data on an orthographic projection with the viewpoint at 60N and the Greenwich Meridian (longitude = 0). Make the points black dots with a size of 10 pts. Save the file in png format:

$ vgpmap_magic.py -crd g -prj ortho -eye 60 0 -etp -sym ko 10 -fmt png  
 S[a]ve to save plot, Return to quit:  a  
1  saved in  VGP_map.png

You should have a plot like this one:

PIC

To use this program, you need to have basemap, which is not part of the free Python software distribution. Also, if you don’t have the high resolution data files installed by install_etopo.py, just leave off the -etp switch to get a plain plot.

5.2.107 watsons_f.py

[Essentials Chapter 11] [watsons_f docs]

First generate two data files with fishrot.py with κ = 15, N = 10, and I = 42, with D = 10 for the first and D = 20 for the second:

$ fishrot.py -k 15 -n 10 -I 42 -D 10 >watsons_f_example_file1.dat  
$ fishrot.py -k 15 -n 10 -I 42 -D 20 > watsons_f_example_file2.dat

To compare these two files using watsons_f.py:

$ watsons_f.py -f watsons_f_example_file1.dat -f2 watsons_f_example_file2.dat  
7.74186876032 3.2594 3.2594

The first number is Watson’s F statistic for these two files (see Essentials Chapter 11) and the second is the number to beat for the two files to be drawn from the same Fisher distribution (i.e., share a common mean). In this case the data fail this test (F is greater than the critical number). Your results may vary!
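
For reference, the statistic can be sketched in a few lines of numpy: with R1 and R2 the resultant vector lengths of the two data sets and R that of the combined set, F = (N-2)(R1+R2-R)/(N-R1-R2), as discussed in Essentials Chapter 11. The directions below are hypothetical stand-ins for the two fishrot.py files:

import numpy as np

def resultant(decs, incs):
    # length of the resultant vector of a set of unit directions
    d, i = np.radians(decs), np.radians(incs)
    xyz = np.array([np.cos(i) * np.cos(d), np.cos(i) * np.sin(d), np.sin(i)])
    return np.linalg.norm(xyz.sum(axis=1))

d1, i1 = np.array([10., 12., 8., 11.]), np.array([40., 44., 42., 41.])
d2, i2 = np.array([20., 22., 18., 21.]), np.array([43., 41., 44., 40.])
N = len(d1) + len(d2)
R1, R2 = resultant(d1, i1), resultant(d2, i2)
R = resultant(np.concatenate([d1, d2]), np.concatenate([i1, i2]))
F = (N - 2.) * (R1 + R2 - R) / (N - R1 - R2)
print(F)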

5.2.108 watsons_v.py

[Essentials Chapter 11] [watsons_v docs]

Use the two data files generated in the example for watsons_f.py and repeat the test using Watson’s Vw statistic.

$ watsons_v.py -f watsons_f_example_file1.dat -f2 watsons_f_example_file2.dat  
Doing  5000  simulations  
50  
100  
150  
200  
...  
Watson’s V,  Vcrit:  
         10.5        6.8  
 S[a]ve to save plot, [q]uit without saving:  a  
1  saved in  Watsons_v_watsons_v_example_file1.dat_watsons_f_example_file2.dat.svg  
       

which generates the plot:

PIC

The two files are significantly different because Watson’s V (10.5 in this example) is greater than the Vcrit value estimated using Monte Carlo simulation (6.8).

Note that your results may vary in detail because every instance of fishrot.py generates a different randomly drawn dataset.

5.2.109 zeq.py

[Essentials Chapter 9] [zeq docs]

Use the program zeq.py to: a) plot a Zijderveld diagram of the data in zeq_example.dat; b) calculate a best-fit line from 15 to 90 mT; c) rotate the data such that the best-fit line is projected onto the horizontal axis (instead of the default, North); d) calculate a best-fit plane from 5 to 80 mT. Save these plots.

$ zeq.py -f zeq_example.dat -u mT

By selecting ‘b’, you can pick the bounds and choose ‘l’ for a best-fit line or ‘p’ for a best-fit plane. You can rotate the X-Y axes by selecting ‘h’ and setting the X axis to 312. Finally, you can save your plots with the ‘a’ option. You should have saved something like these plots:

PIC

The equal area projection has the X direction (usually North in geographic coordinates) to the top. The red line is the X axis of the Zijderveld diagram. Solid symbols are lower hemisphere. The solid (open) symbols in the Zijderveld diagram are X,Y (X,Z) pairs. The demagnetization diagram plots the fractional remanence remaining after each step. The green line is the fraction of the total remanence removed between each step.

5.2.110 zeq_magic.py

[Essentials Chapter 9] & [MagIC Database] [zeq_magic docs]

This program is no longer maintained and has been superseded by Demag GUI.

Plot the AF demagnetization data available in the file you got in the download_magic.py example using zeq_magic.py. Use geographic coordinates, where orientations are available.

$ zeq_magic.py -crd g  
sr01a1 0 out of  177  
    looking up previous interpretations...  
Specimen  N    MAD    DANG  start     end      dec     inc  type  component  
sr01a1 9     1.8     1.2   200.0   550.0   331.0    64.4  DE-BFL  A  
 
g: 0      0.0  C  4.065e-05   324.1    66.0  
g: 1    100.0  C  3.943e-05   330.5    64.6  
g: 2    150.0  C  3.908e-05   324.9    65.5  
g: 3    200.0  C  3.867e-05   329.4    64.6  
g: 4    250.0  C  3.797e-05   330.3    64.5  
g: 5    300.0  C  3.627e-05   330.1    64.0  
g: 6    350.0  C  3.398e-05   327.0    64.4  
g: 7    400.0  C  2.876e-05   328.2    64.0  
g: 8    450.0  C  2.148e-05   323.8    65.2  
g: 9    500.0  C  1.704e-05   326.0    63.9  
g: 10    525.0  C  1.200e-05   326.5    63.7  
g: 11    550.0  C  5.619e-06   325.5    64.4  
 
 g/b: indicates  good/bad measurement.  "bad" measurements excluded from calculation  
 
  set s[a]ve plot, [b]ounds for pca and calculate, [p]revious, [s]pecimen,  
   change [h]orizontal projection angle,   change [c]oordinate systems,  
 [d]elete current interpretation(s), [e]dit data,   [q]uit:  
 
<Return>  for  next specimen  
 

The plots will look similar to the zeq.py example, but the default here is for the X axis to be rotated to the specimen’s NRM direction (note how the X direction is rotated away from the top of the equal area projection).

5.2.111 zeq_magic_redo.py

[Essentials Chapter 9] & [MagIC Database] [zeq_magic_redo docs]

The functionality of this program has been incorporated into Demag GUI so you should use that.

In the zeq_magic directory, use the program zeq_magic_redo.py to create a pmag_specimens formatted file with data in geographic coordinates from the sample coordinate file zeq_specimens.txt. Assuming that sample orientations are in a file called er_samples.txt, use mk_redo.py first to create a file called zeq_redo. Then use zeq_magic_redo.py to create two pmag_specimen formatted files: one in specimen coordinates (zeq_specimens_s.txt) and one in geographic coordinates (zeq_specimens_g.txt). Combine these into one file called pmag_specimens.txt.

Note that indented lines belong with the line above as a single line.

$ mk_redo.py  
$ zeq_magic_redo.py -F zeq_specimens_s.txt -crd s  
Processing  175  specimens - please wait  
Recalculated specimen data stored in  ./zeq_specimens_s.txt  
 
$ zeq_magic_redo.py -F zeq_specimens_g.txt -crd g  
Processing  175  specimens - please wait  
Recalculated specimen data stored in  ./zeq_specimens_g.txt  
$ combine_magic.py -F pmag_specimens.txt   -f zeq_specimens_s.txt zeq_specimens_g.txt

5.3 Complaints department

The best way to raise issues with any of the software or its documentation is to post to the issues page of the Github project: https://github.com/ltauxe/PmagPy/issues. Lisa Tauxe (ltauxe@ucsd.edu) can also be contacted by email with bug reports, suggestions and requests.

Chapter 6
The MagIC database and file formats

A number of the programs in PmagPy were developed to take advantage of the MagIC database and aid in getting data in and out of it. So, we need some basic understanding of what MagIC is and how it is structured. MagIC is an Oracle database that is part of the EarthRef.org collection of databases and digital reference material. Anyone interested in the MagIC database should first become a registered EarthRef.org user. To do this, go to http://earthref.org and click on the Register link in the top menu. Registration is not required for access to data or browsing around, but it is required for uploading data into the MagIC database, something which we sincerely hope you will have a chance to do. After you register, go to http://earthref.org/MAGIC/search.

You can download data using the MagIC search website. After downloading, the data can be unpacked and examined using various tools in the PmagPy package, for example using Pmag GUI.

Paleomagnetic and rock magnetic data are collected and analyzed in a wide variety of ways with different objectives. Data sets can be extremely large or can be the barest-boned data summaries published in legacy data tables. The goal of MagIC has been to have the flexibility to accommodate a whole range of data, from legacy data out of publications or other databases to new studies which include all the measurements, field photos, methodology, and so on. The general procedure for the future will be to archive the data at the same time that they are published. So, to smooth the path, it is advisable to put your data into the MagIC format as early in the process as possible. All data that enter the database must be converted to the standard MagIC format, either as a set of MagIC tables or as one combined text file. These can then be uploaded into the MagIC database.

6.1 Structure of the database tables

The MagIC database is organized around a series of data tables. The complete data model can be found here: https://www2.earthref.org/MagIC/data-models/3.0

Each MagIC table has a one line header of the form:

tab table_name

“tab” (or “tab delimited”) means that the table is tab delimited. In theory other delimiters are possible, but PmagPy only uses tab delimited formats. The table_name must be one of these nine table names:



table          Brief description

contribution   study metadata
locations      location level data
sites          site level data, including geographic information, site averages of sample data, etc.
samples        sample level data, including orientation, sampling methods, sample averages of specimen data, etc.
specimens      specimen level data, including interpretations of best-fit lines, planes, paleointensity, etc.
measurements   measurement data used in the study
ages           age information
criteria       criteria used in the study for data selection
images         images associated with the study




The second line of every file contains the column headers (meta-data) describing the included data. For example, a sites table might look like this:







tab sites
site  location  lithologies  geologic_types  lat    lon
AZ01  Azores    basalt       lava flow       37.80  -25.80
...






Although data can be entered directly into Excel spreadsheets by hand, it is easier to generate the necessary tables as a by-product of ordinary data processing without having to know details of the meta-data and method codes. The section on PmagPy describes how to use the PmagPy software for data analysis and generate the MagIC data tables automatically for the most common paleomagnetic studies involving directions and/or paleointensities. See also Pmag GUI.
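
The two-line header also makes MagIC tables easy to read programmatically; for example, a short pandas sketch (the file name is hypothetical):

import pandas as pd

# line 1 holds the delimiter and table name; line 2 holds the column headers
with open('sites.txt') as f:
    delim, table_name = f.readline().split()
sites = pd.read_csv('sites.txt', sep='\t', header=1)
print(table_name, list(sites.columns))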

6.2 A word about method codes

The MagIC database tags records with “method codes”, which are short codes that describe various methods associated with a particular data record. The complete list is available here: https://earthref.org/MagIC/method-codes. Most of the time, you do not need to know what these are (there are over a hundred!), but it is helpful to know something about them. They are divided into several general categories like ‘geochronology methods’ and ‘field sampling methods’. Method codes start with a few letters which designate the category (e.g., GM or FS for geochronology and field sampling respectively). Then there is a second part, and possibly also a third part, to describe methods in lesser or greater detail. This table lists method codes describing various lab treatment methods, to give you a flavor for how the codes work:




LT-AF-D Lab Treatment Alternating field: Double demagnetization
with AF along X,Y,Z measurement
followed by AF along -X,-Y,-Z measurement
LT-AF-G Lab Treatment Alternating field: Triple demagnetization
with AF along Y,Z,X measurement
followed by AF along Y and AF along Z measurement
LT-AF-I Lab Treatment Alternating field: In laboratory field
LT-AF-Z Lab Treatment Alternating field: In zero field
LT-CHEM Lab Treatment Cleaning of porous rocks by chemical leaching with HCl
LT-FC Lab Treatment Specimen cooled with laboratory field on
LT-HT-I Lab Treatment High temperature treatment: In laboratory field
LT-HT-Z Lab Treatment High temperature treatment: In zero field
LT-IRM Lab Treatment IRM imparted to specimen prior to measurement
LT-LT-I Lab Treatment Low temperature treatment: In laboratory field
LT-LT-Z Lab Treatment Low temperature treatment: In zero field
LT-M-I Lab Treatment Using microwave radiation: In laboratory field
LT-M-Z Lab Treatment Using microwave radiation: In zero field
LT-NO Lab Treatment No treatments applied before measurement
LT-NRM-APAR Lab Treatment Specimen heating and cooling: Laboratory
field anti-parallel to the NRM vector
LT-NRM-PAR Lab Treatment Specimen heating and cooling: Laboratory
field parallel to the NRM vector
LT-NRM-PERP Lab Treatment Specimen heating and cooling:
Laboratory field perpendicular to the NRM vector
LT-PTRM-I Lab Treatment pTRM tail check: After zero field step,
perform an in field cooling
LT-PTRM-MD Lab Treatment pTRM tail check: After in laboratory field step,
perform a zero field cooling at same temperature
LT-PTRM-Z Lab Treatment pTRM tail check: After in laboratory field step,
perform a zero field cooling at a lower temperature
LT-T-I Lab Treatment Specimen cooling: In laboratory field
LT-T-Z Lab Treatment Specimen cooling: In zero field
LT-VD Lab Treatment Viscous demagnetization by applying MU-metal screening
LP-X Lab Treatment Susceptibility
LT-ZF-C Lab Treatment Zero field cooled, low temperature IRM imparted
LT-ZF-CI Lab Treatment Zero field cooled, induced M measured on warming



6.3 Uploading to MagIC

For uploading to the database, all the individual tables should be assembled into a single file. Each individual data table is separated from the next by a series of ‘>>>>>>>>>>’ symbols, so a typical upload file might look like this:

tab   locations  
location_type citations lat_n lon_e location lat_s lon_w  
Drill Site This study 19.0 156.0 801C 19.0 156.0  
Drill Site This study 19 156 801 19 156  
>>>>>>>>>>  
tab   samples  
citations lithologies site sample geologic_types geologic_classes  
This study   Submarine Basaltic Glass 16r5113 16r5113 Lava Flow Igneous: Extrusive  
This study   Submarine Basaltic Glass 17r1026 17r1026 Lava Flow Igneous: Extrusive  
.  
.  
.

Correctly formatted MagIC data tables can be assembled into a suitable upload text file using the program upload_magic.py, which reads in all the MagIC tables in a given directory and puts them together as in the above example. You can invoke upload_magic.py on the command line or call it within Pmag GUI. upload_magic.py creates a contribution file which can be uploaded into the MagIC database, and it has some nifty tricks, such as propagating data from one table to another and deleting unneeded columns.
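
Conceptually, the assembly step simply concatenates the individual tables with the ‘>>>>>>>>>>’ separator shown above. A toy sketch (the file names are hypothetical, and upload_magic.py itself does considerably more):

import os

tables = ['locations.txt', 'sites.txt', 'samples.txt']  # hypothetical file names
parts = []
for name in tables:
    if os.path.exists(name):
        with open(name) as f:
            parts.append(f.read().rstrip('\n'))
with open('upload.txt', 'w') as out:
    out.write('\n>>>>>>>>>>\n'.join(parts) + '\n')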

Chapter 7
Introduction to Python Programming

There are many resources for learning Python, but Lisa Tauxe’s Python For Earth Science Students will give you a good foundation in programming in Python with geology-specific applications. The course covers Python basics, Jupyter notebooks, and plotting and analysis with scientific Python libraries. The entire course is freely available in a Github repository. It is composed of an interactive series of lectures in the form of Jupyter notebooks.

To get started, you will need to install Python. Then, download git and follow the install instructions. If you don’t know whether you have git installed, just type git on your command line and see if the help message appears.

Next, open your command line (you may need to restart it after installing Python and git) and run the following commands:

 
    git clone git@github.com:ltauxe/Python-for-Earth-Science-Students.git  
 
    cd Python-for-Earth-Science-Students  
 
    jupyter notebook

This will open a browser window with a list of Notebooks. Click on Lecture 1, which overviews the course and teaches you how to run Notebooks. From there, you can follow the lectures in order or pick and choose based on your interests. Python is a lot of fun - enjoy!

NB: For Anaconda-specific information, see this handy cheat sheet with information about how to install and update Python packages, as well as how to create custom Python environments and more.

Chapter 8
Jupyter Notebooks

Data analysis in Python benefits from an increasingly robust ecosystem of packages for scientific computing and plotting. A tool that has gained widespread use for conducting and presenting data analysis is the Jupyter notebook. The Jupyter notebook builds on the IPython project which began as a way to bring increased interactivity to data analysis in Python (Pérez & Granger, 2007) and has evolved to include an interactive notebook environment that seeks to enable the full trajectory of scientific computing from initial analysis and visualization onwards to collaboration and through to publication. The project has expanded to enable a reproducible interactive computing environment for many other programming languages (such as R and Julia) and the language-agnostic parts of its architecture have recently been rebranded as Project Jupyter (http://jupyter.org). Jupyter notebooks allow for executable code, results, text and graphical output to coherently coexist in a single document. With these combined components, they are excellent tools both for conducting data analysis and presenting results.

Underlying the PmagPy programs that are accessible through the command line and the GUI interfaces (e.g., Pmag GUI) are two main function libraries: pmag.py and pmagplotlib.py. The functions within these modules can be imported and called upon within a Jupyter notebook running a Python 2.7 kernel. In addition to these functions, Nick Swanson-Hysell created a module called ipmag.py that is in active development and contains functions that replicate and extend functionality that is found within the PmagPy command line programs for use in the notebook environment.

To show some of what is possible in terms of data analysis using PmagPy in a notebook environment, we have created example notebooks that are available for download from this repository: https://github.com/PmagPy/2016_Tauxe-et-al_PmagPy_Notebooks. These notebooks can also be viewed as static webpages here: http://pmagpy.github.io/Example_PmagPy_Notebook.html and http://pmagpy.github.io/Additional_PmagPy_Examples.html.

The main example notebook (Example_PmagPy_Notebook.ipynb) combines data from two different studies for the sake of developing a mean paleomagnetic pole from the upper portion of a sequence of volcanics in the North American Midcontinent Rift (Halls, 1974; Swanson-Hysell et al., 2014). The two data files used within the notebook can be downloaded from the MagIC database. The digital object identifier (doi) search option allows for the data files to be readily located as https://earthref.org/MagIC/doi/10.1139/e74-113/ and https://earthref.org/MagIC/doi/10.1002/2013GC005180/. Downloading these data files from the database and putting them into folders within a local ‘Project Directory’ allows them to be accessed within the Jupyter notebook. Within the notebook, these data are unpacked into their respective MagIC formatted tab delimited data files. The data are then loaded into dataframes and filtered using several different criteria (stratigraphic height and polarity). Several functions from the ipmag module are used for making equal area projections and calculating statistics. In addition to combining the data sets to calculate the mean pole, the code in the notebook conducts a bootstrap fold test on the data using the approach of Tauxe & Watson (1994) as well as a common mean test. The data recombinations and calculations done in this notebook are examples of portions of the data analysis workflow which are often difficult to document and reproduce. The examples illustrate a small sliver of the potential for the use of notebooks for data manipulation and analysis of paleomagnetic data. Additional functionality available within PmagPy is demonstrated within the additional PmagPy examples notebook (Additional_PmagPy_Examples.ipynb) as small vignettes of example code. Functions related to paleomagnetic and rock magnetic data analysis are shown as examples. The notebook also illustrates some of the interactivity that can be built into the notebook utilizing IPython widgets.

To execute and edit notebook code yourself, you can clone or download this repository: https://github.com/PmagPy/2016_Tauxe-et-al_PmagPy_Notebooks. If you installed the Enthought Canopy or Anaconda Python distribution, you will have the IPython and Jupyter packages installed. If you have another Python distribution, you will want to make sure that you have IPython and Jupyter (installation instructions can currently be found here: http://ipython.org/install.html and here: http://jupyter.readthedocs.org/en/latest/install.html). To view and edit the notebooks, type ipython notebook or jupyter notebook on the command line. This command will open up a local IPython server in your default web browser, within which you can navigate to the data_files folder and click on a notebook to open and edit it. You will see something like this:

PIC

Notebooks are constructed as a series of ‘cells’ which can be text or code. To view the ‘source’ of a text cell, just click on it. To render it, click on the ‘run’ button (sideways triangle on the toolbar). Similarly, to run the code in a code cell, click on the cell and then the ‘run’ button (or use the shift+enter shortcut). To execute the entire notebook, click on the ‘Cell’ button and choose ‘Run All’.

The easiest way to access PmagPy functionality in the notebook is to follow the regular install instructions using pip. This automatically adds the pmagpy module to your Python path. You can go straight to looking at the data!

If you didn’t install using pip, you can also do a developer install.

A third way to access PmagPy inside a notebook is to set the path to the location of your downloaded PmagPy folder within the notebook itself. The default location on MacOS is a folder called PmagPy in your home directory (e.g., /Users/YOUR_NAME_HERE/PmagPy) and on Windows it is in the Documents folder (e.g., C:\Users\YOUR_NAME_HERE\Documents\PmagPy\PmagPy). To do this you must import sys and then have a line that reads sys.path.insert(0, ‘/Users/......’), where the path reflects your own path, as in the sketch below. If PmagPy is in your path, you should be able to run the first code block, which imports the PmagPy modules and makes them available for use within the notebook. The notebooks also import several other Python modules useful for scientific computing (numpy), data manipulation (pandas) and plotting (matplotlib).
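
For example (the path below is a placeholder; substitute your own):

import sys
sys.path.insert(0, '/Users/YOUR_NAME_HERE/PmagPy')  # placeholder: your PmagPy folder

import pmagpy.pmag as pmag
import pmagpy.ipmag as ipmag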

Now you are ready to look at some data. In the code block under the heading ‘Reading data from MagIC format results files’, data are read in from a file downloaded and unpacked from the MagIC database. The notebook shows how to read in the data into a pandas DataFrame, and plot the directions on an equal area projection:

PIC
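
In condensed form, that first step looks something like this sketch (column names assumed from a MagIC 2.5 pmag_sites table; ipmag call signatures may differ slightly between PmagPy versions):

import pandas as pd
import matplotlib.pyplot as plt
import pmagpy.ipmag as ipmag

# skip the 'tab' header line; use the second line for column names
sites = pd.read_csv('pmag_sites.txt', sep='\t', header=1)
ipmag.plot_net(1)  # draw an equal area net in figure 1
ipmag.plot_di(sites['site_dec'].values, sites['site_inc'].values)
plt.show()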

There are several other tricks shown off in the notebook, which should be enough to get you started using ipmag in a Python notebook environment. Conducting data analysis using PmagPy in a notebook allows for the underlying code of statistical tests to be available and for the decisions made in the specific implementation of such tests to be transparently presented.

Although there is much, much more to do in Python, this documentation is aimed at getting and using PmagPy, so that’s it for this chapter. Congratulations if you made it to the end!

Chapter 9
Troubleshooting

9.1 Problems with installing Python or PmagPy

  1. Nothing works as advertised!
  2. Find your command line (Terminal for Mac users, Command Prompt for Windows users) and type python. You should see something like this:

  3. I am installing pmagpy into the Anaconda Python distribution
  4. My path is correctly set, but I still get this error message:
     -bash: eqarea.py: command not found

  5. If you are using the pip installation of PmagPy and you have any problems, you should start by uninstalling and reinstalling pmagpy and pmagpy-cli. To do this, use these commands:
    pip install --upgrade --force-reinstall --no-cache-dir --no-deps pmagpy  
    pip install --upgrade --force-reinstall --no-cache-dir --no-deps pmagpy-cli

    To test that they have installed properly, you can run the command:

    pip list

    which will show you all of your pip-installed packages. If you don’t see pmagpy and pmagpy-cli, you’ll need to install them.

  6. If your computer uses multiple languages, or a language other than English, you may get an error message like this:
     
      File "/Applications/Canopy.app/appdata/canopy-1.7.4.3348.macosx-x86_64/Canopy.app/Contents/lib/python2.7/contextlib.py", line 17, in __enter__  
      return self.gen.next()  
      File "/Users/***/Library/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/matplotlib/__init__.py", line 1000, in _open_file_or_url  
      encoding = locale.getdefaultlocale()[1]  
      File "/Applications/Canopy.app/appdata/canopy-1.7.4.3348.macosx-x86_64/Canopy.app/Contents/lib/python2.7/locale.py", line 543, in getdefaultlocale  
      return _parse_localename(localename)  
      File "/Applications/Canopy.app/appdata/canopy-1.7.4.3348.macosx-x86_64/Canopy.app/Contents/lib/python2.7/locale.py", line 475, in _parse_localename  
      raise ValueError, 'unknown locale: %s' % localename  
      ValueError: unknown locale: UTF-8  
      

Mac users can fix this by opening their .bashrc file and adding these lines:

      export LC_ALL=en_US.UTF-8  
      export LANG=en_US.UTF-8  
      

9.2 Problems updating Canopy

If you are using Canopy, you will need to keep it up to date. Currently, that means the version number should be 1.6.1 or higher. In general, you can update by opening the Canopy application; it will automatically check for updates. However, if you haven’t updated for a long time, Canopy can get “stuck” in an old version where it can no longer update. If that is the case, you will need to fully uninstall and reinstall Canopy, following the directions on the Enthought website. For a full explanation, see https://support.enthought.com/hc/en-us/articles/204469570-Canopy-shows-no-updates-available-reinstalling-from-the-website. For the full uninstall instructions, see https://support.enthought.com/hc/en-us/articles/204469700-Uninstalling-and-resetting-Canopy.

9.3 Error loading dlls (windows only)

Some of the modules used in PmagPy have dependencies that are not called directly. If you are missing one of those dependencies, the programs may fail in odd ways. In general, it is worthwhile to open Canopy, go into the package manager, and update all packages. More specifically, if you see an error message like this:

ImportError: DLL load failed: The specified procedure could not be found  

you may be missing the MKL package. Open Canopy, go to the package manager, and select “Available packages”. Scroll down to MKL, and install it. This should fix the problem! Note that Windows users who are having trouble opening one of the GUIs should try this solution whether or not they actually see this error message.

9.4 Problems with a specific program

  1. Thellier auto_interpreter crashes with a Segmentation Fault (on Mac). NB: This should no longer be a problem with matplotlib>=2.0.

9.5 Other problems

Report a problem not listed above on Github, or e-mail ltauxe@ucsd.edu. Please include the following information: 1) the version of PmagPy that you are using, 2) your operating system, 3) any error messages that you got, 4) the datafile that is giving trouble, if relevant.

Bibliography

    Ben-Yosef, E., Ron, H., Tauxe, L., Agnon, A., Genevey, A., Levy, T., Avner, U., & Najjar, M. (2008). Application of copper slag in geomagnetic archaeointensity research. J. Geophys. Res., 113, doi:10.1029/2007JB005235.

    Besse, J. & Courtillot, V. (2002). Apparent and true polar wander and the geometry of the geomagnetic field over the last 200 myr. J. Geophys. Res, 107, doi:10.1029/2000JB000050.

    Carter-Stiglitz, B., Solheid, P., Egli, R., & Chen, A. (2006). Tiva canyon tuff (ii): Near single domain standard reference material available. The IRM Quarterly, 16(1), 1.

    Cogné, J. (2003). Paleomac: A macintosh application for treating paleomagnetic data and making plate reconstructions. Geochem. Geophys. Geosys., 4, 1007, doi:10.1029/2001GC000227.

    Day, R., Fuller, M. D., & Schmidt, V. A. (1977). Hysteresis properties of titanomagnetites: grain size and composition dependence. Phys. Earth Planet. Inter., 13, 260–266.

Fairchild, L.M., Swanson-Hysell, N.L., & Tikoo, S.M. (2016). A matter of minutes: Breccia dike paleomagnetism provides evidence for rapid crater modification. Geology, 10.1130/G37927.1.

    Fisher, N. I., Lewis, T., & Embleton, B. J. J. (1987). Statistical Analysis of Spherical Data. Cambridge: Cambridge University Press.

    Gee, J. S., Tauxe, L., & Constable, C. (2008). Amsspin - a labview program for measuring the anisotropy of magnetic susceptibility (ams) with the kappabridge kly-4s. Geochem. Geophys. Geosyst., 9, Q08Y02,doi:10.1029/2008GC001976.

    Gradstein, F., Ogg, J., & Smith, A. (2004). Geologic Time Scale 2004. Cambridge: Cambridge University Press.

    Halls, H.C. (1974). A paleomagnetic reversal in the Osler volcanic group, Northern Lake Superior. Can. J. Earth Sci., 11, (pp.1200-1207).

    Jelinek, V. (1977). The statistical theory of measuring anisotropy of magnetic susceptibility of rocks and its application. Brno, Geophyzika, (pp. 1–88).

    Korte, M. & Constable, C. (2011). Improving geomagnetic field reconstructions for 0-3 ka. Phys. Earth Planet. Int., 188, 247–259.

    Korte, M., Constable, C., Donadini, F., & Holme, R. (2011). Reconstructing the holocene geomagnetic field. Earth and Planetary Science Letters, 312, 497–505.

    Lawrence, K. P., Tauxe, L., Staudigel, H., Constable, C., Koppers, A., McIntosh, W. C., & Johnson, C. L. (2009). Paleomagnetic field properties near the southern hemisphere tangent cylinder. Geochem. Geophys. Geosyst., 10, Q01005, doi:10.1029/2008GC00207.

Leonhardt, R., Heunemann, C., & Krása, D. (2004). Analyzing absolute paleointensity determinations: Acceptance criteria and the software thelliertool4.0. Geochem. Geophys. Geosys., 5, Q12016, doi:10.1029/2004GC000807.

    McFadden, P. L. & McElhinny, M. W. (1990). Classification of the reversal test in palaeomagnetism. Geophys. J. Int., 103, 725–729.

    Paterson, G., Tauxe, L., Biggin, A., Shaar, R., & Jonestrask, L. (2014). On improving the selection of thellier-type paleointensity data. Geochem. Geophys. Geosys., 15.

    Pérez, F. & Granger, B. (2007). IPython: a system for interactive scientific computing. Computing in Science and Engineering, 9, 21–29.

    Sbarbori, E., Tauxe, L., Gogichaishvili, A., Urrutia-Fucugauchi, J., & Bohrson, W. (2009). Paleomagnetic behavior of volcanic rocks from isla socorro, mexico. Earth Planets and Space, (pp. in press).

    Shaar, R., Ben Yosef, E., Ron, H., Tauxe, L., Agnon, A., & Kessel, R. (2011). Geomagnetic field intensity: How high can it get? how fast can it change? constraints from iron-age copper-slag. Earth and Planetary Science Letters, 301, 297–306.

    Shaar, R., Ron, H., Tauxe, L., Kessel, R., Agnon, A., Ben Yosef, E., & Feinberg, J. (2010). Testing the accuracy of absolute intensity estimates of the ancient geomagnetic field using copper slag material. Earth and Planetary Science Letters, 290, 201–213.

    Shaar, R. & Tauxe, L. (2013). Thellier_gui: An integrated tool for analyzing paleointensity data from thellier-type experiments. Geochem. Geophys. Geosys., 14.

Swanson-Hysell, N. L., Vaughan, A. A., Mustain, M. R., & Asp, K. E. (2014). Confirmation of progressive plate motion during the Midcontinent Rift’s early magmatic stage from the Osler Volcanic Group, Ontario, Canada. Geochem. Geophys. Geosyst., 15, 2039–2047.

    Tauxe, L. (1998). Paleomagnetic Principles and Practice. Dordrecht: Kluwer Academic Publishers.

    Tauxe, L., Banerjee, S. K., Butler, R., & van der Voo, R. (2010). Essentials of Paleomagnetism. Berkeley: University of California Press.

    Tauxe, L., Bertram, H., & Seberino, C. (2002). Physical interpretation of hysteresis loops: Micromagnetic modelling of fine particle magnetite. Geochem., Geophys., Geosyst., 3, DOI 10.1029/ 2001GC000280.

    Tauxe, L., Constable, C., Johnson, C., Miller, W., & Staudigel, H. (2003). Paleomagnetism of the southwestern u.s.a. recorded by 0-5 ma igneous rocks. Geochem., Geophys., Geosyst., (pp. DOI 10.1029/2002GC000343).

    Tauxe, L. & Hartl, P. (1997). 11 million years of oligocene geomagnetic field behaviour. Geophys. J. Int., 128, 217–229.

    Tauxe, L. & Kent, D. V. (2004). A simplified statistical model for the geomagnetic field and the detection of shallow bias in paleomagnetic inclinations: Was the ancient magnetic field dipolar?, volume 145, (pp. 101–116). American Geophysical Union: Washington, D.C.

    Tauxe, L., Kylstra, N., & Constable, C. (1991). Bootstrap statistics for paleomagnetic data. J. Geophys. Res, 96, 11723–11740.

    Tauxe, L., Luskin, C., Selkin, P., Gans, P. B., & Calvert, A. (2004). Paleomagnetic results from the snake river plain: Contribution to the global time averaged field database. Geochem., Geophys., Geosyst., Q08H13, doi:10.1029/2003GC000661.

    Tauxe, L. & Staudigel, H. (2004). Strength of the geomagnetic field in the cretaceous normal superchron: New data from submarine basaltic glass of the troodos ophiolite. Geochem. Geophys. Geosyst., 5(2), Q02H06, doi:10.1029/2003GC000635.

    Tauxe, L., Stickley, C., Sugisaki, S., Bijl, P., Bohaty, S., Brinkhuis, H., Escutia, C., Flores, J.-A., Houben, A., Iwai, M., Jimenez-Espejo, F., McKay, R., Passchier, S., Pross, J., Riesselman, C., Roehl, U., Sangiorgi, F., Welsh, K., Klaus, A., Fehr, J., Bendle, J., Dunbar, R., Gonzalez, S., Hayden, T., Katsuki, K., Olney, M., Pekar, S., Shrivastava, P., van de Flierdt, T., Williams, T., & Yamane, M. (2012). Chronostratigraphic framework for the iodp expedition 318 cores from the wilkes land margin: constraints for paleoceanographic reconstruction. Paleoceanography, 27, doi:10.1029/2012PA002308.

    Tauxe, L. & Watson, G. S. (1994). The fold test: an eigen analysis approach. Earth Planet. Sci. Lett., 122, 331–341.

Torsvik, T., Müller, R., van der Voo, R., Steinberger, B., & Gaina, C. (2008). Global plate motion frames: toward a unified model. Rev. Geophys., 46, RG3004, doi:10.1029/2007RG000227.

    Vandamme, D. (1994). A new method to determine paleosecular variation. Phys. Earth Planet. Int., 85, 131–142.

    Watson, G. (1983). Large sample theory of the langevin distributions. J. Stat. Plann. Inference, 8, 245–256.