PmagPy Cookbook

Table of Chapters
1 Installing PmagPy
2 Pmag GUI Tutorial
3 MagIC GUI Tutorial
4 Survival computer skills
5 The PmagPy software package
6 The MagIC database and file formats
7 Introduction to Python Programming
8 Jupyter Notebooks
9 Troubleshooting
Bibliography

Contents

1 Installing PmagPy
 1.1 Standalone GUI download
  1.1.1 OSX Standalone download
  1.1.2 Windows Standalone download
  1.1.3 Linux Standalone download
 1.2 Full PmagPy install
  1.2.1 Installing PmagPy
  1.2.2 Finishing up
  1.2.3 For Anaconda users
 1.3 Developer install of PmagPy
 1.4 Next steps
2 Pmag GUI Tutorial
 2.1 Converting magnetometer files to MagIC format
 2.2 Optional: Calculate geographic / tilt-corrected direction
 2.3 Filling in the EarthRef data
 2.4 Demag GUI quick start
 2.5 Thellier GUI quick start
 2.6 Upload to the database
 2.7 Downloading data from MagIC
 2.8 Preparing for MagIC
  2.8.1 Field and sampling information
  2.8.2 Supported Rock Magnetometer files
3 MagIC GUI Tutorial
 3.1 Starting with MagIC GUI
 3.2 MagIC GUI example dataset
4 Survival computer skills
 4.1 Finding your command line
 4.2 File systems
 4.3 Moving around in the file system
 4.4 Redirecting input and output
 4.5 Text editors
5 The PmagPy software package
 5.1 Graphical data analysis programs
  5.1.1 demag_gui.py
  5.1.2 thellier_gui.py
  5.1.3 MagIC GUI
 5.2 Command line PmagPy programs
  5.2.1 aarm_magic.py
  5.2.2 angle.py
  5.2.3 ani_depthplot.py
  5.2.4 aniso_magic.py
  5.2.5 apwp.py
  5.2.6 atrm_magic.py
  5.2.7 azdip_magic.py
  5.2.8 b_vdm.py
  5.2.9 basemap_magic.py
  5.2.10 biplot_magic.py
  5.2.11 bootams.py
  5.2.12 cart_dir.py
  5.2.13 chartmaker.py
  5.2.14 chi_magic.py
  5.2.15 combine_magic.py
  5.2.16 common_mean.py
  5.2.17 cont_rot.py
  5.2.18 convert2unix.py
  5.2.19 convert_samples.py
  5.2.20 core_depthplot.py
  5.2.21 curie.py
  5.2.22 customize_criteria.py
  5.2.23 dayplot_magic.py
  5.2.24 demag_gui.py
  5.2.25 di_eq.py
  5.2.26 di_geo.py
  5.2.27 di_rot.py
  5.2.28 di_tilt.py
  5.2.29 di_vgp.py
  5.2.30 dipole_pinc.py
  5.2.31 dipole_plat.py
  5.2.32 dir_cart.py
  5.2.33 dmag_magic.py
  5.2.34 download_magic.py
  5.2.35 eigs_s.py
  5.2.36 eq_di.py
  5.2.37 eqarea.py
  5.2.38 eqarea_ell.py
  5.2.39 eqarea_magic.py
  5.2.40 find_EI.py
  5.2.41 fisher.py
  5.2.42 fishqq.py
  5.2.43 fishrot.py
  5.2.44 foldtest.py
  5.2.45 foldtest_magic.py
  5.2.46 gaussian.py
  5.2.47 gobing.py
  5.2.48 gofish.py
  5.2.49 gokent.py
  5.2.50 goprinc.py
  5.2.51 grab_magic_key.py
  5.2.52 histplot.py
  5.2.53 hysteresis_magic.py
  5.2.54 igrf.py
  5.2.55 install_etopo.py
  5.2.56 incfish.py
  5.2.57 irmaq_magic.py
  5.2.58 k15_magic.py
  5.2.59 k15_s.py
  5.2.60 kly4s_magic.py
  5.2.61 lnp_magic.py
  5.2.62 lowrie.py
  5.2.63 lowrie_magic.py
  5.2.64 magic_select.py
  5.2.65 make_magic_plots.py
  5.2.66 Measurement Import Scripts
  5.2.67 measurements_normalize.py
  5.2.68 mk_redo.py
  5.2.69 nrm_specimens_magic.py
  5.2.70 orientation_magic.py
  5.2.71 parse_measurements.py
  5.2.72 pca.py
  5.2.73 plotxy.py
  5.2.74 plot_cdf.py
  5.2.75 plot_magic_keys.py
  5.2.76 plot_map_pts.py
  5.2.77 plotdi_a.py
  5.2.78 pmag_results_extract.py
  5.2.79 pt_rot.py
   5.2.80 qqplot.py
  5.2.81 quick_hyst.py
  5.2.82 revtest.py
  5.2.83 revtest_magic.py
  5.2.84 revtest_mm1990.py
  5.2.85 s_eigs.py
  5.2.86 s_geo.py
  5.2.87 s_hext.py
  5.2.88 s_tilt.py
  5.2.89 s_magic.py
  5.2.90 scalc.py
  5.2.91 scalc_magic.py
  5.2.92 site_edit_magic.py
  5.2.93 specimens_results_magic.py
  5.2.94 stats.py
  5.2.95 strip_magic.py
  5.2.96 sundec.py
  5.2.97 thellier_magic.py
  5.2.98 thellier_magic_redo.py
  5.2.99 tk03.py
  5.2.100 uniform.py
  5.2.101 update_measurements.py
  5.2.102 upload_magic.py
  5.2.103 vdm_b.py
  5.2.104 vector_mean.py
  5.2.105 vgp_di.py
  5.2.106 vgpmap_magic.py
  5.2.107 watsons_f.py
  5.2.108 watsons_v.py
  5.2.109 zeq.py
  5.2.110 zeq_magic.py
  5.2.111 zeq_magic_redo.py
 5.3 Complaints department
6 The MagIC database and file formats
 6.1 Structure of the database tables
 6.2 A word about method codes
7 Introduction to Python Programming
 7.1 A first look at NumPy
 7.2 Variable types
 7.3 Data Structures
 7.4 Pandas
 7.5 Python Scripts
 7.6 Code blocks
 7.7 File I/O in Python
 7.8 Functions
 7.9 Classes
 7.10 Matplotlib
8 Jupyter Notebooks
9 Troubleshooting
 9.1 Problems with installing Python or PmagPy
 9.2 Problems updating Canopy
 9.3 Error loading DLLs (Windows only)
 9.4 Problems with a specific program
 9.5 Other problems

PmagPy Cookbook


January 25, 2017
Dear Reader,

This documentation is updated from that in the book entitled Essentials of Paleomagnetism by Tauxe et al. (2010). This cookbook was designed as a companion website to the book Essentials of Paleomagnetism, 4th Web Edition. Chapter references to this companion book are given as, for example, “Essentials Chapter 1”.

There are many chefs who contributed to this work, in particular, the MagIC Database Team (Cathy Constable, Anthony Koppers, Rupert Minnett, Nick Jarboe, Ron Shaar, and Lori Jonestrask). Nick Swanson-Hysell (UC Berkeley) contributed the demag_gui and Jupyter notebook documentation. The PmagPy project is supported by grants from the National Science Foundation.

Users of PmagPy should cite the open access article:

Tauxe, L., R. Shaar, L. Jonestrask, N. L. Swanson-Hysell, R. Minnett, A. A. P. Koppers, C. G. Constable, N. Jarboe, K. Gaastra, and L. Fairchild (2016), PmagPy: Software package for paleomagnetic data analysis and a bridge to the Magnetics Information Consortium (MagIC) Database, Geochem. Geophys. Geosyst., 17, doi:10.1002/2016GC006307.

Lisa Tauxe
Scripps Institution of Oceanography
La Jolla, CA 92093
January 25, 2017
http://magician.ucsd.edu/

Chapter 1
Installing PmagPy

You have three options for downloading and using PmagPy. If you only want to use the GUIs, you can download them as standalone programs. The standalone install doesn’t require that you have Python, but is more limited than the full PmagPy install. If you want access to all of PmagPy’s functionality, you must first install the recommended version of Python. Then use the pip utility to download and install the PmagPy function library and the PmagPy command line programs, or manually download and install by setting environment variables yourself.

If you want to actively participate in developing and modifying PmagPy, you should do a developer install.

1.1 Standalone GUI download

If you do not need the full PmagPy functionality, and you only want to use Pmag GUI, MagIC GUI, Thellier GUI, and Demag GUI, there is now a standalone download for you. You won’t need to install Python for this.

Follow the download instructions in the provided links, and then you can skip the “Full install” section and go straight to using Pmag GUI or MagIC GUI. The standalone versions of these applications are still in development; please report problems to the PmagPy team: ltauxe@ucsd.edu.

1.1.1 OSX Standalone download

To get started, download the zip file and put the resulting folder on your desktop. Inside the PmagPy-Standalone folder you will have one folder each for Pmag GUI and MagIC GUI. Open the appropriate folder and double click the icon (depending on your security settings, you may have to right click the icon and then select “ok” the first time you open it).

You will find the latest stable release at: https://github.com/PmagPy/PmagPy-Standalone-OSX/releases/latest

1.1.2 Windows Standalone download

Get started by downloading the zip file (see links below) and put the resulting folder wherever you wish. You will need to extract all files. Inside the PmagPy-Standalone folder you will find icons for Pmag GUI and MagIC GUI. Double click the program you wish to use and you should be able to get started.

You will find the latest stable release at: https://github.com/PmagPy/PmagPy-Standalone-Windows/releases/latest

1.1.3 Linux Standalone download

This binary has only been tested on an Ubuntu 14.04 (Trusty) distribution and might experience problems on other distributions. You can simply clone the standalone repository or download and unzip it. The GUIs should run when you double click the executable, but will take time to start up (anywhere from 5 to 30 seconds), so please be patient.

You will find the latest stable release at: https://github.com/PmagPy/PmagPy-Standalone-Linux/releases/latest

1.2 Full PmagPy install

To get the full use of PmagPy functionality, you will first have to install Python.

1.2.1 Installing PmagPy

Here we provide instructions for using pip to install PmagPy from the command line. Pip is a package manager that comes standard with any modern Python distribution (NB: you will have to update it to the most recent version). Pip allows you to install and uninstall pmagpy, the package with all the low-level functionality of PmagPy, and pmagpy-cli, which provides the PmagPy command line interface and has all the programs you may use to download/upload, visualize, and manipulate data.

To install using pip:
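
A minimal sketch of the commands (package names as described above; consult the PmagPy install documentation for the currently recommended invocation):

    pip install --upgrade pip
    pip install pmagpy
    pip install pmagpy-cli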

1.2.2 Finishing up

Now your installation should be complete. There are a few more details:

1.2.3 For Anaconda users

We have attempted to support both Canopy and Anaconda Python. However, there are some differences in the way Anaconda Python interacts with the screen. Because of this, Anaconda users will need to use a different command to access the PmagPy GUIs. All four GUIs – Pmag GUI, MagIC GUI, Demag GUI, and Thellier GUI – can be invoked with Anaconda Python. However, instead of calling them in the standard way on the command line:

pmag_gui.py

You will use:

pmag_gui_anaconda

If you try to use the first syntax, you will see an error message like this:

This program needs access to the screen.  
Please run with a Framework build of python, and only when you are  
logged in on the main display of your Mac.

Also, Anaconda users may need to install wxPython.
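
For example (a hedged sketch; the exact package name or channel on your system may differ):

    conda install wxpython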

1.3 Developer install of PmagPy

If you are a developer, and wish to download PmagPy from the Github repository, you may want to add PmagPy directly to your path instead of installing via pip.

If you’re wondering whether you should do a pip install or a developer install, it depends on what you’re hoping to do with PmagPy. The benefit of a regular pip installation is that you will get a tested, stable version of PmagPy. You can install everything you need quickly and easily, and you’ll be able to use the full functionality. The benefit of a developer installation is that you will be able to stay up-to-date with what other developers are working on. You will also be able to make changes and use those changes immediately. The downside is that the developer install may be buggier and requires you to make edits in your .bash_profile.

If you want to contribute to PmagPy or modify PmagPy programs for your own use, you should do a developer install, otherwise go with the regular pip install. The developer install follows these steps:

  1. (Optional) Install git on your computer and sign up for a Github account if you don’t already have one.
  2. Fork the PmagPy repo (if you have set up with git/Github as in step 1), or simply download the zipfile from the Github repository (see the example clone command after this list).
  3. Add PmagPy to your $PATH by adding these lines to your .bashrc or .bash_profile:
    export PATH=/Users/***/PmagPy:$PATH  
    export PATH=/Users/***/PmagPy/programs:$PATH  
    export PYTHONPATH=$PYTHONPATH:/Users/***/PmagPy/

    where *** is your username. Then restart your Terminal. At this point you should be able to use all PmagPy functionality as if you had done a full pip install. The only difference is that you will be able to stay up-to-date with PmagPy development and make edits in PmagPy code which will be immediately available in your system.

  4. Check out our contribution guidelines for information of all sorts, including how to raise issues, request features, and make pull requests.
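
For step 2, a minimal sketch of the clone command (replace *** with your username, as in step 3):

    git clone https://github.com/PmagPy/PmagPy.git /Users/***/PmagPy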

1.4 Next steps

Where do you want to go from here?

Chapter 2
Pmag GUI Tutorial

Pmag GUI is a Graphical User Interface (GUI) written by Lori Jonestrask and Ron Shaar that provides a quick path to the main PmagPy programs. The work flow is illustrated schematically here:

[figure]

Note: Pmag GUI is available for use with both 3.0 and 2.5 data. All new contributions should be built up using 3.0, but 2.5 may be used for legacy data sets and then later upgraded on the MagIC site. You may specify which data model you want to use on the command line using the switch “-DM”, or in the window that pops up when you first open Pmag GUI.

[figure]

The current tutorial is for the 2.5 format, but the usage is very similar. We will update with new, 3.0 pictures soon!

  1. Make sure you have followed all of the steps in the Installing PmagPy section.

  2. When you invoke Pmag GUI, the first step is to change directories into a ‘Project Directory’. For each study, create a directory with a name that relates to that study. Here we will call it ThisProject. This is where you will collect and process all the rock and paleomagnetic data for a given study, usually a publication. The project directory name should have NO SPACES and be placed on the hard drive in a place that has NO spaces in the path. Under certain Windows versions, this means you should not use your home directory, but create a directory called for example: D:\MyPmagProjects and put ThisProject there.
  3. Copy example files: In the data_files folder you will find a subfolder named Pmag_GUI. Copy the contents of its ThisProject directory into your own Project Directory folder.
  4. Start Pmag GUI either by typing pmag_gui.py on your command line, or (if you have the standalone), by double-clicking the Pmag GUI icon on your Desktop.
  5. Change directories by clicking on the “change dir” button and select your own ThisProject directory.

2.1 Converting magnetometer files to MagIC format

2.2 Optional: Calculate geographic / tilt-corrected direction

[figure]

2.3 Filling in the EarthRef data

Filling in the EarthRef data is a critical part of building a MagIC Project Directory. The EarthRef data relevant to this example are arranged in five er_* tables: er_specimens, er_samples, er_sites, er_locations, er_ages. To complete the ER data, click the button, and follow the directions in the help window in order:

2.4 Demag GUI quick start

This image shows the main panel of the demag GUI:

[figure]

Use of the Demag GUI is described in more detail in the Demag GUI section below. Here are a few instructions that can be used as a quick start for using Demag GUI. Note: If at any point you require help with a particular aspect of Demag GUI, clicking on [Help] [Usage and Tips] (hotkey: ctrl-h) and then clicking on the item you wish to know more about will provide a pop-up box with information for most aspects of the GUI and Interpretation Editor; see additional help for details.

To analyze the data in the example, follow these steps for each specimen:

2.5 Thellier GUI quick start

This image shows the main panel of Thellier GUI:

[figure]

Use of the Thellier GUI is described in more detail in the Thellier GUI section below and in this PDF document: Thellier GUI manual. Here are a few instructions that can be used as a quick start for using Thellier GUI.

To analyze the data in the example, follow these steps for each specimen:

2.6 Upload to the database

For this, you first click on the green ‘prepare upload txt file’ button on the main page of Pmag GUI. A file named ‘upload.txt’ will be created in your ThisProject directory. Now, go to the MagIC search interface. Click on the ‘create’ button and find or create your reference.

Click on ‘Upload’ in the Actions column and drag and drop the upload.txt file onto the ‘Drop a MagIC Text File’ window. Congratulations. Your data are now in the database. But they are not yet activated; you can do this by clicking on the ‘Activate’ button in the Actions column. Once you activate an uploaded dataset (only for published papers), it will be publicly available.

2.7 Downloading data from MagIC

Data can be downloaded from the MagIC database and examined with PmagPy tools. The MagIC search interface provides a rich variety of search filters available by clicking on the ‘Filter the MagIC Search Results’ text box. To locate data from a particular reference, simply substitute the digital object identifier (DOI) in your browser window:

http://earthref.org/MAGIC/doi/10.1029/2003GC000661

The above DOI will find the data for the paper by Tauxe et al. (2004). [This may fail in Safari; if so, use an alternative browser like Firefox or Chrome.] To download the data, simply click on the file icon below the column labeled ‘data’. This will save a file to your downloads folder. To unpack this file after downloading it from the database, click the ‘unpack downloaded txt file’ button in the Pmag GUI panel.
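
If you prefer the command line, the program download_magic.py (Section 5.2.34) can also unpack files downloaded from the database. A hedged example, assuming the downloaded file was saved as magic_contribution.txt in your project directory (check the program’s help message for the exact options):

    download_magic.py -f magic_contribution.txt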

2.8 Preparing for MagIC

2.8.1 Field and sampling information

There is an astounding number of different ways that paleomagnetists document data in the field and in the lab. This variety is met with a large number of method codes that describe sampling and orientation procedures (see http://earthref.org/MAGIC/methods.htm for a complete description). The MagIC database expects sample orientations to be the azimuth and plunge of the fiducial arrow used for measurement (see [Essentials, Chapter 9]) and the orientation of the bedding to be dip direction and downward dip, so no matter what your own preference is, it must be translated into the standard MagIC convention for use with the PmagPy programs and with Pmag GUI.

Pmag GUI supports two different ways of getting orientation and other sampling related information into a MagIC usable format. The first way is to click step 2 on the GUI front panel and fill in the data from within the GUI. That will work for many applications, but it may be desirable to fill in the spreadsheet separately from the GUI by using a tab delimited file (orient.txt format). By clicking on step 2 on the GUI front panel you create a file named demag_orient.txt, which has all of your sample names in it. Each orient.txt file should have all the information for a single location sensu MagIC.

[figure]

The next row has the names of the columns. The required columns are: sample_name, mag_azimuth, field_dip, date, lat, long, sample_lithology, sample_type, and sample_class, but there are a number of other possible columns. The optional fields in orient.txt formatted files are: [date, shadow_angle, hhmm], date, stratigraphic_height, [bedding_dip_direction, bedding_dip], [image_name, image_look, image_photographer], participants, method_codes, site_name, site_description, and GPS_Az. Column names in brackets must be supplied together, and the data for stratigraphic_height are in meters. Also note that if these are unoriented samples, just set mag_azimuth and field_dip to 0.

It is handy to document the lithology, type and material classification information required by MagIC. These are all controlled vocabularies listed at http://earthref.org/MAGIC/shortlists.htm. For archaeological materials, set the lithology to “Not Specified”.

Put in stratigraphic height, sun compass, differential GPS orientation information under the appropriate column headings. You can also flag a particular sample orientation as suspect by having a column ‘sample_flag’ and setting it to either ‘g’ for good or ‘b’ for bad. Other options include documenting digital field photograph names and who was involved with the sampling.

For Sun Compass measurements, supply the shadow_angle, date and time. The date must be in mm/dd/yy format. If you enter the time in local time, be sure you know the offset to Universal Time as you will have to supply that when you import the file. Also, only put data from one time zone in a single file. The shadow angle should follow the convention shown in this figure (from Tauxe et al., 2010):

[figure]

Supported sample orientation schemes:

There are options for different orientation conventions (drill direction with the Pomeroy orientation device [drill azimuth and hade] is the default), different naming conventions and a choice of whether to automatically calculate the IGRF value for magnetic declination correction, supply your own or ignore the correction. The program generates er_samples.txt and er_sites.txt files. Be warned that existing files with these names will be overwritten.

All images, for example outcrop photos, are supplied as a separate zip file. Here image_name is the name of the picture you will import, image_look is the “look direction” and image_photographer is the person who took the picture. This information will be put in a file named er_images.txt and will ultimately be read into the er_image table in the console, where additional information must be entered (keywords, etc.).

Often, paleomagnetists note when a sample orientation is suspect in the field. To indicate that a particular sample may have an uncertainty in its orientation that is greater than about 5°, enter SO-GT5 in the method_codes column, along with any other special codes pertaining to a particular sample from the method codes table. Other general method codes can be entered later. Note that unlike date and sample_class, the method codes entered in orient.txt pertain only to the sample on the same line.

Samples are oriented in the field with a “field arrow” and measured in the laboratory with a “lab arrow”. The lab arrow is the positive X direction of the right handed coordinate system of the specimen measurements. The lab and field arrows may not be the same. In the MagIC database, we require the orientation (azimuth and plunge) of the X direction of the measurements (lab arrow). Here are some popular conventions that convert the field arrow azimuth (mag_azimuth in the orient.txt file) and dip (field_dip in orient.txt) to the azimuth and plunge of the laboratory arrow (sample_azimuth and sample_dip in er_samples.txt). The two angles, mag_azimuth and field_dip, are explained below.

[1] Standard Pomeroy convention of azimuth and hade (degrees from vertical down) of the drill direction (field arrow). sample_azimuth = mag_azimuth; sample_dip = -field_dip.

[figure]

[2] Field arrow is the strike of the plane orthogonal to the drill direction, and field dip is the hade of the drill direction. Lab arrow azimuth = mag_azimuth-90; Lab arrow dip = -field_dip.

[figure]

[3] Lab arrow is the same as the drill direction; hade was measured in the field. Lab arrow azimuth = mag_azimuth; Lab arrow dip = 90-field_dip.

[figure]

[4] Lab arrow orientation same as mag_azimuth and field_dip.

[figure]

[5] Lab arrow azimuth is mag_azimuth and lab arrow dip is field_dip-90.

[figure]

[6] Lab arrow azimuth is mag_azimuth-90, Lab arrow dip is 90-field_dip, i.e., the field arrow was strike and dip of orthogonal face:

[figure]
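
The six conversions above can be summarized in a few lines of Python (a sketch only; the helper field_to_lab is hypothetical and not part of PmagPy):

    def field_to_lab(mag_azimuth, field_dip, convention=1):
        """Return (sample_azimuth, sample_dip) of the lab arrow, given the
        field arrow angles, for orientation conventions [1]-[6] above."""
        if convention == 1:    # Pomeroy: drill azimuth and hade
            return mag_azimuth % 360., -field_dip
        if convention == 2:    # strike of plane orthogonal to drill direction
            return (mag_azimuth - 90.) % 360., -field_dip
        if convention == 3:    # lab arrow is drill direction; hade measured
            return mag_azimuth % 360., 90. - field_dip
        if convention == 4:    # field angles are already the lab arrow
            return mag_azimuth % 360., field_dip
        if convention == 5:
            return mag_azimuth % 360., field_dip - 90.
        if convention == 6:    # strike and dip of orthogonal face
            return (mag_azimuth - 90.) % 360., 90. - field_dip
        raise ValueError("unknown orientation convention")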

Structural correction conventions:

Because of the ambiguity of strike and dip, the MagIC database uses dip direction and dip, where dip is positive and ranges from 0° to 180°. Dips > 90° indicate overturned beds.
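
A quick worked conversion (a sketch, assuming right-hand-rule notation in which the bed dips 90° clockwise from strike):

    strike, dip = 135., 20.                # right-hand-rule strike/dip
    dip_direction = (strike + 90.) % 360.  # MagIC convention: 225., 20.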

Supported sample naming schemes:

            [1] XXXXY: where XXXX is an arbitrary length site designation and Y  
                is the single character sample designation.  e.g., TG001a is the  
                first sample from site TG001.    [default]  
            [2] XXXX-YY: YY sample from site XXXX (XXXX, YY of arbitrary length)
            [3] XXXX.YY: YY sample from site XXXX (XXXX, YY of arbitrary length)
            [4-Z] XXXX[YYY]:  YYY is sample designation with Z characters from site XXX  
            [5] site name = sample name  
            [6] site name entered in site_name column in the orient.txt format input file  
            [7-Z] [XXX]YYY:  XXX is site designation with Z characters from samples  XXXYYY
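
For illustration, the first few conventions can be expressed in Python (a sketch; site_from_sample is a hypothetical helper, not a PmagPy function):

    def site_from_sample(sample, convention=1, Z=1):
        """Return the site name implied by a sample name for naming
        conventions [1]-[4] above."""
        if convention == 1:     # XXXXY: strip the single sample character
            return sample[:-1]
        if convention == 2:     # XXXX-YY
            return sample.split('-')[0]
        if convention == 3:     # XXXX.YY
            return sample.split('.')[0]
        if convention == 4:     # XXXX[YYY]: strip Z trailing characters
            return sample[:-Z]
        raise ValueError("convention not handled in this sketch")

    print(site_from_sample('TG001a'))   # -> 'TG001' (default convention [1])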

When you are finished with editing the orient.txt file, return to step 2 on the GUI front panel.

Data files downloaded from the IODP (LIMS) database

There are two types of files that help in plotting of IODP paleomagnetic data sets: the core summaries with depth to core top information and the sample information that contains lists of samples taken. Visiting the IODP science query website at http://web.iodp.tamu.edu/WTR/html/sci-data.html allows you to select ‘SRM - Remanence of magnetization’ under the Analysis scroll down menu. By picking the expedition, site, hole, etc. you can download a .csv (comma separated values) file of the expedition data. (Be aware that this is the rawest form of the data, including disturbed intervals, bad measurements, core ends, etc., and may not be exactly what ended up getting published!) First click on the “Show Report” button, then “Expand Table”, then “Get File”:

[figure]

This can take a very long time, so get yourself a cup of tea.

You can also (while you’re at it) click on the ‘Summaries’ tab and download the coring summaries:

[figure]

Place both of the downloaded files in your MyFiles directory.

2.8.2 Supported Rock Magnetometer files

The MagIC database is designed to accept data from a wide variety of paleomagnetic and rock magnetic experiments. Because of this, the magic_measurements table is complicated. Each measurement only makes sense in the context of what happened to the specimen before measurement and under what conditions the measurement was made (temperature, frequency, applied field, specimen orientation, etc). Also, there are many different kinds of instruments in common use, including rock magnetometers, susceptibility meters, Curie balances, vibrating sample and alternating gradient force magnetometers, and so on. We have made an effort to write translation programs for the most popular instrument and file formats and continue to add new supported formats as the opportunity arises. Here we describe the various supported data types and tell you how to prepare your files for importing. In general, all files for importing should be placed in the MyFiles directory or in subdirectories therein as needed. If you don’t see your data type in this list, please send an example file and a request to: ltauxe@ucsd.edu and we’ll get it in there for you.

The supported file formats are:

Rock Magnetometer Files:

Anisotropy of Magnetic Susceptibility files:

Hysteresis file formats

Pmag GUI will import hysteresis data from room temperature Micromag alternating gradient magnetometers (AGM) in several different ways. You can import either hysteresis loops or backfield curves, or you can import whole directories of the same. In the latter case, the file endings must be either .agm (.AGM) or .irm (.IRM) and the first part of the file name must be the specimen name. See the documentation for agm_magic.py for examples.

Now that you’ve collected together all the files you need, you can start importing them into the MagIC directory with Step 1 in Pmag GUI.

Chapter 3
MagIC GUI Tutorial

MagIC GUI is a Graphical User Interface written by Lori Jonestrask. It is meant for efficiently compiling a contribution for uploading to the MagIC database. MagIC GUI is specifically designed for making a contribution without measurement data; if you ARE including measurement data, we recommend that you use Pmag GUI instead. MagIC GUI allows you to add data at the location, site, sample, and specimen levels. You can also add results and ages. The program uses an Excel-like grid interface for entering and editing data. Useful features include: pop-up menus with controlled vocabularies, multi-cell pasting from external spreadsheets, and built-in validations for MagIC database requirements. Note: magic_gui.py uses the current MagIC data model, 3.0. For working with legacy data in the 2.5 format, you should instead use magic_gui2.py. The tutorial below was written specifically for the older magic_gui2.py but will soon be updated with illustrations and instructions from magic_gui.py. The two programs do work almost identically, so the tutorial should still point you in the right direction even if you are working with 3.0 data.

3.1 Starting with MagIC GUI

Start up MagIC GUI. You’ll do this by clicking on the icon (if you downloaded the standalone version) or entering ‘magic_gui.py’ on your command line (if you downloaded the full PmagPy installation). If you are using Anaconda Python, you will type ‘magic_gui_anaconda’ on your command line instead.

When you first start magic_gui.py, you will change directories into a ‘Project Directory’. For each study, create a directory with a name that relates to that study. Here we will call it MyProject. This is where you will collect and process all the rock and paleomagnetic data for a given study, usually a publication. The project directory name should have NO SPACES and be placed on the hard drive in a place that has NO spaces in the path. Under certain Windows versions, this means you should not use your home directory, but create a directory called for example: D:\MyPmagProjects and put MyProject there.

[figure]

3.2 MagIC GUI example dataset

Now we’ll walk through a simple data input process, with fake data.

Chapter 4
Survival computer skills

The ‘Py’ part of ‘PmagPy’ stands for Python, the language in which all the code is written. It is not essential, but it is helpful, to understand a bit about computer operating systems and the Python language when using PmagPy, because no one should be using programs as black boxes without understanding what they are doing. As all the programs are open source, you have the opportunity to look into them. If you understand a bit about how computers work yourself, you will be able to follow along with what the programs are doing and even modify them to work better for you. In this chapter, you will find a brief introduction to the computer skills necessary for using the programs properly. We have tried to make this tutorial operating system independent. All the examples should work equally well on Mac OS, Windows and Unix-type operating systems. For a more complete explanation of the marvelous world of UNIX, refer to the website at http://www.tutorialspoint.com/unix/unix-quick-guide.htm. For handy tricks in DOS, try this link: http://www.c3scripts.com/tutorials/msdos. For an introduction to programming in Python, see the Python Programming Chapter. For now, we are interested in having the skills to find a command line and navigate the file system in order to get started with PmagPy.

4.1 Finding your command line

If you are not using a Unix-like computer (*NIX), you may never have encountered a command line. Using any of the command line programs requires accessing the command line. If you are using the MacOS operating system, look for the Terminal application in the Utilities folder within the Applications folder. When the Terminal application is launched, you will get a terminal window. The Unix (and MacOS) Bash shell has a $ sign as a prompt. Other shells have other command line prompts, such as the antiquated ‘C-shell’ used by Lisa Tauxe (don’t ask), which has the % prompt used in the examples here.

[figure]

Under the Windows operating system, you can find your command line by searching for the “cmd” application. You’ll use this if you are using the full pip installation of PmagPy. If you are doing the standard PmagPy install, you will create a shortcut to the PmagPy command window on your desktop. Double clicking on that will give you a window in which you can type PmagPy commands. You will use the PmagPy prompt instead of the Windows command line from there on.

[figure]

Note that the location of this program varies on different computers, so you may have to hunt around a little to find yours. Also, the actual “prompt” will vary for different machines.

4.2 File systems

When you first open a terminal window, you are in your “home” directory. Fundamental to all operating systems is the concept of directories and files. On window-based operating systems (MacOS or Windows), directories are depicted as “folders” and moving about is accomplished by clicking on the different icons. In the world of terminal windows, the directories have names and are arranged in a hierarchical sequence with the top directory being the “root” directory, known as “/” (or C:\ in Windows), and the file system looks something like this:

[figure]

Within the root directory, there are subdirectories (e.g. Applications and Users in bold face). In any directory, there can also be “files” (e.g. dir_cart_example.dat in italics). To refer to directories, the operating system relies on what is called a “pathname”. Every object has an “absolute” pathname which is valid from anywhere on the computer. The absolute pathname in *NIX always begins from the root directory / and in DOS (the operating system working in the Windows command line window), it is C:\.

The absolute pathname to the home directory lisa in the figure is /Users/lisa. Similarly, the absolute pathname to the directory containing PmagPy scripts would be /Users/lisa/PmagPy. There is also a “relative” pathname, which is in reference to the current directory (the one you are ‘sitting’ in). If user “lisa” is sitting in her home directory, the relative pathname for the file dir_cart_example.dat in the directory data_files would be data_files/dir_cart/dir_cart_example.dat. When using relative pathnames, it is useful to remember that ./ refers to the current directory and ../ refers to the directory “above”. Also, lisa’s home directory would be ~lisa, or if you are logged in as lisa yourself, then it is just ~.

4.3 Moving around in the file system

Now that you have found your command line and are comfortable in your home directory, you can view the contents of your directory with the Unix command ls or the DOS command dir. You can make a new directory with the command

% mkdir NEW_DIRECTORY_NAME

(This command works in both Unix and DOS environments.) You can move into your new directory with the command

% cd NEW_DIRECTORY_NAME

To move back up into the home directory, just type cd .., remembering that .. refers to the directory above. Also, cd by itself will transport you home from wherever you are (there’s no place like home....). You can also change to any arbitrary directory by specifying the full path of the destination directory.

4.4 Redirecting input and output

Programs that operate at the command line level print output to the screen and read input from the keyboard. This is known as “standard input and output” or “standard I/O”. One of the nicest things about working at the command line level is the ability to redirect input and output. For example, instead of typing input to a program with the keyboard, it can be read from a file using the symbol <. Output can either be printed to the screen (standard output), redirected into a file using the symbol >, appended to the end of a file with >> or used as input to another program with the pipe operator (|).
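
For example (a sketch using the dir_cart_example.dat file mentioned earlier; like many PmagPy programs, dir_cart.py can read from standard input, but check its help message; more_dirs.dat is a hypothetical second input file):

% dir_cart.py < data_files/dir_cart/dir_cart_example.dat > cart.out
% dir_cart.py < more_dirs.dat >> cart.out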

4.5 Text editors

There are many ways of editing text and the subject is beyond the scope of this documentation. Text editing is a blessing and a curse. You either love it or hate it, and in the beginning, if you are used to programs like Word, you will certainly hate it. (And if you are used to a decent text editor, you will hate Word!) But you can’t use Word because the output is in a weird format that no scripting languages read easily. So you have to use a text editor that produces a plain (ascii) file, like Notepad, TextWrangler, Sublime Text or Atom. TextWrangler is free software available for Macs, Notepad comes standard with the Windows operating system, and the Atom text editor is a free cross-platform option with lots of nice packages available that extend its functionality. Enthought Python’s Canopy programming environment comes with its own text editor.

Chapter 5
The PmagPy software package

The PmagPy software package is a comprehensive set of programs for paleomagnetists and rock magnetists. For installation, follow the instructions in the Installing PmagPy Chapter in this cookbook.

When you type something on your command line, your operating system looks for programs of the same name in special places. These are special “paths” so the directory with your Python scripts has to “be in your path”. To inform the operating system of the new directory, you need to “set your path”. It should have been set when you installed PmagPy.

5.1 Graphical data analysis programs

5.1.1 demag_gui.py

The Demag GUI (demag_gui.py) program enables the display and analysis of paleomagnetic demagnetization data. The software will display specimen level data within the chosen directory as a Zijderveld plot, an equal area plot and an intensity plot. Interpretations of the data can be made as least-squares line or plane fits. Mean directions can be calculated and displayed for these interpretations. These interpretations can be exported as MagIC pmag files from the program.

Launching

The best way to launch the Demag GUI application is through Pmag GUI. If you have installed PmagPy using pip, (or if PmagPy has been added to your PATH), you can type pmag_gui.py at the command line to launch it. Anaconda users will instead type ‘pmag_gui_anaconda’.

Alternatively, you can navigate to the PmagPy home directory and type python ./pmag_gui.py. Within Pmag GUI, data can be converted from the format of a particular lab into MagIC format so that it can be displayed and analyzed within Demag GUI. The program can be started by clicking on the Demag GUI button in Pmag GUI.

If you want to launch Demag GUI directly (assuming PmagPy was properly installed using pip or has been added to your PATH), you can simply type demag_gui.py at the command line. Anaconda users will type ‘demag_gui_anaconda’ instead. Note: on OSX it is recommended to launch through Pmag GUI, as the drop-down boxes under wxPython 2.9 behave better when Demag GUI is launched this way.

[figure]

Demag GUI can also be launched through the command line by navigating to the directory containing demag_gui.py and running it with:

% python ./demag_gui.py

Adding and Editing Specimen Interpretations:

A new fit can be added by clicking the ‘add fit’ button or, if no fit has yet been created for the current specimen, by double clicking on two measurements in the list of measurements to the left or by double clicking on the data points on the Zijderveld plot. It is also possible to add interpretations en masse using the interpretation editor tool described below. Additionally, you can select the fit you would like to edit or view by using the drop-down menu under the add fit button. Once you have selected a fit, its end points will turn to diamond shapes on all plots to distinguish them from the other data points.

[figure]

Once the desired fit is selected, its bounds can be edited using the drop-down boxes under the bounds header.

[figure]

Another way to select bounds is to double-click the list of measurement steps in the list on the left. The steps included in the currently selected interpretation are highlighted in blue on the measurement list, and the measurements marked “bad” are highlighted in red. In the case of duplicate measurements, the first good measurement with the same treatment is used as a bound. All points between the selected bounds that are flagged good (i.e. not flagged bad and marked in red), including duplicate measurements, will be included in the interpreted fit.

[figure]

Finally, you can select the bounds of an interpretation directly off the Zijderveld plot by hovering your mouse over a measurement (the cursor should change to a hand shape) and double clicking.

When first created, the fit will be given a generic name such as Fit 1. The name of the fit can be changed from the default by typing into the drop-down box containing the fit name and then pressing enter. The default fit type is a least-squares line. You can choose different fits, such as a line anchored to the origin or a plane, by using the drop-down menu under the label ‘interpretation type’. Plane fits can be plotted as either poles, full planes, partial planes, best fit vectors, or best fit vectors and full plane (Note: plane poles will be displayed as squares and best fit vectors as sideways triangles on the high level mean plot). This display option can be changed in the second drop-down menu under interpretation type.

[figure]

The properties of the currently selected fit to the data can be seen in the upper center of the GUI in a box labeled ‘Interpretation Directions and Statistics’.

[figure]

Deleting Specimen Interpretations

If you would like to delete a single interpretation, select the one you wish to delete from the interpretation drop-down menu and click delete. If you wish to clear all interpretations you may go into the interpretation editor located under the tools menu, select the fits you wish to delete and click the “delete selected” button.

Changing Specimen, Coordinate System, and Zijderveld Direction:

You can switch the current specimen by clicking the next or previous button under the specimen box in the side bar (hotkeys: ctrl-right and ctrl-left, respectively). You can also select a specimen from the drop-down menu or type the name of the specimen directly into the specimen box and hit enter to go directly to that specimen. Finally, you can double click on any of the points on the higher level means plot to switch directly to that specimen and interpretation.

The choice between coordinate systems (i.e. specimen, geographic or tilt-corrected) is available on the left above the list of steps. The data list and the plots will update to reflect the chosen coordinate system.

You can alter the X axis of the Zijderveld plot using the Zijderveld plot interactions box to set X=North, East, or NRM dec.

[figure]

Flagging Bad Measurement Data

Due to flux jumps or other such errors, individual measurements should sometimes be excluded from interpretation. Such measurements can be flagged as “bad” by right clicking them within the measurement list; the measurement will then be highlighted in red. Additionally, you can double right click on the point you want to mark bad in the Zijderveld plot to toggle it bad. The measurement_flag in the magic_measurements file will be changed from “g” to “b” when a measurement is marked as bad, and the step will not be included in fits that are made to the data. Any measurement marked as bad will be colored red in the step list and will be shown as an empty rather than filled circle on the Zijderveld, equal area and M/M_0 plots. To change a bad measurement back to being good, one can right click on it again. Upon doing so, the red highlighting will go away, the data will be shown colored in within the plots, and any fit that spans that data point will be recalculated to include it.

Acceptance criteria can be set by using the menu option [Analysis] [Acceptance Criteria] [Change Acceptance Criteria]. These criteria will be written to a pmag_criteria.txt table. They can then be used to exclude interpretations that fail checks against these criteria during export.

Plot Interface

The four plots that take up the majority of the center of the GUI are where data and their interpretations are displayed. All plots are initially set to zoom mode, signified by a cross shaped cursor when you mouse over them. To zoom, simply click and drag the rectangle to the desired area. You can switch to pan mode by right clicking on any one of the graphs; then clicking and dragging will pan around the plot. Finally, to return to the original plot zoom level and position, simply click the middle mouse button. Note: In the absence of a middle mouse button, pressing both right and left mouse buttons at the same time works on most laptops; on MacBooks, clicking with two fingers should work; and if you are using Apple’s Magic Mouse, we recommend you download the MagicPrefs program, which will allow you to configure your mouse however you prefer.

On the Zijderveld plot you have the additional option to select the current interpretation’s bounds by double clicking on a measurement point. You can also double right click on a measurement point in the Zijderveld plot to mark it bad.

On the equal area plots, both the specimen and high level means, you can double click on any interpretation to switch to that specimen and interpretation immediately.

Saving Specimen Interpretations

Once you have picked out your interpretations, you can save the session data in two different ways: (1) as a .redo file, which preserves the fits to be viewed again with Demag GUI, or (2) as MagIC pmag_* tables to be uploaded to the MagIC database or otherwise processed. In addition, you may save image files of the plots.

The .redo File: You can use the menu option [File] [Save current interpretations to a redo file] to create this file type; alternatively, you can just click the save button next to add fit, or use the hotkey ctrl-s. The advantage of the .redo file type is that it is designed to save your place when analysing a large dataset. Loading a redo file will reload all interpretations previously created, along with any special colors assigned to them, and take you to the specimen you saved the redo file on, allowing you to pick up where you left off. Note: This file type does NOT load previous interpretations on start up; you must go to the menu option [File] [Import previous interpretations from a redo file] (hotkey: ctrl-r) to restore your previous session.

The Pmag Tables: By going to the menu [File] [Save MagIC pmag tables] you can export your interpretations made in Demag GUI to the MagIC pmag tables, which can then be used by other MagIC programs or uploaded to the MagIC database. You can export any or all of the three coordinate systems upon selecting this option, and you may choose to save pmag_samples, pmag_sites, and pmag_results tables in addition to the pmag_specimens table that is output. If you choose to output additional information, you will be prompted by a pop up window for that information. Note: This save format loads on start up of the GUI, immediately restoring your interpretations. Selection of this option will overwrite your demag_gui.redo file in the working directory.

Images of Plots: Select the menu option [File] [Save plot] [Save all plots] to save all plots, or you can save any of the plots individually. If you zoom or pan any of the plots, the shifted image will be saved rather than the originally plotted image, although the plot will redraw and reset to the original image in the GUI.

Flag Specimen Interpretations Good or Bad

You can flag the current specimen interpretation (marked by large diamonds on all plots) good or bad by using the menu option [Analysis] [Flag Interpretations]. The list of interpretations in the interpretation editor tool of Demag GUI can also be used to toggle interpretations good or bad in the same way that measurements can be marked good or bad in the measurement list, by right clicking on the entry you want toggled. This will change the shape of the interpretation to a small diamond on all plots, remove it from use in any higher level means, and mark the specimen_flag entry in the pmag tables ‘b’ instead of ‘g’ to signify this.

Sample Orientation

You can check sample orientation by using the menu option [Analysis] [Sample Orientation] [Check Sample Orientations] (hotkey: ctrl-o). This function will set your mean options to Fisher of all components at the current site level and display the wrong arrow (up triangle), wrong compass (down triangle), and rotated sample for declination increments of 5 degrees (dotted circle). This allows you to check if the sample orientation is correct and thus can be used in analysis. If you determine the current sample orientation to be bad, you can mark it as such using the menu option [Analysis] [Sample Orientation] [Mark Sample Bad] (hotkey: ctrl-.). This will change the sample_orientation_flag in the er_samples file to ‘b’ (not ‘g’) and will prevent you from marking the specimen interpretations good in that sample, so you do not use the improperly oriented data in your final results. If you later realize this was a mistake, you can mark the sample orientation good again using [Analysis] [Sample Orientation] [Mark Sample Good] (hotkey: ctrl-,). Finally, to turn off the check sample orientations display, simply select the [Check Sample Orientations] option again and it will be removed. Note: The current sample is specified as the sample of the current specimen.

[figure]

High Level Means Plot and Statistics

The set of drop-down boxes to the right of the interpretation data are there to determine what level you want to analyse in the high level means plot and are grouped into the Display Level and Mean Options boxes.

[figure]

The Display Level boxes consist of the upper drop-down menu, which allows you to select the level at which interpretations are displayed (all interpretations in the current sample, site, location, or study), and the lower drop-down menu, which lets you select what the current sample, site, location, or study is.

The top drop-down menu in the Mean Options box lets you choose what kind of mean you would like to take of the specimen components currently displayed. The lower drop-down menu lets you pick which specimen components to display, allowing you to display All components, No components, or any single component.

The mean statistics for the chosen high level mean are displayed in the lower right of the GUI and can be cycled through using the arrow buttons next to the statistics boxes in the case of multiple high level means.

It is possible to toggle on and off displaying any one of the means in the high level plot which can be useful in the case of a cluttered graph of all components. This can be done by going to the menu option [Analysis] [Toggle Mean Display] and selecting the name of the component you would like to toggle.

All interpretations marked bad will appear as small diamonds regardless of type on the high level mean plot. The figure below gives examples, for a number of plane display options, of bad interpretations (the symbols off to the side), best fit vectors to the means (sideways triangles), plane poles (squares), and the planes themselves.

[figure]

Interpretation Editor

In order to more easily view and edit specimen interpretation data, there is a specimen interpretation editor which can be launched using the menu option [Tools] [Interpretation Editor] (hotkey: ctrl-e). Note: If you would like more help than provided here, the interpretation editor has the same in-context help as the main GUI; see additional help for details.

List of Interpretations: This panel contains a list which details the fits made to the data and their parameters, from which you can select which interpretation to view by double clicking it. In the list, the currently selected interpretation is highlighted blue as shown in the image below. You can mark interpretations as bad, which removes them from any Fisher means or other high level means, by right clicking on their entry in the list. All interpretations marked bad are colored red in the list and marked as a small diamond on the plot. The specimen entry associated with such a fit will be given a bad (‘b’) flag within the pmag_specimens table. You can search through interpretations by using the search bar above the list. Finally, interpretations can be highlighted by clicking on the list and holding the shift or ctrl/command key to select multiple interpretations.

Buttons and Boxes: Highlighting entries allows you to delete or alter the characteristics of multiple interpretations at once without having to select each one in turn. This mass alteration is possible using the Name/Color/Bounds boxes to input the changes and then clicking the “apply changes to highlighted fits” button. You can delete highlighted fits using the “delete highlighted fits” button. The “add fit to highlighted specimens” button in the interpretation editor adds a fit to all highlighted specimens in the list on the left. You can use the “add new fit to all specimens” button as a convenient option to add a new interpretation, with all the attributes described in the Name/Color/Bounds boxes, to every specimen with measurement data. This is a useful method for quickly analysing a new dataset by checking for components between common unblocking steps, e.g., giving every specimen a general magnetite interpretation spanning roughly where that component should be (e.g., bounds=300C to 580C, name=MAG, color=violet).

Additional High Level Means Options: The interpretation editor also allows the displaying of site and sample means as the primary points on the high level mean plot by changing the bottom left display options drop-down box. It does not yet allow taking Fisher means of sample means or site means, however, so the mean type box will be forced to read “None” if this option is changed from specimens.

[figure]

VGP Viewer

Another tool offered by Demag GUI is the VGP (Virtual Geomagnetic Pole) Viewer, which allows you to view your VGPs before exporting to MagIC tables. This tool can be opened by using the menu option [Tools] [View VGPs] (hotkey: ctrl-shift-v). The VGP viewer requires latitude and longitude data for sites and locations in order to calculate all VGPs from specimen interpretations; if these data are not already contained in the MagIC tables imported when the GUI started, they will be asked for during calculation, so have them ready. The VGP viewer allows you to select viewing sample, site, or location VGPs from the drop-down menu at the top. Plot interactions are the same here as in the main GUI: plots can be zoomed, panned, and points selected in the same manner. The list on the left shows all the data for the currently displayed VGPs.

[figure]

Current Warnings Box

The white box in the far top right of the GUI is there to provide relevant warnings about aspects of the current specimen and interpretation, such as missing or duplicate data, and can be useful in debugging and analysis.

Additional Help

Finally, if you need more help working with Demag GUI, it offers in-context assistance with the menu option [Help] [Usage and Tips] (hotkey: ctrl-h), which will change your cursor and then let you click on whichever aspect of the GUI you want help with. Once you have clicked, a yellow pop-up box with information should appear. Not all features have help information, but most do.

[figure]

5.1.2 thellier_gui.py

[Essentials Chapter 10] and [MagIC] [Thellier_GUI_full_manual.pdf]

The program Thellier GUI (thellier_gui.py) combines functions from thellier_magic.py and new tools described by Shaar and Tauxe (2013) in a user-friendly graphical user interface (GUI).

As with Demag GUI, Thellier GUI can be called from the command line or from within Pmag GUI.

To launch Thellier GUI independently, find your command line and enter:

thellier_gui.py

If using Anaconda Python, you will instead use:

thellier_gui_anaconda

Once open, Thellier GUI loads files already prepared in a ThisProject directory, and the interpretations from Thellier GUI are part of the workflow of Pmag GUI. This section is a brief introduction on how to use Thellier GUI as a standalone application. Much more information is available in this manual: Thellier GUI full manual.

A complete list of the definitions for the paleointensity statistics used by Thellier GUI is available as a supplement to the article by Paterson et al. (2014) and can be downloaded here:

http://onlinelibrary.wiley.com/store/10.1002/2013GC005135/asset/supinfo/ggge20412-sup-0001-suppinfoCORRECTED.pdf?v=1&s=e1c3ab0a86c942d1039f6d2e15496aa172dc86ec

After launching the program, a “choose project directory” dialog window will appear as soon as the GUI is started. Your ThisProject directory should include a file named magic_measurements.txt (created, for example, by Pmag GUI). If a file named rmag_anisotropy.txt is in the project directory, then the program reads in the anisotropy data. Reading and processing the measurements files may take several seconds, depending on the number of specimens.

Reading and compiling measurements data

When your ThisProject project directory is selected, the program reads all the measurement data, checks them, processes them and sorts them. If non-linear-TRM (NLT) data exist in magic_measurements.txt, then the program tries to process the data using Equations (1)-(3) in Shaar et al. (2010). The program reads magic_measurements.txt and processes the measurements for presentation as Arai and Zijderveld plots. We recommend that you check all the warnings and errors in Thellier_GUI.log before starting to interpret the data. For details about warnings and error messages during these steps, consult the tutorial document in the thellier_GUI folder in data_files. Also, consult the Preferences to change certain plotting options.

Main panel

This figure shows a snapshot of the main panel.

PIC

The top field in the panel includes the following buttons/controls (from left to right):

The center of the main panel has these elements:

The bottom of the main panel includes paleointensity statistics. The first line has the threshold values (empty if N/A). The second line gives the specimen’s statistics. For details, see Appendix A1 in Shaar and Tauxe (2013).

5.1.3 MagIC GUI

This application provides a simple interface for transforming data from a paleomagnetic or rock magnetic study into the format necessary for contribution to the MagIC database.

5.2 Command line PmagPy programs

PmagPy scripts are run by calling them on a command line. The Python scripts must be placed in a directory that is in your “path”. To see whether this has been done properly, type dir_cart.py -h on the command line; you should get a help message. If you get a “command not found” message, you need to fix your path; check the “installing python” page on the software website. Another possible cause for failure is that somehow the Python scripts are no longer executable. To fix this, change directories into the directory with the scripts, and type the command: chmod a+x *.py

For people who hate command line programs and prefer graphical user interfaces with menus, etc., some of the key programs for interpreting paleomagnetic and rock magnetic data are packaged together in a program called Pmag GUI. This can be invoked by typing pmag_gui.py on the command line. The Pmag GUI program generates the desired commands for you behind the scenes, so you do not have to learn UNIX or how to use the command line (except to call Pmag GUI itself). Nonetheless, some understanding of what is actually happening is helpful, because Pmag GUI is more limited than the full range of PmagPy programs. So, here is a brief introduction to how the PmagPy programs work.

All PmagPy programs print a help message out if you type: program_name.py -h on the command line. Many have an “interactive” option triggered by typing program_name.py -i. Many also allow reading from standard input and output. The help message will explain how each particular program functions. There are some common features for the command line options:

The PmagPy scripts call on two special modules, the pmag and the pmagplotlib modules. These contain most of the calculations and plotting functions.
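
For example, the conversion routines behind dir_cart.py and cart_dir.py can be called directly from a Python interpreter. A minimal sketch (assuming a full PmagPy install so that the pmag module is importable; in older releases the import may be simply "import pmag"):

import pmagpy.pmag as pmag

# declination, inclination, intensity -> cartesian coordinates
cart = pmag.dir2cart([20., 46., 1.3])
print(cart)                 # ~ [0.84859, 0.30886, 0.93514]

# ...and back again: cartesian -> declination, inclination, intensity
print(pmag.cart2dir(cart))  # ~ [20., 46., 1.3]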

The source code and the help messages for all programs in the PmagPy package are also available online at:

https://github.com/PmagPy/PmagPy

A link to these source code/help menu files is provided for each program listed below with the format [program_name_docs].

Examples of how to use command line PmagPy programs are given below. In all examples, the ’$’ prompt stands for whatever command line prompt you have. The example data files referred to in the following are located in the data_files directory bundled with the PmagPy software distribution.

5.2.1 aarm_magic.py

[Essentials Chapter 13] [MagIC Database] [aarm_magic docs]

Anisotropy of anhysteretic or other remanence can be converted to a tensor and used to correct natural remanence data for the effects of anisotropy remanence acquisition. For example, directions may be deflected from the geomagnetic field direction or intensities may be biased by strong anisotropies in the magnetic fabric of the specimen. By imparting an anhysteretic or thermal remanence in many specific orientations, the anisotropy of remanence acquisition can be characterized and used for correction. We do this for anisotropy of anhysteretic remanence (AARM) by imparting an ARM in 9, 12 or 15 positions. Each ARM must be preceded by an AF demagnetization step. The 15 positions are shown in the k15_magic.py example.

For the 9 position scheme, aarm_magic.py assumes that the AARMs are imparted in positions 1,2,3, 6,7,8, 11,12,13. Someone (a.k.a. Josh Feinberg) has kindly made the measurements and saved them in an SIO formatted measurement file named arm_magic_example.dat in the datafile directory called aarm_magic. Note the special format of these files: the treatment column (column #2) has the position number (1, 2, 3, 6, etc.) followed by either a “00” for the obligatory zero field baseline step or a “10” for the in-field step. These could also be ‘0’ and ‘1’.

We need to first import these into the magic_measurements format and then calculate the anisotropy tensors. These can then be plotted or used to correct paleointensity or directional data for anisotropy of remanence.

So, first use the program sio_magic.py to import the AARM data into the MagIC format. The DC field was 50 μT, the peak AC field was 180 mT, the location was ‘Bushveld’ and the lab protocol was AF and Anisotropy. The naming convention used Option # 3 (see help menu).

Then use the program aarm_magic.py to calculate the best-fit tensor and write out the MagIC tables: rmag_anisotropy and rmag_results. These files can be used to correct remanence data in a pmag_specimens format table (e.g., intensity data) for the effects of remanent anisotropy (e.g., using the program thellier_magic_redo.py).

Here is a transcript of a session that works. Note that the sio_magic.py command is all on one line (which is what the trailing backslash means).

$ sio_magic.py -f arm_magic_example.dat -loc Bushveld -LP AF:ANI -F aarm_measurements.txt \  
      -ncn 3 -ac 180 -dc 50 -1 -1  
 
results put in  aarm_measurements.txt  
 
 
$ aarm_magic.py  
specimen tensor elements stored in  ./arm_anisotropy.txt  
specimen statistics and eigenparameters stored in  ./aarm_results.txt  

5.2.2 angle.py

[Essentials Appendix A.3.4] [angle docs]

Use the program angle.py to calculate the angle (α) between two directions, D = 350.2, I = 26.8 and D = 98.6, I = 67.3.

$ angle.py -i  
Declination 1: [cntrl-D  to quit] 350.2  
Inclination 1: 26.8  
Declination 2: 98.6  
Inclination 2: 67.3  
   72.1  
Declination 1: [cntrl-D  to quit] ^D  
Good bye

[NB: PC users will get a more angry sounding exit message]

You can also use this program by reading in a filename with the ’-f’ option or from standard input (with <). Try this out with the test file in the angle directory (angle.dat). First examine the contents of the input file using “cat” (or “type” at a DOS prompt). Then use angle.py to calculate the angles. You can also save your output in a file angle.out with the ’-F’ option:

 $ cat angle.dat  
    11.2    32.9      6.4   -42.9  
   11.5    63.7      10.5   -55.4  
   11.9    31.4     358.1   -71.8  
  349.6    36.2     356.3   -45.0  
   60.3    63.5      58.9   -56.6  
  351.8    37.6      55.0   -45.8  
  345.5    53.1      44.8   -26.9  
   12.2    20.1      13.6   -54.0  
  352.1    37.6       5.1   -39.9  
  341.2    53.8      25.1   -61.1  
....  
 
 $ angle.py -f angle.dat  
    75.9  
  119.1  
  103.7  
   81.4  
  120.1  
  100.9  
   95.1  
   74.1  
   78.4  
  120.1  
......  
.
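
The underlying calculation is just the arc-cosine of the dot product of the two unit vectors. A minimal numpy sketch (the same arithmetic, not the program's source):

import numpy as np

def angle(d1, i1, d2, i2):
    """Angle between two directions, each given as declination, inclination."""
    def cart(dec, inc):
        d, i = np.radians(dec), np.radians(inc)
        return np.array([np.cos(i) * np.cos(d), np.cos(i) * np.sin(d), np.sin(i)])
    return np.degrees(np.arccos(np.dot(cart(d1, i1), cart(d2, i2))))

print(round(angle(350.2, 26.8, 98.6, 67.3), 1))  # 72.1, as in the session above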

5.2.3 ani_depthplot.py

[Essentials Chapter 13]; [MagIC Database] [ani_depthplot docs]

Anisotropy data can be plotted versus depth. The program ani_depthplot.py uses MagIC formatted data tables of the rmag_anisotropy.txt and er_samples.txt types. rmag_anisotropy.txt stores the tensor elements and measurement meta-data while er_samples.txt stores the depths, location and other information. Bulk susceptibility measurements can also be plotted if they are available in a magic_measurements.txt formatted file.

In this example, we will use the data from Tauxe et al. (2012) measured on samples obtained during Expedition 318 of the Integrated Ocean Drilling Program. To get the entire dataset, go to the MagIC database at http://earthref.org/MAGIC/ and find the data using the search interface. As a short-cut, you can use the “permalink”:

http://earthref.org/MAGIC/m000629dt20120607193954.

Download the text file by clicking on the icon under the red arrow in:

PIC

Unpack the data using the program download_magic.py. This will unpack the data into the different holes. Change directories into Location_2 (which contains the data for Hole U1359A). Or, you can use the data in the ani_depthplot directory of the example data files.

$ ani_depthplot.py

will create the plot:

PIC

5.2.4 aniso_magic.py

[Essentials Chapter 13]; [MagIC Database] [aniso_magic docs]

Samples were collected from the eastern margin of a dike oriented with a bedding pole declination of 110 and dip of 2. The data have been imported into an rmag_anisotropy formatted file named dike_anisotropy.txt.

Make a plot of the data using aniso_magic.py. Use the site parametric bootstrap option and plot out the bootstrapped eigenvectors. Draw on the trace of the dike.

These things are done in this session:

$ aniso_magic.py -f dike_anisotropy.txt -gtc 110 2 -par -v -crd g  
Doing bootstrap - be patient  
Boostrap Statistics:  
 tau_i, V_i_D, V_i_I, V_i_zeta, V_i_zeta_D, V_i_zeta_I, V_i_eta, V_i_eta_D, V_i_eta_I  
0.34040287    29.6    14.5     6.9    168.4     67.4     32.7    295.9     14.2  
0.33536589   166.3    70.5    10.3     26.1     13.0     23.9    293.3     12.1  
0.32423124   296.2    12.8     5.2    180.5     62.4     10.4     32.0     24.0  
compare with [d]irection  
 plot [g]reat circle,  change [c]oord. system, change [e]llipse calculation,  
     s[a]ve plots, [q]uit  

which produced these plots:

PIC

The specimen eigenvectors are plotted in the left-hand diagram with the usual convention that squares are the V1 directions, triangles are the V2 directions and circles are the V3 directions. All directions are plotted on the lower hemisphere. The bootstrapped eigenvectors are shown in the middle diagram. Cumulative distributions of the bootstrapped eigenvalues are shown to the right with the 95% confidence bounds plotted as vertical lines. It appears that the magma was moving in the northern and slightly up direction along the dike.

There are more options to aniso_magic.py that come in handy. In particular, one often wishes to test if a particular fabric is isotropic (the three eigenvalues cannot be distinguished), or if a particular eigenvector is parallel to some direction. For example, undisturbed sedimentary fabrics are oblate (the maximum and intermediate directions cannot be distinguished from one another, but are distinct from the minimum) and the eigenvector associated with the minimum eigenvalue is vertical. These criteria can be tested using the distributions of bootstrapped eigenvalues and eigenvectors.

The following session illustrates how this is done, using the data in the test file sed_anisotropy.txt in the aniso_magic directory.

$ aniso_magic.py -f sed_anisotropy.txt -d 3 0 90 -v -crd g  
 
Doing bootstrap - be patient  
Boostrap Statistics:  
 tau_i, V_i_D, V_i_I, V_i_zeta, V_i_zeta_D, V_i_zeta_I, V_i_eta, V_i_eta_D, V_i_eta_I  
0.33673453   359.5     7.9     9.8    156.0     81.6      3.2    269.3      3.3  
0.33546972    89.5     0.0    37.5     11.5     53.7     80.3    178.0     35.5  
0.32779574   179.6    82.1     4.1      4.7      7.6      3.2    274.6      1.2  
compare with [d]irection  
 plot [g]reat circle,  change [c]oord. system, change [e]llipse calculation,  
      s[a]ve plots, [q]uit  
 

which makes these plots:

PIC

The top three plots are as in the dike example before, showing a clear triaxial fabric (all three eigenvalues and associated eigenvectors are distinct from one another). In the lower three plots we have the distributions of the three components of the chosen axis, V3, their 95% confidence bounds (dashed lines) and the components of the designated direction (solid line). This direction is also shown in the equal area projection above as a red pentagon. The minimum eigenvector is not vertical in this case.

5.2.5 apwp.py

[Essentials Chapter 16] [apwp docs]

The program apwp.py calculates paleolatitude, declination and inclination from a pole latitude and longitude based on the paper by Besse and Courtillot (2002; see Essentials Chapter 16 for a complete discussion). Use it to calculate the expected direction for 100 million year old rocks at a locality in La Jolla Cove (Latitude: 33N, Longitude: 117W). Assume that we are on the North American Plate! (Note that there IS no option for the Pacific plate in the program apwp.py, and that La Jolla was on the North American plate until a few million years ago (6?).)

$ apwp.py -i  
 
 Welcome to paleolatitude calculator - CTRL-D to quit  
 
pick a plate: NA, SA, AF, IN, EU, AU, ANT, GL  
 
Plate  
NA  
Site latitude  
33  
 Site longitude  
-117  
 Age  
100  
Age  Paleolat.  Dec.  Inc.  Pole_lat.  Pole_Long.  
   100.0    38.8   352.4    58.1    81.5   198.3  

Note that as with many PmagPy programs, the input information can be read from a file and the output can be put in a file. For example, we put the same information into a file, apwp_example.dat and use this syntax:

$  apwp.py -f apwp_example.dat  
   100.0    38.8   352.4    58.1    81.5   198.3

5.2.6 atrm_magic.py

[Essentials Chapter 13] [MagIC Database] [atrm_magic docs]

Anisotropy of thermal remanence (ATRM) is similar to anisotropy of anhysteretic remanence (AARM) and the procedure for obtaining the tensor is also similar. Therefore, the program atrm_magic.py is quite similar to aarm_magic.py. However, the SIO lab procedures for the two experiments are somewhat different. In the ATRM experiment, there is a single, zero field step at the chosen temperature which is used as a baseline. We use only six positions (as opposed to nine for AARM) because of the additional risk of alteration at each temperature step. The positions are also different:

PIC

The file atrm_magic_example.dat in the atrm_magic directory is an SIO formatted data file containing ATRM measurement data acquired at a temperature of 520C. Note the special format of these files: the treatment column (column #2) has the temperature in centigrade followed by either a “00” for the obligatory zero field baseline step or a “10” for the first position, and so on. These could also be ‘0’ and ‘1’, etc.

Use the program sio_magic.py to import the ATRM data into the MagIC format. The DC field was 40 μT. The naming convention used option # 1 (see help menu). Then use the program atrm_magic.py to calculate the best-fit tensor and write out the MagIC tables: rmag_anisotropy and rmag_results formatted files. These files can be used to correct remanence data in a pmag_specimens format table (e.g., intensity data) for the effects of remanent anisotropy (e.g., using the program thellier_magic_redo.py).

Here is an example transcript:

$  sio_magic.py -f atrm_magic_example.dat -loc Africa -LP T:ANI -F atrm_measurements.txt  
      -ncn 1  -dc 40 -1 -1  
 
results put in  atrm_measurements.txt  
 
 
$  atrm_magic.py  
 
specimen tensor elements stored in  ./trm_anisotropy.txt  
specimen statistics and eigenparameters stored in  ./atrm_results.txt

5.2.7 azdip_magic.py

[Essentials Chapter 9] and [MagIC Database] [azdip_magic docs]

Many paleomagnetists save orientation information in files in this format: Sample Azimuth Plunge Strike Dip (AZDIP format), where the Azimuth and Plunge are the declination and inclination of the drill direction and the strike and dip are the attitude of the sampled unit (with dip to the right of strike). The MagIC database convention is to use the direction of the X coordinate of the specimen measurement system. To convert an AzDip formatted file (azdip_magic_example.dat) for samples taken from a location named “Northern Iceland” into the MagIC er_samples.txt file format, use the program azdip_magic.py:

$  azdip_magic.py -f azdip_magic_example.dat -ncn 1 -mcd FS-FD:SO-POM  
      -loc "Northern Iceland"  
 
Data saved in  er_samples.txt

Note that there are many options for relating sample names to site names and we used the first convention that has a single character at the end of the site name to designate each sample (e.g., is132d is sample ’d’ from site is132). We have also specified certain field sampling and orientation method codes (-mcd), here field sampling-field drilled (FS-FD) and sample orientation-Pomeroy (SO-POM). The location was “Northern Iceland”. See the help menu for more options.

Another way to do this is to use the orientation_magic.py program which allows much more information to be imported.

5.2.8 b_vdm.py

[Essentials Chapter 2] [b_vdm docs]

Use the program b_vdm.py to convert an estimated paleofield value of 33 μT obtained from a lava flow at 22 N latitude to the equivalent Virtual Dipole Moment (VDM) in Am². Put the input information into a file called b_vdm_example.dat and read from it using standard input:

$  cat > b_vdm_example.dat  
33 22  
^D  
$  b_vdm.py < b_vdm_example.dat  
 7.159e+22
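
The conversion follows the standard VDM formula, VDM = (4 π r³ / μ0) B / sqrt(1 + 3 sin²λ), where r is the Earth's radius, B the field strength and λ the (magnetic) latitude. A quick check in Python (a sketch of the formula, not the program's source):

import numpy as np

r = 6.371e6             # Earth's mean radius (m)
mu0 = 4 * np.pi * 1e-7  # permeability of free space (T m/A)
B = 33e-6               # field strength (T), i.e., 33 uT
lam = np.radians(22.)   # latitude of the site

vdm = (4 * np.pi * r**3 / mu0) * B / np.sqrt(1 + 3 * np.sin(lam)**2)
print('%8.3e' % vdm)    # 7.159e+22 (Am^2), matching the output above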

5.2.9 basemap_magic.py

[MagIC Database] and high resolution instructions [basemap_magic docs]

NB: this program is no longer maintained - use plot_map_pts.py for greater functionality.

Python has a complete map plotting package (the matplotlib Basemap toolkit) and PmagPy has a utility for making simple base maps for projects. Site location data imported, for example using orientation_magic.py, into an er_sites formatted text file can be plotted using basemap_magic.py. There are many options, so check the help message for more details. Note that if you want to use high resolution datafiles or the etopo20 meshgrid (-etp option), you must install the high resolution continental outlines. You can use install_etopo.py for that. As an example, use the program basemap_magic.py to make a simple base map with site locations in a MagIC er_sites.txt formatted file named basemap_example.txt.

$  basemap_magic.py -f basemap_example.txt

which makes this plot:

PIC

Use the buttons at the bottom of the plot to resize or save the plot in the desired format.

5.2.10 biplot_magic.py

[Essentials Chapter 8] and [MagIC Database] [biplot_magic docs]

It is often useful to plot measurements from one experiment against another. For example, rock magnetic studies of sediments often plot the IRM against the ARM or magnetic susceptibility. All of these types of measurements can be imported into a single magic_measurements formatted file, using MagIC method codes and other clues (lab fields, etc.) to differentiate one from another. Data were obtained from a Paleogene core from 28S for a relative paleointensity study. IRM, ARM, magnetic susceptibility and remanence data were uploaded to the MagIC database. The magic_measurements formatted file for this study is saved in biplot_magic_example.txt.

Use the program biplot_magic.py to make a biplot of magnetic susceptibility against ARM. Note that the program makes use of the MagIC method codes which are LT-IRM for IRM, LT-AF-I for ARM (AF demagnetization, in a field), and LP-X for magnetic susceptibility.

First, to find out which data are available, run the program like this:

$  biplot_magic.py -f biplot_magic_example.txt  
   which responds with this:  
   [’LT-AF-Z’, ’LT-AF-I’, ’LT-IRM’, ’LP-X’]  
   

These are the method codes for AF demagnetization of NRM, ARM, IRM and susceptibility measurements respectively. So to make a plot of susceptibility against ARM, we would run the program again:

$  biplot_magic.py -f biplot_magic_example.txt -x LP-X -y LT-AF-I  
LP-X  selected for X axis  
LT-AF-I  selected for Y axis  
All  
measurement_magn_mass  being used for plotting Y  
measurement_chi_mass  being used for plotting X.  
 
S[a]ve plots, [q]uit,  Return for next plot

which makes the plot:

PIC

5.2.11 bootams.py

[Essentials Chapter 13] [bootams docs]

The program bootams.py calculates bootstrap statistics for anisotropy tensor data in the form of:

x11 x22 x33 x12 x23 x13

It does this by selecting para-data sets and calculating the Hext average eigenparameters. It has an optional parametric bootstrap whereby the σ for the data set as a whole is used to draw new para-data sets. The bootstrapped eigenparameters are assumed to be Kent distributed and the program calculates Kent error ellipses for each set of eigenvectors. It also estimates the standard deviations of the bootstrapped eigenvalues.
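
The simple (non-parametric) version of this procedure is easy to sketch in Python. In the sketch below, each para-data set is drawn by resampling rows with replacement and, for brevity, the six tensor elements are simply averaged before the eigenanalysis (the program itself computes Hext average eigenparameters):

import numpy as np

def to_tensor(s):
    """Six elements x11 x22 x33 x12 x23 x13 -> symmetric 3x3 matrix."""
    x11, x22, x33, x12, x23, x13 = s
    return np.array([[x11, x12, x13], [x12, x22, x23], [x13, x23, x33]])

def bootstrap_taus(data, nb=1000, seed=0):
    """Distributions of mean eigenvalues over nb para-data sets.
    `data` is an (n, 6) numpy array of tensor elements."""
    rng = np.random.default_rng(seed)
    n, taus = len(data), []
    for _ in range(nb):
        sample = data[rng.integers(0, n, n)]  # resample rows with replacement
        vals = np.linalg.eigvalsh(to_tensor(sample.mean(axis=0)))
        taus.append(vals[::-1])               # sort tau_1 >= tau_2 >= tau_3
    return np.array(taus)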

Use this to calculate the bootstrapped error statistics for the data in the file bootams_example.dat:

$  bootams.py -par -f bootams_example.dat  
Doing bootstrap - be patient  
 
tau tau_sigma V_dec V_inc V_eta V_eta_dec V_eta_inc V_zeta V_zeta_dec V_zeta_inc  
 
0.33505 0.00021     5.3    14.7    11.7   267.1    28.0    27.0   122.6    56.8  
0.33334 0.00023   124.5    61.7    19.6   244.0    14.4    23.5   340.1    22.1  
0.33161 0.00026   268.8    23.6    10.4     8.6    21.8    20.4   137.0    57.3  

Note that every time bootams.py runs, the output will be slightly different because the results depend on calls to random number generators. If the answers differ by a lot, then the number of bootstrap calculations is too low. The number of bootstraps can be changed with the -nb option.

5.2.12 cart_dir.py

[Essentials Chapter 2] [cart_dir docs]

Use the program cart_dir.py to convert these cartesian coordinates to geomagnetic elements:

x1       x2       x3
 0.3971  -0.1445   0.9063
-0.5722   0.0400  -0.8192

To use the interactive option:

$ cart_dir.py -i  
X: [cntrl-D  to quit] 0.3971  
Y: -0.1445  
Z: 0.9063  
  340.0    65.0  1.000e+00  
X: [cntrl-D  to quit] -.5722  
Y: 0.0400  
Z: -0.8192  
  176.0   -55.0  1.000e+00

To read from a file:

$ cart_dir.py -f cart_dir_example.dat  
  340.0    65.0  1.000e+00  
  176.0   -55.0  1.000e+00
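
The arithmetic behind the conversion is straightforward. A minimal numpy sketch (not the program's source):

import numpy as np

def cart2dir(x, y, z):
    """Cartesian coordinates -> declination, inclination, intensity."""
    R = np.sqrt(x**2 + y**2 + z**2)            # intensity
    dec = np.degrees(np.arctan2(y, x)) % 360.  # declination in [0, 360)
    inc = np.degrees(np.arcsin(z / R))         # inclination
    return dec, inc, R

print(cart2dir(0.3971, -0.1445, 0.9063))  # ~ (340.0, 65.0, 1.0)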

5.2.13 chartmaker.py

[Essentials Chapter 10] [chartmaker docs]

Paleointensity experiments are quite complex and it is easy to make a mistake in the laboratory. The SIO lab uses a simple chart that helps the experimenter keep track of in-field and zero field steps and makes sure that the field gets verified before each run. You can make a chart for an infield-zerofield, zerofield-infield (IZZI) experiment using the program chartmaker.py. Make such a chart using 50C steps up to 500C followed by 10C steps up to 600C.

$ chartmaker.py  
    Welcome to the thellier-thellier experiment automatic chart maker.  
    Please select desired step interval and upper bound for which it is valid.  
    e.g.,  
    50  
    500  
    10  
    600  
 
    a blank entry signals the end of data entry.  
    which would generate steps with 50 degree intervals up to 500,  
     followed by 10 degree intervals up to 600.  
 
    chart is stored in:  chart.txt  
 
 Enter desired treatment step interval: <return> to quit 50  
 Enter upper bound for this interval: 500  
 Enter desired treatment step interval: <return> to quit 10  
 Enter upper bound for this interval: 600  
 Enter desired treatment step interval: <return> to quit  
output stored in: chart.txt  
 
%cat chart.txt  
file:_________________    field:___________uT  
 
 
                                 date | run# | zone I | zone II | zone III | start | sp | cool |  
________________________________________________________  
       0.0                |           |           |               |                |         |      |         |  
________________________________________________________  
   Z   100.0           |           |           |               |                |         |      |         |  
________________________________________________________  
   I   100.1           |           |           |               |                |         |      |         |  
________________________________________________________  
   T   100.3           |           |           |               |                |         |      |         |  
________________________________________________________  
   I   150.1           |           |           |               |                |         |      |         |  
.  
.  
.  

The chart allows you to fill in the file name in which the data were stored and the field value intended for the infield steps. The designations ‘Z’, ‘I’, ‘T’ and ‘P’ are for zero-field, in-field, pTRM tail checks and pTRM checks respectively. There are fields for the date of the runs, the fields measured in different zones in the oven prior to the start of the experiment, and the start and stop times. The numbers, e.g., 100.1 are the treatment temperatures (100) followed by the code for each experiment type. These get entered in the treatment fields in the SIO formatted magnetometer files (see sio_magic.py).

5.2.14 chi_magic.py

[Essentials Chapter 8] & [MagIC Database] [chi_magic docs]

It is sometimes useful to measure susceptibility as a function of temperature, applied field and frequency. Here we use a data set that came from the Tiva Canyon Tuff sequence (see Carter-Stiglitz, 2006). Use the program chi_magic.py to plot the data in the magic_measurements formatted file: chi_magic_example.dat.

$ chi_magic.py -f chi_magic_example.dat  
IRM-Kappa-2352 1 out of  2  
IRM-OldBlue-1892 2 out of  2  
Skipping susceptibitily - AC field plot as a function of temperature  
enter s[a]ve to save files,  [return] to quit  
 

produced this plot:

PIC

You can see the dependence on temperature, frequency and applied field. These data support the suggestion that there is a strong superparamagnetic component in these specimens.

5.2.15 combine_magic.py

[MagIC Database] [combine_magic docs]

MagIC tables have many columns, only some of which are used in a particular instance. So combining files of the same type must be done carefully to ensure that the right data come under the right headings. The program combine_magic.py can be used to combine any number of MagIC files of a given type. For an example of how to use this program, see AGM_magic.py.

5.2.16 common_mean.py

[Essentials Chapter 12] [common_mean docs]

Most paleomagnetists use some form of Fisher statistics to decide whether two directions are statistically distinct or not (see Essentials Chapter 11 for a discussion of those techniques). But often directional data are not Fisher distributed and the parametric approach will give misleading answers. In these cases, one can use a bootstrap approach, described in detail in [Essentials Chapter 12]. Here we use the program common_mean.py for a bootstrap test for a common mean, to check whether two declination/inclination data sets have a common mean at the 95% level of confidence. The data sets are common_mean_ex_file1.dat and common_mean_ex_file2.dat. But first, let’s look at the data in equal area projection using the program eqarea.py.

The session:

$ eqarea.py -f common_mean_ex_file1.dat -sav  
1  saved in  common_mean_ex_file1.dat.svg  
 
$  eqarea.py -f common_mean_ex_file2.dat -sav  
1  saved in  common_mean_ex_file2.dat.svg

generates two .svg formatted files that look like these:

PIC

Now let’s look at the common mean problem using common_mean.py.

$ common_mean.py -f common_mean_ex_file1.dat -f2 common_mean_ex_file2.dat  
Doing first set of directions, please be patient..  
Doing second set of directions, please be patient..  
S[a]ve plots, <Return> to quit

The three plots are:

PIC

These suggest that the two data sets share a common mean.

Now compare the data in common_mean_ex_file1.dat with the expected direction at the latitude of 5N at which these data were collected (Dec = 0, Inc = 9.9).

To do this, follow this transcript:

$ common_mean.py -f common_mean_ex_file1.dat -dir 0 9.9  
Doing first set of directions, please be patient..  
S[a]ve plots, <Return> to quit

PIC

Apparently the data (cumulative distribution functions) are entirely consistent with the expected direction (the dashed lines are the cartesian coordinates of that direction).

5.2.17 cont_rot.py

[Essentials Chapter 16 and Essentials Appendix A.3.5.] [cont_rot docs]

Use the program cont_rot.py to make an orthographic projection with latitude = -20 and longitude = 0 at the center of the African and South American continents reconstructed to 180 Ma using the Torsvik et al. (2008) poles of finite rotation. Do this by first holding Africa fixed. Move the output plot to fixed_africa.pdf. Then make the plot for Africa adjusted to the paleomagnetic reference frame. Make the continental outlines in black lines and set the resolution to ’low’.

$ cont_rot.py -con af:sam -prj ortho -eye -20 0 -sym k-  1 -age 180 -res l  
 S[a]ve to save plot, Return to quit:  a  
1  saved in  Cont_rot.pdf  
$ mv Cont_rot.pdf fixed_africa.pdf  
$ cont_rot.py -con af:sam -prj ortho -eye -20 0 -sym  k- 1 -age 180 \  
     -res l -sac  
 S[a]ve to save plot, Return to quit:  a  
1  saved in  Cont_rot.pdf

These commands generated the following plots (first on left, second on right):

PIC

5.2.18 convert2unix.py

[convert2unix docs]

This is a handy little script that turns Windows or Mac file formatted files (not Word or other proprietary formats) into Unix file format. It does the change in place, overwriting the original file.

5.2.19 convert_samples.py

[MagIC Database] [convert_samples docs]

If one of the MagIC related programs in PmagPy created a very lean looking er_samples.txt file (for example using azdip_magic.py) and you need to add more information (say, latitude, longitude, lithology, etc.), you can convert the er_samples.txt file into an orient.txt file format, edit it in, for example Excel, and then re-import it back into the er_samples.txt file format. Try this on the er_samples formatted file in the convert_samples directory, convert_samples_example.dat.

$ convert_samples.py -f convert_samples_example.dat  
Data saved in:  orient_Northern_Iceland.txt

5.2.20 core_depthplot.py

[Essentials Chapter 15] [core_depthplot docs]

Use the program core_depthplot.py to plot various measurement data versus sample depth. The data must be in the MagIC data formats. The program will plot whole core data, discrete samples at a bulk demagnetization step, data from vector demagnetization experiments, and so on. There are many options, so check the help menu before you begin.

We can try this out on some data from DSDP Hole 522, measured by Tauxe and Hartl (1997). These can be downloaded and unpacked (see download_magic.py for details), or you can try it out on the data files in the directory core_depthplot. You must specify a lab protocol (LP) for plotting. In this example, we will plot the alternating field (AF) data after the 15 mT step. The magnetizations will be plotted on a log scale and, as this is a record of the Oligocene, we will plot the Oligocene time scale, using the calibration of Gradstein et al. (2004), commonly referred to as “GTS04”, for the Oligocene. We are only interested in the data between 50 and 150 meters (the -d option sets this) and we will suppress the declinations (-D).

$ core_depthplot.py -fsa er_samples.txt -LP AF 15 -log -d 50 150 -ts gts04 23 34 -D

will produce the plot:

PIC

5.2.21 curie.py

[Essentials Chapter 6] [curie docs]

Use the program curie.py to interpret Curie temperature data in the example file curie_example.dat. Use a smoothing window of 10.

%curie.py -f curie_example.dat -w 10  
second deriative maximum is at T=552

which generates these plots:

PIC

5.2.22 customize_criteria.py

[MagIC Database] [customize_criteria docs]

NB: The best place to customize your selection criteria is within Demag GUI or Thellier GUI. But if you want to do it on the command line, then here’s how.

The MagIC database allows documentation of which criteria were used in selecting data on a specimen, sample or site level and to easily apply those criteria (and re-apply them as they change) in preparing the different tables. These choices are stored in the pmag_criteria table in each MagIC project directory (see Pmag GUI documentation).

Certain PmagPy programs use the datafile pmag_criteria.txt to select data (for example thellier_magic.py and specimens_results_magic.py). To customize these criteria for your own data sets, you can use the program customize_criteria.py. This program is also called by the Pmag GUI under the Utilities menu. Try it out on pmag_criteria.txt. This is a “full vector” set of criteria - meaning that it has both directions and intensity flags set. Change the specimen_alpha95 cutoff to 180. from whatever it is now set to. Save the output to a new file named new_criteria.txt.

$ customize_criteria.py -f pmag_criteria.txt -F new_criteria.txt  
Acceptance criteria read in from  pmag_criteria.txt  
 [0] Use no acceptance criteria?  
 [1] Use default criteria  
 [2] customize criteria  
 2  
Enter new threshold value.  
 Return to keep default.  
 Leave blank to not use as a criterion  
 
sample_alpha95 10  
new value: 5  
sample_int_n 2  
new value: 2  
sample_int_sigma 5e-6  
new value:  
sample_int_sigma_perc 15  
new value:  
.......  
 
Criteria saved in pmag_criteria.txt  
 
 Pmag Criteria stored in  pmag_criteria.txt  

Note that the default place for the PmagPy programs to look for criteria is pmag_criteria.txt, so you should probably rename the new file to pmag_criteria.txt for it to take effect as your new default.

5.2.23 dayplot_magic.py

[Essentials Chapter 5] [dayplot_magic docs]

Use the program dayplot_magic.py to make Day (Day et al., 1977), or Squareness-Coercivity and Squareness-Coercivity of Remanence plots (e.g., Tauxe et al., 2002) from the rmag_hysteresis formatted data in dayplot_magic_example.dat.

The session:

  $ dayplot_magic.py -f dayplot_magic_example.dat  
 
 S[a]ve to save plots, return to quit:

gives the plots:

PIC

5.2.24 demag_gui.py

[Essentials Chapter 7] [demag_gui docs]

The program demag_gui.py is used by Pmag GUI for interpreting demagnetization experimental data. See Demag GUI for guidance.

5.2.25 di_eq.py

[Essentials Appendix B] [di_eq docs]

Paleomagnetic data are frequently plotted in equal area projection. PmagPy has several plotting programs which do this (e.g., eqarea.py), but occasionally it is handy to be able to convert the directions to X,Y coordinates directly, without plotting them at all. The program di_eq.py does this. Here is an example transcript of a session using the datafile di_eq_example.dat:

$ di_eq.py -f di_eq_example.dat  
-0.239410 -0.893491  
0.436413 0.712161  
0.063844 0.760300  
0.321447 0.686216  
0.322720 0.670562  
0.407412 0.540654  
.  
.  
.
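
The transform itself is compact: in the usual convention (north up the page, outer rim at unity), a direction maps to X = r sin(D), Y = r cos(D) with r = sqrt(1 - |sin(I)|). A sketch, assuming that convention (not the program's source):

import numpy as np

def di_to_xy(dec, inc):
    """Direction -> X,Y coordinates in an equal-area projection."""
    d, i = np.radians(dec), np.radians(inc)
    r = np.sqrt(1. - np.abs(np.sin(i)))  # r = 1 at inc = 0, r = 0 at inc = +/-90
    return r * np.sin(d), r * np.cos(d)  # X to the east, Y to the north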

5.2.26 di_geo.py

[Essentials Chapter 9] and Changing coordinate systems [di_geo docs]

Use the program di_geo.py to convert D = 8.1, I = 45.2 from specimen coordinates to geographic adjusted coordinates. The orientation of the laboratory arrow on the specimen was: azimuth = 347; plunge = 27. di_geo.py works in the usual three ways (interactive data entry, command line file specification, or from standard input). So for a quickie answer for a single specimen, you can use the interactive mode:

$ di_geo.py -i  
Declination: <cntrl-D> to quit  8.1  
Inclination: 45.2  
Azimuth: 347  
Plunge: 27  
    5.3    71.6

which spits out our answer of Declination = 5.3 and inclination = 71.6.
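
The rotation works by expressing the geographic directions of the specimen's x, y and z axes in terms of the azimuth and plunge of the lab arrow. A numpy sketch (an equivalent construction, not the program's source):

import numpy as np

def dir2cart(dec, inc):
    d, i = np.radians(dec), np.radians(inc)
    return np.array([np.cos(i) * np.cos(d), np.cos(i) * np.sin(d), np.sin(i)])

def do_geo(dec, inc, az, pl):
    """Specimen coordinates -> geographic, given azimuth/plunge of the lab arrow."""
    X = dir2cart(az, pl)               # geographic direction of specimen +x
    Y = dir2cart(az + 90., 0.)         # specimen +y: horizontal, 90 deg clockwise
    Z = dir2cart(az - 180., 90. - pl)  # specimen +z completes the right-handed set
    v = dir2cart(dec, inc)
    g = v[0] * X + v[1] * Y + v[2] * Z
    return (np.degrees(np.arctan2(g[1], g[0])) % 360.,
            np.degrees(np.arcsin(g[2])))

print(do_geo(8.1, 45.2, 347., 27.))  # ~ (5.3, 71.6)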

For more data, it is handy to use the file entry options. There are a bunch of declination, inclination, azimuth and plunge data in the file di_geo_example.dat in the di_geo directory. First look at the data in specimen coordinates in equal area projection, using the program eqarea.py. Note that this program only pays attention to the first two columns so it will ignore the orientation information.

$ eqarea.py -f di_geo_example.dat

which should look like this:

PIC

The data are highly scattered and we hope that the geographic coordinate system looks better! To find out try:

$ di_geo.py -f di_geo_example.dat >di_geo.out;  eqarea.py -f di_geo.out  

which looks like this:

PIC

These data are clearly much better grouped.

5.2.27 di_rot.py

[Essentials Chapter 11] [di_rot docs]

Generate a Fisher distributed set of data from a population with a mean direction of D = 0,I = 42 using the program fishrot.py. Calculate the mean direction of the data set using gofish.py. Now use the program di_rot.py to rotate the set of directions to the mean direction. Look at the data before and after rotation using eqarea.py.

$ fishrot.py -I 42 >fishrot.out  
$ gofish.py <fishrot.out  
    1.7    42.4    100    95.3720     21.4     3.1  
$ di_rot.py -f fishrot.out -F dirot.out -D 1.7 -I 42.4  
$ eqarea.py -f fishrot.out  
 $ eqarea.py -f dirot.out

which generates plots like these:

PIC

Note that every instance of fishrot.py will draw a different sample from a Fisher distribution, and your exact plot and average values will be different in detail every time you try this (and that’s part of the fun of statistical simulation).

5.2.28 di_tilt.py

[Essentials Chapter 9] and Changing coordinate systems [di_tilt docs]

Use the program di_tilt.py to rotate a direction of Declination = 5.3 and Inclination = 71.6 to “stratigraphic” coordinates. The strike was 135 and the dip was 21. The convention in this program is to use the dip direction, which is to the “right” of this strike.

Here is a session with di_tilt.py using the interactive option:

$ di_tilt.py -i  
Declination: <cntl-D> to quit 5.3  
Inclination: 71.6  
Dip direction: 225  
Dip: 21  
  285.7    76.6  
Declination: <cntl-D> to quit ^D  
 Good-bye

Try the same on the data file saved as di_tilt_example.dat using the command line -f switch.
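
Under the hood this is a rotation by the dip angle about the strike axis. A numpy sketch using the Rodrigues rotation formula (an equivalent construction, not the program's source):

import numpy as np

def do_tilt(dec, inc, dip_dir, dip):
    """Rotate a geographic direction into tilt-corrected coordinates."""
    d, i = np.radians(dec), np.radians(inc)
    v = np.array([np.cos(i) * np.cos(d), np.cos(i) * np.sin(d), np.sin(i)])
    s = np.radians(dip_dir + 90.)             # azimuth of the rotation axis
    a = np.array([np.cos(s), np.sin(s), 0.])  # horizontal axis along strike
    t = np.radians(dip)
    vr = (v * np.cos(t) + np.cross(a, v) * np.sin(t)
          + a * np.dot(a, v) * (1. - np.cos(t)))  # Rodrigues rotation
    return (np.degrees(np.arctan2(vr[1], vr[0])) % 360.,
            np.degrees(np.arcsin(vr[2])))

print(do_tilt(5.3, 71.6, 225., 21.))  # ~ (285.7, 76.6), as in the session above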

5.2.29 di_vgp.py

[Essentials Chapter 2] [di_vgp docs]

Use the program di_vgp.py to convert the following:

D     I     λs (N)   ϕs (E)
11    63    55       13
154   -58   45.5     -73

Here is a transcript of a typical session using the command line option for file name entry:

$ di_vgp.py -f di_vgp_example.dat  
  154.7    77.3  
    6.6   -69.6
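
The conversion uses the standard dipole formulas: the magnetic colatitude p follows from tan(I) = 2/tan(p), and the pole position follows from spherical trigonometry. A numpy sketch (not the program's source) that reproduces the first line above:

import numpy as np

def di_vgp(dec, inc, slat, slon):
    """Direction plus site location -> VGP (pole longitude, pole latitude)."""
    D, I = np.radians(dec), np.radians(inc)
    lam, phi = np.radians(slat), np.radians(slon)
    p = np.arctan2(2., np.tan(I))  # magnetic colatitude
    plat = np.arcsin(np.sin(lam) * np.cos(p)
                     + np.cos(lam) * np.sin(p) * np.cos(D))
    beta = np.arcsin(np.sin(p) * np.sin(D) / np.cos(plat))
    if np.cos(p) >= np.sin(lam) * np.sin(plat):
        plon = phi + beta
    else:
        plon = phi + np.pi - beta
    return np.degrees(plon) % 360., np.degrees(plat)

print(di_vgp(11., 63., 55., 13.))  # ~ (154.7, 77.3)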

5.2.30 dipole_pinc.py

[Essentials Chapter 2] [dipole_pinc docs]

Calculate the expected inclination at a paleolatitude of 24S.

$ dipole_pinc.py -i  
Paleolat for converting to inclination: <cntl-D> to quit -24  
  -41.7  
Paleolat for converting to inclination: <cntl-D> to quit ^D  
 Good-bye  
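
This is the dipole equation, tan(I) = 2 tan(λ), solved for the inclination I. As a one-line sketch:

import numpy as np

def dipole_pinc(plat):
    """Expected inclination (degrees) at paleolatitude plat (degrees)."""
    return np.degrees(np.arctan(2. * np.tan(np.radians(plat))))

print(round(dipole_pinc(-24.), 1))  # -41.7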

5.2.31 dipole_plat.py

[Essentials Chapter 2] [dipole_plat docs]

Calculate the paleolatitude for an average inclination of 23.

$ dipole_plat.py -i  
Inclination for converting to paleolatitude: <cntl-D> to quit 23  
   12.0  
Inclination for converting to paleolatitude: <cntl-D> to quit ^D  
 Good-bye
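
This is the same dipole equation solved the other way, λ = arctan(tan(I)/2). As a sketch:

import numpy as np

def dipole_plat(inc):
    """Paleolatitude (degrees) from an average inclination (degrees)."""
    return np.degrees(np.arctan(np.tan(np.radians(inc)) / 2.))

print(round(dipole_plat(23.), 1))  # 12.0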

5.2.32 dir_cart.py

[Essentials Chapter 2] [dir_cart docs]

Use the program dir_cart.py to convert the following data from declination D, inclination I and intensity M to x1, x2, x3:

D     I     M (μAm²)
20    46    1.3
175   -24   4.2

You can enter D, I, M data into a data file, then run the program by typing what is after the prompts ($) [the other text is the computer’s responses]:

$ cat > dir_cart_example.dat  
 20 46 1.3  
175 -24 4.2  
^D  
$ dir_cart.py <dir_cart_example.dat  
8.4859e-01 3.0886e-01 9.3514e-01  
-3.8223e+00 3.3441e-01 -1.7083e+00

Or you could use dir_cart.py interactively as in:

 $ dir_cart.py -i  
 
Declination: [cntrl-D  to quit]  
 Good-bye  
 
$ dir_cart.py -i  
Declination: [cntrl-D  to quit] 20  
Inclination: 46  
Intensity [return for unity]: 1.3  
8.4859e-01 3.0886e-01 9.3514e-01  
Declination: [cntrl-D  to quit] 175  
Inclination: -24  
Intensity [return for unity]: 4.2  
-3.8223e+00 3.3441e-01 -1.7083e+00  
Declination: [cntrl-D  to quit] ^D  
 Good-bye
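
This is the inverse of the cart_dir.py conversion above: x1 = M cos(I) cos(D), x2 = M cos(I) sin(D), x3 = M sin(I). As a sketch:

import numpy as np

def dir2cart(dec, inc, M=1.):
    """Declination, inclination, intensity -> cartesian x1, x2, x3."""
    d, i = np.radians(dec), np.radians(inc)
    return M * np.cos(i) * np.cos(d), M * np.cos(i) * np.sin(d), M * np.sin(i)

print(dir2cart(20., 46., 1.3))    # ~ (0.84859, 0.30886, 0.93514)
print(dir2cart(175., -24., 4.2))  # ~ (-3.8223, 0.33441, -1.7083)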

5.2.33 dmag_magic.py

[Essentials Chapter 9] and [MagIC Database] [dmag_magic docs]

Use dmag_magic.py to plot out the decay of all alternating field demagnetization experiments in the magic_measurements formatted file dmag_magic_example.dat. These are from Lawrence et al. (2009). Repeat for all thermal measurements, but exclude the data acquired during paleointensity experiments (the -XLP PI option). Try this at the location level and then at the site level.

Here is a transcript of a session:

$  dmag_magic.py -f dmag_magic_example.dat -LT AF  
5921  records read from  dmag_magic_example.dat  
McMurdo plotting by:  er_location_name  
 S[a]ve to save plot, [q]uit,  Return to continue:  a  
1  saved in  McMurdo_LT-AF-Z.svg  
$ dmag_magic.py -f dmag_magic_example.dat -LT T -XLP PI  
5921  records read from  dmag_magic_example.dat  
McMurdo plotting by:  er_location_name  
 S[a]ve to save plot, [q]uit,  Return to continue:  a  
1  saved in  McMurdo_LT-T-Z.svg  
$ dmag_magic.py -f dmag_magic_example.dat -LT AF -obj sit  
5921  records read from  dmag_magic_example.dat  
mc20 plotting by:  er_site_name  
 S[a]ve to save plot, [q]uit,  Return to continue:  
mc200 plotting by:  er_site_name  
 S[a]ve to save plot, [q]uit,  Return to continue:  a  
1  saved in  mc200_LT-AF-Z.svg  

which produced these plots:

PIC

5.2.34 download_magic.py

[MagIC] [download_magic docs]

This program unpacks .txt files downloaded from the MagIC database into individual directories for each location, into which the individual files for each table (e.g., er_locations.txt, magic_measurements.txt, pmag_results.txt and so on) get placed. As an example, go to the MagIC database at http://earthref.org/MAGIC/search. Entering “Tauxe and 2004” into the Reference Text Search field will show you several references. Look for the one for Tauxe, L., Luskin, C., Selkin, P., Gans, P. and Calvert, A. (2004). Download the text file under the “Data” column and save it to your desktop. Make a folder called MagIC_download and move the downloaded .txt file into it. Now use the program download_magic.py to unpack the .txt file (zmab0083201tmp03.txt).

$ cd MagIC_download  
$ download_magic.py -f zmab0083201tmp03.txt  
working on:  ’er_locations’  
er_locations  data put in  ./er_locations.txt  
working on:  ’er_sites’  
er_sites  data put in  ./er_sites.txt  
working on:  ’er_samples’  
er_samples  data put in  ./er_samples.txt  
working on:  ’er_specimens’  
.  
.  
.  
location_1:  Snake River  
unpacking:  ./er_locations.txt  
1  read in  
1  stored in  ./Location_1/er_locations.txt  
unpacking:  ./er_sites.txt  
27  read in  
27  stored in  ./Location_1/er_sites.txt  
unpacking:  ./er_samples.txt  
271  read in  
271  stored in  ./Location_1/er_samples.txt  
unpacking:  ./er_specimens.txt  
.  
.  
.

You can change directories into each Location directory (in this case only one) and examine the data using the PmagPy GUI programs (e.g., pmag_gui.py).

5.2.35 eigs_s.py

[Essentials Chapter 13] [eigs_s docs]

Print out the eigenparameters in the file eigs_s_example.dat and then convert them to tensor data in the .s format (x11, x22, x33, x12, x23, x13).

This session uses the Unix utility cat to print the data. [You could use the MS-DOS command type in a Windows command line window.] Then it prints the tensor data to the screen.

$ cat eigs_s_example.dat  
0.33127  239.53   44.70 0.33351  126.62   21.47 0.33521  19.03   37.54  
0.33177  281.12    6.18 0.33218  169.79   73.43 0.33603   12.82   15.32  
...  
$ eigs_s.py -f eigs_s_example.dat  
0.33416328 0.33280227 0.33303446 -0.00016631 0.00123163 0.00135521  
0.33555713 0.33197427 0.33246869 0.00085685 0.00025266 0.00098151  
0.33585301 0.33140355 0.33274350 0.00132308 0.00117787 0.00000455  
...
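
The conversion rebuilds the tensor from its eigenvalues and eigenvectors as s = Σ τᵢ vᵢ vᵢᵀ, then flattens it back to the six .s elements. A numpy sketch using the first line of the example file (not the program's source):

import numpy as np

def dir2cart(dec, inc):
    d, i = np.radians(dec), np.radians(inc)
    return np.array([np.cos(i) * np.cos(d), np.cos(i) * np.sin(d), np.sin(i)])

taus = (0.33127, 0.33351, 0.33521)
vecs = (dir2cart(239.53, 44.70), dir2cart(126.62, 21.47), dir2cart(19.03, 37.54))
T = sum(t * np.outer(v, v) for t, v in zip(taus, vecs))  # rebuild the tensor
print(T[0, 0], T[1, 1], T[2, 2], T[0, 1], T[1, 2], T[0, 2])
# ~ 0.33416 0.33280 0.33303 -0.00017 0.00123 0.00136, as in the first output line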

5.2.36 eq_di.py

[Essentials Appendix B] [eq_di docs]

Data are frequently published as equal area projections and not listed in data tables. These data can be digitized as x,y data (assuming the outer rim is unity) and converted to approximate directions with the program eq_di.py. To use this program, install a graph digitizer (GraphClick from http://www.arizona-software.ch/graphclick/ works on Macs).

Digitize the data from the equal area projection saved in the file eqarea.png in the eq_di directory. You should only work on one hemisphere at a time (upper or lower) and save each hemisphere in its own file. Then you can convert the X,Y data to approximate dec and inc data - the quality of the data depends on your care in digitizing and the quality of the figure that you are digitizing.

Try out eq_di.py on your datafile, or use eq_di_example.dat which are the digitized data from the lower hemisphere and check your work with eqarea.py. You should retrieve the lower hemisphere points from the eqarea.py example.

$ eq_di.py -f eq_di_example.dat >tmp  
$ eqarea.py -f tmp

NB: To indicate that your data are UPPER hemisphere (negative inclinations), use the -up switch.

You can verify the process by comparing the plot generated for these data using eqarea.py with the original png file.

5.2.37 eqarea.py

[Essentials Chapter 2] and [Essentials Appendix B.1] [eqarea docs]

Use the program fishrot.py to generate a Fisher distributed set of data drawn from a distribution with a mean declination of 42 and a mean inclination of 60. Save this to a file called fishrot.out. Use eqarea.py to plot an equal area projection of the data.

$  fishrot.py -D 42 -I 60 > fishrot.out  
$ eqarea.py -f fishrot.out

which produces the plot:

PIC

5.2.38 eqarea_ell.py

[Essentials Chapters 11] and [Essentials Chapter 12] [eqarea_ell docs]

Use the program tk03.py to generate a set of simulated data for a latitude of 42N, including reversals. Then use the program eqarea_ell.py to plot an equal area projection of the directions in tk03.out with confidence ellipses. Here is an example for Bingham ellipses.

$ tk03.py -lat 42 -rev >tk03.out  
$ eqarea_ell.py -f tk03.out -ell B  
     Zdec    21.0  
     Edec   105.7  
     Eta     4.5  
     n        20  
     Einc    10.0  
     Zinc   -27.6  
     Zeta     4.5  
     dec   357.8  
     inc    60.3  
 S[a]ve to save plot, [q]uit, Return to continue:  a  
1  saved in  tk03.out_eq.svg

which produces a plot like this:

PIC

Other ellipses are Kent, Fisher and bootstrapped ellipses. Check the documentation for details.

5.2.39 eqarea_magic.py

[MagIC Database] [eqarea_magic docs]

Follow the instructions for downloading and unpacking a data file from the MagIC database or use the file in the download_magic directory already downloaded from the MagIC website. Plot the directional data for the study from the pmag_results.txt file along with the bootstrapped confidence ellipse.

$ eqarea_magic.py -obj loc -crd g -f pmag_results.txt -ell Be  
24  records read from  ./pmag_results.txt  
All  
sr01 GM-ARAR-AP:LP-DC5:LP-DIR-AF:LP-DIR-T:LP-PI-ALT-PTRM:LP-PI-TRM-ZI   330.1    64.9  
sr03 GM-ARAR-AP:LP-DC5:LP-DIR-AF:LP-DIR-T   151.8   -57.5  
sr04 GM-ARAR-AP:LP-DC5:LP-DIR-AF:LP-DIR-T    16.5    54.6  
.  
.  
.  
sr31 GM-ARAR-AP:LP-DC5:LP-DIR-AF:LP-DIR-T    23.2    54.0  
sr34 GM-ARAR-AP:LP-DC5:LP-DIR-AF:LP-DIR-T:LP-PI-ALT-PTRM:LP-PI-TRM-ZI   205.8   -49.4  
sr36 GM-ARAR-AP:LP-DC5:LP-DIR-AF:LP-DIR-T   197.6   -65.5  
sr39 GM-ARAR-AP:LP-DC5:LP-DIR-AF:LP-DIR-T:LP-PI-ALT-PTRM:LP-PI-TRM-ZI   188.1   -47.2  
sr40 GM-ARAR-AP:LP-DC5:LP-DIR-AF:LP-DIR-T   192.9   -60.7  
Normal Pole GM-ARAR-AP:LP-DC5:LP-DIR-AF:LP-DIR-T     6.1    59.6  
Reverse pole GM-ARAR-AP:LP-DC5:LP-DIR-AF:LP-DIR-T   185.1   -58.8  
Grand Mean pole GM-ARAR-AP:LP-DC5:LP-DIR-AF:LP-DIR-T     5.7    59.3  
mode  1  
     Zdec   111.1  
     Edec   206.0  
     Eta     7.8  
     n        1  
     Einc    28.9  
     Zinc     8.7  
     Zeta     3.3  
     dec     6.0  
     inc    59.6  
mode  2  
     Zdec   248.0  
     Edec   150.1  
     Eta     4.3  
     n        1  
     Einc    26.4  
     Zinc    15.4  
     Zeta     8.0  
     dec   185.1  
     inc   -58.8  
 S[a]ve to save plot, [q]uit, Return to continue:  a  
1  saved in  LO:_Snake River_SI:__SA:__SP:__CO:_gu_TY:_eqarea_.svg

makes this plot:

PIC

The information printed to the window is the pmag_result_name in the data table, the method codes (here the geochronology method, the type of demagnetization code and the types of demagnetization experiments), and each site mean declination inclination. The information following “mode 1” are the bootstrapped ellipse parameters.

Some data are study averages and some are individual sites.

Use magic_select.py to select only the individual site data. Try the following:

 $ magic_select.py -f pmag_results.txt -key data_type i ’T’ -F site_results.txt  
$ eqarea_magic.py -f site_results.txt -ell Bv  
21  records read from  ./site_results.txt  
All  
sr01 GM-ARAR-AP:LP-DC5:LP-DIR-AF:LP-DIR-T:LP-PI-ALT-PTRM:LP-PI-TRM-ZI   330.1    64.9  
sr03 GM-ARAR-AP:LP-DC5:LP-DIR-AF:LP-DIR-T   151.8   -57.5  
sr04 GM-ARAR-AP:LP-DC5:LP-DIR-AF:LP-DIR-T    16.5    54.6  
sr09 GM-ARAR-AP:LP-DC5:LP-DIR-AF:LP-DIR-T    14.6    77.9  
.  
.  
.  
sr36 GM-ARAR-AP:LP-DC5:LP-DIR-AF:LP-DIR-T   197.6   -65.5  
sr39 GM-ARAR-AP:LP-DC5:LP-DIR-AF:LP-DIR-T:LP-PI-ALT-PTRM:LP-PI-TRM-ZI   188.1   -47.2  
sr40 GM-ARAR-AP:LP-DC5:LP-DIR-AF:LP-DIR-T   192.9   -60.7  
 S[a]ve to save plot, [q]uit, Return to continue:  a  
1  saved in  LO:_Snake River_SI:__SA:__SP:__CO:_g_TY:_eqarea_.svg  
2  saved in  LO:_Snake River_SI:__SA:__SP:__CO:_g_TY:_bdirs_.svg

which produces plots like:

PIC

5.2.40 find_EI.py

[Essentials Chapter 14][find_EI docs]

A data file was prepared using tk03.py to simulate directions at a latitude of 42. Use the program dipole_pinc.py to find the expected inclination at this latitude!

$ dipole_pinc.py -i  
Paleolat for converting to inclination: <cntl-D> to quit 42  
   61.0  
^D  
Paleolat for converting to inclination: <cntl-D> to quit ^D  
 Good-bye

The data were “flattened” with the program squish.py, which uses the formula tan(Io) = f tan(If) to simulate inclination error, and saved in a data file find_EI_example.dat in the find_EI directory.

$ tk03.py -lat 42 > tk03.out  
$ squish.py -flt .4 -f tk03.out -F find_EI_example.dat
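
The flattening operation itself is one line: the observed inclination is Io = arctan(f tan(If)). As a sketch:

import numpy as np

def squish(inc, f):
    """Flatten an inclination (degrees) by flattening factor f."""
    return np.degrees(np.arctan(f * np.tan(np.radians(inc))))

print(round(squish(61., 0.4), 1))  # the expected 61 at 42 N flattens to ~ 35.8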

Use the program find_EI.py to find the optimum flattening factor f which, when used to “unflatten” the data, yields inclination and elongation (ratio of major and minor eigenvalues of the orientation matrix; see the section on eigenvalues in the textbook) most consistent with the TK03.GAD paleosecular variation model of Tauxe and Kent (2004).

$  find_EI.py -f find_EI_example.dat  
Bootstrapping.... be patient  
25  out of  1000  
50  out of  1000  
75  out of  1000  
100  out of  1000  
.  
.  
.  
Io Inc  I_lower, I_upper, Elon, E_lower, E_upper  
   38.9  =>     58.8 _    48.4 ^    67.0:  1.4679 _ 1.2912 ^ 1.7337  
S[a]ve plots - <return> to quit:  a  
2  saved in  findEI_ei.svg  
3  saved in  findEI_cdf.svg  
1  saved in  findEI_eq.svg  
4  saved in  findEI_v2.svg  
 

which produces these plots:

PIC

In this example, the original expected inclination at paleolatitude of 42 (61) is recovered within the 95% confidence bounds.

Note that the correction implemented by find_EI.py is, by default, meant to be a “study” level correction in which the distribution of data is determined primarily by secular variation (the TK03.GAD model). Alternatively, if you wish to correct “flattened” data to a Fisher distribution (a “site level” correction; see section 4.1 of Tauxe and Kent, 2004) you can specify this with the -sc flag like so:

$  find_EI.py -f find_EI_example.dat -sc

Note that many directions (~100) are needed for this correction-by-site to be reliable.

5.2.41 fisher.py

[Essentials Chapter 11] [fisher docs]

Draw a set of 10 directions from a Fisher distribution with a κ of 30 using fisher.py:

$ fisher.py -k 30 -n 10  
  233.7    81.4  
  357.0    76.4  
  272.5    62.8  
  137.0    70.0  
   83.7    71.2  
...  
  

You could plot the output with, e.g., eqarea.py.

Note that every instance of this program draws a different distribution, so yours will look different in detail.

5.2.42 fishqq.py

[Essentials Chapter 11] [fishqq docs]

Test whether a set of 100 data points generated with fisher.py are in fact Fisher distributed by using a Quantile-Quantile plot:

$ fisher.py -k 30 -n 100 >fishqq_example.txt  
$ fishqq.py -f fishqq_example.txt

produces these plots:

PIC

which support a Fisher distribution for these data.

5.2.43 fishrot.py

[Essentials Chapter 11] [fishrot docs]

Draw a set of 5 directions from a Fisher distribution with a true mean declination of 33, a true mean inclination of 41, and a κ of 50:

$ fishrot.py -n 5 -D 33 -I 41 -k 50  
   35.8    32.8  
   36.2    30.2  
   37.5    41.8  
   28.6    23.9  
   26.1    32.8  
   

5.2.44 foldtest.py

[Essentials Chapter 12] [foldtest docs]

Use foldtest.py to perform the Tauxe and Watson (1994) foldtest on the data in foldtest_example.dat.

$ foldtest.py -f foldtest_example.dat  
doing  1000  iterations...please be patient.....  
0  
50  
100  
.  
.  
.  
 
82 - 117 Percent Unfolding  
range of all bootstrap samples:  70  -  137  
S[a]ve all figures, <Return> to quit   a  
3  saved in  fold_ta.svg  
2  saved in  fold_st.svg  
1  saved in  fold_ge.svg  
 

which gives the plots:

PIC

Apparently these directions were acquired prior to folding, because the 95% confidence bounds on the degree of untilting required for maximizing the concentration of the data (the maximum in the principal eigenvalue τ1 of the orientation matrix; see the section on eigenvalues in the textbook) include 100%.

5.2.45 foldtest_magic.py

[Essentials Chapter 12] and [MagIC Database] [foldtest_magic docs]

This program performs the same test as foldtest.py. The only difference is that it reads MagIC formatted input files and allows the application of selection criteria as specified in the pmag_criteria.txt formatted file.

To begin, download the file from this reference: Halls et al. (1974; doi: 10.1139/e74-113) at https://earthref.org/MagIC/DOI/10.1139/e74-113 and unpack it with download_magic.py. Then run foldtest_magic.py for the fold test.

$ download_magic.py -f magic_contribution_11087.txt  
$ foldtest_magic.py -f pmag_sites.txt -fsi er_sites.txt  
doing  1000  iterations...please be patient.....  
0  
50  
....  
900  
950  
96 - 149 Percent Unfolding  
S[a]ve all figures, <Return> to quit  
 a  
3  saved in  foldtest_ta.svg  
2  saved in  foldtest_st.svg  
1  saved in  foldtest_ge.svg  

which gives the plots:

PIC

5.2.46 gaussian.py

[Essentials Chapter 11] [gaussian docs]

Use gaussian.py to generate a set of 100 normally distributed data points drawn from a population with a mean of 10.0 and a standard deviation of 3. Save it to a file named gauss.out.

$ gaussian.py -s 3 -n 100 -m 10. -F gauss.out

You can check the sample mean and standard deviation with stats.py or make a histogram of the data with histplot.py.

5.2.47 gobing.py

[Essentials Chapter 12] [gobing docs]

Use the dataset generated in the eqarea_ell.py example. Calculate Bingham parameters using gobing.py instead of within the plotting program:

$ gobing.py -f tk03.out  
  357.8    60.3     4.5   105.7    10.0     4.5    21.0   -27.6 20

which according to the help message from gobing.py is:

mean dec, mean inc, Eta, Deta, Ieta, Zeta, Zdec, Zinc, N

5.2.48 gofish.py

[Essentials Chapter 11] [gofish docs]

Draw a set of 10 directions from a Fisher distribution with a true mean declination of 15, a true mean inclination of 41, and a κ of 50 and save it to a file; then use gofish.py to calculate the Fisher parameters:

$  fishrot.py -n 10 -D 15 -I 41 -k 50 >fishrot.out  
$ gofish.py -f  fishrot.out  
   10.8    39.6    10     9.8484     59.4     6.3    10.5

which according to the help message from gofish.py -h is: mean dec, mean inc, N, R, k, a95, csd. Your results will vary because every instance of fishrot.py draws a different sample from the Fisher distribution.
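
These parameters come from the resultant of the N unit vectors: R is its length, k = (N-1)/(N-R), cos(α95) = 1 - ((N-R)/R)(20^(1/(N-1)) - 1), and csd = 81/sqrt(k). A numpy sketch of these standard formulas (not the program's source):

import numpy as np

def fisher_mean(decs, incs):
    """Fisher statistics: mean dec, mean inc, N, R, k, a95, csd."""
    d, i = np.radians(decs), np.radians(incs)
    xyz = np.array([np.cos(i) * np.cos(d), np.cos(i) * np.sin(d), np.sin(i)])
    res = xyz.sum(axis=1)  # resultant vector of the unit vectors
    R = np.linalg.norm(res)
    N = len(decs)
    dec = np.degrees(np.arctan2(res[1], res[0])) % 360.
    inc = np.degrees(np.arcsin(res[2] / R))
    k = (N - 1.) / (N - R)  # precision parameter
    a95 = np.degrees(np.arccos(1. - (N - R) / R * (20.**(1. / (N - 1.)) - 1.)))
    csd = 81. / np.sqrt(k)  # circular standard deviation
    return dec, inc, N, R, k, a95, csd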

5.2.49 gokent.py

[Essentials Chapter 12] [gokent docs]

Draw a set of 20 data points from a TK03.GAD distribution predicted for a latitude of 42N (see Essentials Chapter 14), without reversals. Calculate Kent parameters using gokent.py:

$ tk03.py -n 20 -lat 42 >tk03.out; gokent.py -f  tk03.out  
   359.2    55.0     9.3   147.7    30.8     7.8   246.8    14.9 20

which according to the help message from gokent.py is: mean dec, mean inc, Eta, Deta, Ieta, Zeta, Zdec, Zinc, N

5.2.50 goprinc.py

[Essentials Chapter 12] [goprinc docs]

Draw a set of 20 data points from a TK03.GAD distribution predicted for a latitude of 42N (see Essentials Chapter 14), including reversals. Calculate the eigenparameters of the orientation matrix (the principal components) using goprinc.py:

$ tk03.py -n 20 -lat 42 -rev >tk03.out; goprinc.py -f tk03.out  
0.93863    11.0    58.6 0.04258   226.4    26.5 0.01879   128.3    15.6 20

which according to the help message from goprinc.py is: τ1, V1D, V1I, τ2, V2D, V2I, τ3, V3D, V3I, N.
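
The eigenparameters are those of the orientation matrix T = (1/N) Σ xᵢ xᵢᵀ built from the unit vectors of the directions. A numpy sketch (not the program's source):

import numpy as np

def principal_components(decs, incs):
    """Eigenvalues (descending) and eigenvectors of the orientation matrix."""
    d, i = np.radians(decs), np.radians(incs)
    X = np.column_stack([np.cos(i) * np.cos(d),
                         np.cos(i) * np.sin(d),
                         np.sin(i)])
    T = X.T @ X / len(decs)           # orientation matrix (unit trace)
    vals, vecs = np.linalg.eigh(T)    # ascending order
    return vals[::-1], vecs[:, ::-1]  # tau_1 >= tau_2 >= tau_3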

5.2.51 grab_magic_key.py

[MagIC Database] [grab_magic_key docs]

grab_magic_key.py is a utility that will print out any column (-key option) from any [MagIC Database] formatted file. For example, we could print out all the site latitudes from the er_sites.txt file downloaded in the download_magic.py example:

$ grab_magic_key.py -f er_sites.txt -key site_lat  
42.60264  
42.60264  
42.60352  
42.60104  
...

You could save the data in a file with the output redirect function (>) and plot them with, say, plot_cdf.py.

$ grab_magic_key.py -f er_sites.txt -key site_lat > lats  
$ plot_cdf.py -f lats

which produces the fascinating (NOT!) plot:

PIC

5.2.52 histplot.py

[histplot docs]

Make a histogram of the data generated with the gaussian.py program.

$ histplot.py -f gauss.out

which makes a plot similar to:

PIC

5.2.53 hysteresis_magic.py

[Essentials Chapter 5], [Essentials Chapter 7], [Essentials Appendix C.1] & [MagIC Database] [hysteresis_magic docs]

Use the program hysteresis_magic.py to plot the data in hysteresis_magic_example.dat (from Ben-Yosef et al., 2008), which were imported into MagIC using agm_magic.py.

$ hysteresis_magic.py -f hysteresis_magic_example.dat  
 
IS06a-1 1 out of  8  
S[a]ve plots, [s]pecimen name, [q]uit, <return> to continue  
 a  
1  saved in  _IS06a-1_hyst.svg  
3  saved in  _IS06a-1_DdeltaM.svg  
2  saved in  _IS06a-1_deltaM.svg  
5  saved in  _IS06a-1_irm.svg

which makes the plots:

PIC

5.2.54 igrf.py

[Essentials Chapter 2] [igrf docs]

Use the program igrf.py to estimate the field on June 1, 1995 in Amsterdam, The Netherlands (52.5 N, 5E).

$ igrf.py -i  
Decimal year: <cntrl-D to quit> 1995.5  
Elevation in km [0]  
Latitude (positive north) 52.5  
Longitude (positive east) 5  
  357.9    67.4    48534  
 
Decimal year: <cntrl-D to quit> ^D  
 Good-bye

Now read from the file igrf_example.dat which requests field information for near San Diego for the past 3000 years. The program will evaluate the field vector for that location for dates from 1900 to the present using the IGRF/DGRF coefficients in the IGRF-11 model available from: http://www.ngdc.noaa.gov/IAGA/vmod/igrf.html. For the period from 1000 BCE to 1800, the program uses the CALS3k.4b of Korte and Constable (2011) available at: http://earthref.org/ERDA/1142/. Prior to that and back to 8000 BCE, the program uses the coefficients of Korte et al., 2011 for the CALS10k.1b, available at: http://earthref.org/ERDA/1403/

$ igrf.py -f igrf_example.dat -F igrf.out

You can also make a plot of secular variation of the field elements by specifying the age range and location on the command line. Try this for the location of San Diego for the age range -3000 to 1950 in increments of 50 years:

$ igrf.py -ages -3000 1950 50 -loc 33 -117 -mod pfm9k -plt  
     S[a]ve to save figure, <Return>  to quit

and get this plot:

PIC
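With a recent full PmagPy install, the same evaluation is available from Python through the ipmag module. A sketch, assuming the ipmag.igrf interface, which takes [decimal year, altitude (km), latitude, longitude] and returns declination, inclination and intensity (nT):

from pmagpy import ipmag

# June 1, 1995 in Amsterdam (52.5N, 5E) at sea level
print(ipmag.igrf([1995.5, 0, 52.5, 5]))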

5.2.55 install_etopo.py

Some mapping utilities use the Etopo package for topography and these data sets do not come standard with the Python installation. To install these properly, simply type:

$ install_etopo.py

5.2.56 incfish.py

[ Essentials Chapter 11] [incfish docs]

Use the program incfish.py to calculate the average inclination for inclination-only data simulated by fishrot.py for an average inclination of 60. If you save the declination, inclination pairs, you can compare the incfish.py answer with the Fisher mean. The data file incfish_example_di.dat has the declination, inclination pairs and incfish_example_inc.dat has just the inclinations.

$ fishrot.py -I 60 -k 10 > incfish_example_di.dat  
$ awk ’{print $2}’ incfish_example_di.dat > incfish_example_inc.dat  
$ incfish.py -f incfish_example_inc.dat  
   57.1    61.0  100     92.9    13.9     1.0  
$ gofish.py -f incfish_example_di.dat  
    1.8    62.9    100    91.2727     11.3     4.4    24.0

The output for incfish.py is: [gaussian] mean inc, Fisher inc, N, R, k, α95. You can see that the incfish.py result is much closer to the Fisherian result than the gaussian mean is.

5.2.57 irmaq_magic.py

MagIC [irmaq_magic docs]

Someone (Saiko Sugisaki) measured a number of samples from IODP Expedition 318 Hole U1359A for IRM acquisition curves. She used the ASC impulse magnetizer’s coils #2 and #3 and saved the data in the SIO format. Convert these to the MagIC measurements format, combine them into a single magic_measurements.txt file with combine_magic.py, and plot the data with irmaq_magic.py.

$ sio_magic.py -f U1359A_IRM_coil2.txt -LP I -V 2 -F U1359A_IRM_coil2.magic \  
   -loc U1359A -spc 0 -ncn 5  
$ sio_magic.py -f U1359A_IRM_coil3.txt -LP I -V 3 -F U1359A_IRM_coil3.magic \  
    -loc U1359A -spc 0 -ncn 5  
$ combine_magic.py -F magic_measurements.txt -f U1359A_IRM_coil2.magic \  
    U1359A_IRM_coil3.magic  
$ irmaq_magic.py  
559  records read from  magic_measurements.txt  
U1359A  
 S[a]ve to save plot, [q]uit,  Return to continue:  a  
1  saved in  U1359A_LP-IRM.svg

which produces the plot:

PIC

5.2.58 k15_magic.py

[Essentials Chapter 13]; [MagIC Database] [k15_magic docs]

Someone took a set of samples from a dike margin in the Troodos Ophiolite and measured their anisotropy of magnetic susceptibility on a Kappabridge KLY 2.0 instrument in the SIO laboratory:

tr245f     80.00   -46.0 204 25  
   995.     999.     993.     995.    1000.  
  1004.     999.    1001.    1004.     999.  
   998.     997.    1002.     998.     997.  
tr245g     52.00   -23.0 204 25  
  1076.    1066.    1072.    1077.    1067.  
  1076.    1068.    1072.    1076.    1067.  
  1068.    1075.    1071.    1068.    1076.  
tr245h    123.00   -29.0 204 25  
  1219.    1231.    1218.    1220.    1232.  
  1230.    1231.    1234.    1230.    1231.  
  1221.    1221.    1225.    1221.    1221.  
tr245i1    68.00   -26.0 204 25  
  1031.    1027.    1025.    1032.    1028.  
  1035.    1031.    1035.    1035.    1032.  
  1030.    1027.    1032.    1030.    1027.  
  

The first line of each set of four has the specimen name, azimuth, plunge, and bedding strike and dip; the next three lines are sets of five measurements in the 15 positions recommended by Jelinek (1977):

PIC

The 15 measurements for each specimen, along with orientation information and the specimen name were saved in the file k15_example.dat. Convert these to the MagIC format using the program k15_magic.py:

$ k15_magic.py -spc 0 -f k15_example.dat -loc ‘‘Troodos Ophiolite‘‘  
Data saved to:  ./er_samples.txt ./rmag_anisotropy.txt ./rmag_results.txt ./k15_measurements.txt  
 

You can plot the output of this example (default file rmag_anisotropy.txt) using the program aniso_magic.py.

5.2.59 k15_s.py

[Essentials Chapter 13]; [MagIC Database] [k15_s docs]

Use k15_s.py to calculate the best-fit tensor elements and residual error for the data in the file k15_example.dat (same file as for k15_magic.py). These are: the specimen name, azimuth and plunge, and the strike and dip, followed by the 15 measurements made using the Jelinek (1977) scheme shown in the k15_magic.py example. Calculate the .s data in specimen, geographic and tilt adjusted coordinates:

$ k15_s.py -f k15_example.dat  
0.33146986 0.33413991 0.33439023 0.00075095 -0.00083439 -0.00016688 0.00008618  
0.33335925 0.33335925 0.33328149 -0.00155521 -0.00132193 0.00116641 0.00017193  
0.33097634 0.33573565 0.33328801 0.00163177 0.00013598 0.00000000 0.00018131  
...  
$ k15_s.py -f k15_example.dat -crd g  
0.33412680 0.33282733 0.33304587 -0.00015289 0.00124840 0.00135721 0.00008618  
0.33556300 0.33198264 0.33245432 0.00087259 0.00024141 0.00096166 0.00017193  
0.33584908 0.33140627 0.33274469 0.00131844 0.00118816 0.00002987 0.00018131  
...  
$ k15_s.py -f k15_example.dat -crd t  
0.33455712 0.33192658 0.33351630 -0.00043563 0.00092770 0.00105006 0.00008618  
0.33585501 0.33191562 0.33222935 0.00055960 -0.00005314 0.00064730 0.00017193  
0.33586666 0.33084926 0.33328408 0.00142269 0.00013235 0.00009201 0.00018131  
...

5.2.60 kly4s_magic.py

[Essentials Chapter 13]; [MagIC Database] [KLY4S_magic docs]

The program AMSSpin available for downloading from http://earthref.org/ERDA/940/ generates data for the Kappabridge KLY4S spinning magnetic susceptibility instrument as described by Gee et al. (2008).

The files have the format:

318-U1359B-011H-1-W-75  0.3415264 0.3403534 0.3181202 -0.0006608 0.0000948 0.0023186  2.284E-4 1.26728E-4 1.31105E-8 -21.81   300.0   10.00 -5.13E-6 11/04/11 12:54 Saiko  
318-U1359B-011H-2-W-76  0.3369955 0.3361130 0.3268916 0.0000845 -0.0005236 0.0003290  3.500E-4 7.85130E-5 3.16381E-8 -21.81   300.0   10.00 -4.11E-6 11/04/11 12:57 Saiko  
318-U1359B-011H-3-W-78  0.3373485 0.3364144 0.3262371 -0.0002281 0.0005881 -0.0011413  3.523E-4 1.04700E-4 8.77716E-9 -21.81   300.0   10.00 -4.11E-6 11/04/11 13:00 Saiko  
318-U1359B-011H-4-W-96  0.3343428 0.3323376 0.3333195 0.0007210 -0.0005503 0.0011256  
 

where the columns are:

Specimen S_1 S_2 S_3 S_4 S_5 S_6 χb(μSI) date time user

Output files are in the format of the file KLY4S_magic_example.dat (found in the Measurement_Import/KLY4S_magic folder). One option is for orientation information to be output as an azdip formatted file (see azdip_magic.py). Try it out on some data from IODP Expedition 318 (Hole U1359B) published by Tauxe et al. (2012). Note that the sample naming convention is #5 (-ncn 5; the sample name is the same as the site) and the specimen name is the same as the sample (-spc 0); run kly4s_magic.py as follows:

$ kly4s_magic.py -f KLY4S_magic_example.dat -spc 0 -ncn 5

This command will create the files needed by the MagIC database and the data can be plotted using aniso_magic.py. If you were to import the sample files from the LIMS database for these samples, you could plot them versus depth, or as equal area projections, using ani_depthplot.py and aniso_magic.py respectively.

5.2.61 lnp_magic.py

[Essentials Chapter 11], [Essentials Appendix C2.2] & [MagIC Database] [lnp_magic docs]

This program will take pmag_specimen formatted MagIC files (for example, generated by zeq_magic.py) and plot data by site, combining best-fit lines and best-fit planes using the method described in Essentials Appendix C2.2. Try this out on the data from the San Francisco Volcanics, notorious for lightning strikes, published by Tauxe et al., 2003. These can be downloaded from the MagIC database from http://earthref.org/MAGIC/m000629dt20061213090720/ and unpacked with download_magic.py.

$ download_magic.py -f zmab0001193tmp02.txt  
working on:  ’er_locations’  
er_locations  data put in  ./er_locations.txt  
working on:  ’er_sites’  
er_sites  data put in  ./er_sites.txt  
working on:  ’er_samples’  
er_samples  data put in  ./er_samples.txt  
...  
$ lnp_magic.py -f pmag_specimens.txt -crd g  
sv01  
Site lines planes  kappa   a95   dec   inc  
sv01 0  5     286      6.6    179.0    -54.3  4.9948  
tilt correction:  0  
s[a]ve plot, [q]uit, <return> to continue:  
 a  
1  saved in  sv01_g_eqarea.svg  

which generated this figure:

PIC

5.2.62 lowrie.py

[Essentials Chapter 8] [lowrie docs]

Someone (Saiko Sugisaki) subjected a number of specimens from IODP Expedition 318 Hole U1359A to a 3-D IRM experiment and saved the data in the SIO format. Use lowrie.py to make plots of blocking temperature for the three coercivity fractions.

 
$ lowrie.py -f lowrie_example.dat  
318-U1359A-002H-1-W-109  
S[a]ve figure? [q]uit, <return> to continue   a  
1  saved in  lowrie:_318-U1359A-002H-1-W-109_.svg  
318-U1359A-002H-4-W-65  
S[a]ve figure? [q]uit, <return> to continue   q

which produces plots like this:

PIC

5.2.63 lowrie_magic.py

[Essentials Chapter 8] and [MagIC Database] [lowrie_magic docs]

This program works exactly like lowrie.py, but on magic_measurements.txt formatted files. Use sio_magic.py to import the lowrie_example.dat data file into the MagIC format. Then use lowrie_magic.py to plot the data:

$ sio_magic.py -f lowrie_example.dat -LP I3d -loc U1359A -F lowrie_magic_example.dat  
averaging: records  11 12  
318-U1359A-004H-2-W-18 started with  15  ending with  14  
Averaged replicate measurements  
averaging: records  13 14  
318-U1359A-004H-2-W-44 started with  15  ending with  14  
Averaged replicate measurements  
averaging: records  14 16  
318-U1359A-012H-5-W-70 started with  16  ending with  14  
Averaged replicate measurements  
averaging: records  14 15  
$ lowrie_magic.py -f lowrie_magic_example.dat  
318-U1359A-002H-1-W-109  
S[a]ve figure? [q]uit, <return> to continue

5.2.64 magic_select.py

[MagIC Database] [magic_select docs]

This program takes any MagIC formatted file and selects lines that match the search criteria, saving the output as a new MagIC formatted file of the same type. Selection criteria can match whole or partial strings, avoid whole or partial strings, or evaluate minimum or maximum values. The syntax is to specify the input file with the -f option and the column you wish to search on with -key, followed by the string you wish to use as a criterion and one of: ’T’ (match exactly), ’F’ (avoid entirely), ’has’ (match partially), ’not’ (avoid any partial match), or ’min’, ’max’ and ’eval’ for evaluating numerical values. The -F option sets the output file.

For example, you could pick out all the records that match a certain site name string. You could select out records with specified method codes. You could get all records with inclinations that are positive.

Use magic_select.py to pick out all the best-fit lines based on AF demagnetization data from the pmag_specimens.txt file unpacked from the MagIC database in the download_magic.py example.

$ magic_select.py -f pmag_specimens.txt -key magic_method_codes LP-DIR-AF has -F AF_specimens.txt  
76  records written to  ./AF_specimens.txt  
  $ magic_select.py -f AF_specimens.txt -key magic_method_codes DE-BFL has -F AF_BFL_specimens.txt  
69  records written to  ./AF_BFL_specimens.txt  
 
 

5.2.65 make_magic_plots.py

[MagIC Database] [make_magic_plots docs]

This program will inspect the files in the current directory and autogenerate some standard plots.

5.2.66 Measurement Import Scripts

_2g_bin_magic.py

[_2g_bin_magic.py docs]

This program imports the binary format generated by the 2G proprietary software. It will place the orientation information in the er_samples.txt file and er_sites.txt file by default. Naming and orientation conventions can be specified as in the sections on Naming conventions and Orientation conventions. You can also specify sampling method codes that pertain to ALL samples (see section on field information.)

agm_magic.py

[Essentials Chapter 5] & [MagIC Database] [agm_magic docs]

This program imports Micromag hysteresis files into magic_measurements formatted files. Because this program imports data into the MagIC database, specimens need also to have sample/site/location information which can be provided on the command line. If this information is not available, for example if this is a synthetic specimen, specify -syn SYN_NAME for synthetic on the command line instead of the -spn SPEC_NAME switch.

Someone named Lima Tango has measured a synthetic specimen named myspec for hysteresis and saved the data in a file named agm_magic_example.agm. The backfield IRM curve for the same specimen was saved in agm_magic_example.irm. Use the program agm_magic.py to import the data into magic_measurements formatted output files. These were measured using cgs units, so be sure to set the units switch properly. Combine the two output files together using combine_magic.py. The result can be plotted using hysteresis_magic.py or quick_hyst.py; hysteresis_magic.py will also calculate various hysteresis parameters and put them in the relevant magic tables for you.

$ agm_magic.py -spn myspec --usr ‘‘Lima Tango‘‘ -f agm_magic_example.agm -u cgs  
 
results put in  ./agm_measurements.txt  
$ agm_magic.py -spn myspec --usr ‘‘Lima Tango‘‘ -f agm_magic_example.irm -u cgs  -bak  
 
results put in  ./irm_measurements.txt  
$ combine_magic.py -F magic_measurements.txt -f agm_measurements.txt irm_measurements.txt  

cit_magic.py

[cit_magic.py docs]

Craig Jones’ PaleoMag software package (http://cires.colorado.edu/people/jones.craig/PMag3.html) imports various file formats, including the ’CIT’ format developed for the Caltech lab and now used in the magnetometer control software that ships with 2G magnetometers with vertical sample changer systems. The documentation for the CIT sample format is here: http://cires.colorado.edu/people/jones.craig/PMag_Formats.html#SAM_format. Demagnetization data for each specimen are in their own file in a directory with all the data for a site or study. These files are strictly formatted with fields determined by the character position in the line. A data file generated in the UC Berkeley paleomagnetism lab for a specimen published in Fairchild et al. (2016) (available in the MagIC database: https://earthref.org/MagIC/11292/) has this format:

PI47- 9a mag compass orientation (IGRF corrected)  
      0 328.1  21.0  0.0  0.0  1.0  
NRM    103.2  66.0 103.2  66.0 8.81E-05 001.7 205.8  48.5 0.074307 0.216786 0.124038 hargrave 2015-09-16 18:32:38  
TT  50 103.2  66.5 103.2  66.5 8.74E-05 001.4 205.5  49.1 0.104540 0.153198 0.116564 hargrave 2015-09-19 18:43:09  
TT  75 106.2  65.9 106.2  65.9 8.53E-05 001.6 207.6  49.0 0.073179 0.193176 0.109758 hargrave 2015-09-21 22:26:07  
TT 100 107.6  64.5 107.6  64.5 8.18E-05 001.9 209.2  47.9 0.050586 0.250049 0.109796 hargrave 2015-09-24 10:00:09  
TT 125 103.9  63.0 103.9  63.0 7.49E-05 001.1 207.9  45.9 0.048136 0.090764 0.100753 hargrave 2015-09-24 15:53:38  
TT 150 107.5  60.3 107.5  60.3 7.01E-05 001.4 211.6  44.1 0.048329 0.118106 0.116416 hargrave 2015-09-25 12:36:42  
TT 175 114.0  57.5 114.0  57.5 6.55E-05 001.4 217.4  42.8 0.094454 0.066698 0.103792 hargrave 2015-09-26 14:01:11  
TT 200 110.5  57.5 110.5  57.5 6.33E-05 001.5 215.0  42.1 0.117744 0.073840 0.087541 hargrave 2015-09-26 19:58:01  
TT 225 114.7  55.8 114.7  55.8 6.05E-05 002.9 218.7  41.4 0.163297 0.240928 0.100647 hargrave 2015-09-28 19:49:38  
TT 250 110.5  56.1 110.5  56.1 5.88E-05 001.3 215.7  40.8 0.085600 0.041973 0.101365 hargrave 2015-10-01 12:08:19  
TT 275 114.0  54.3 114.0  54.3 5.35E-05 002.9 219.0  39.8 0.121358 0.235405 0.081647 hargrave 2015-10-05 19:00:28  
TT 300 113.7  53.9 113.7  53.9 4.88E-05 001.2 219.0  39.4 0.044699 0.063687 0.074005 hargrave 2015-10-06 02:35:35

There must be a file with the suffix ‘.sam’ in the same directory as the specimen data files which gives details about the specimens and a list of the specimen files in the directory. The .sam file has the form:

PI47-  
 48.7 -87.0   0.0  
PI47-1a  
PI47-2a  
PI47-3a  
PI47-4a  
PI47-5a  
PI47-6a  
PI47-7a  
PI47-8a  
PI47-9a

The first line is a comment (in this case the site name), the second is the latitude and longitude followed by a declination correction. In these data, the declination correction was applied to the specimen orientations so the value of the declination correction is set to be 0.

For detailed description of the file format, check the PaleoMag website.

Use the program cit_magic.py to import the data files from the example data files in the CIT_magic directory of the Measurement_Import directory of the data_files directory. The location name was “Slate Islands”, the naming convention was #2, the specimen name is specified by 1 character (-spc 1), we don’t wish to average replicate measurements (-A), and the samples were collected by drilling and oriented with a magnetic compass (-mcd "FS-FD:SO-MAG").

$ cit_magic.py -f PI47-.sam -loc "Slate Islands" -spc 1 -ncn 2 -mcd "FS-FD:SO-MAG" -A  
 
PI47-  
 
9  records written to file  ./er_specimens.txt  
specimens stored in  ./er_specimens.txt  
9  records written to file  ./er_samples.txt  
samples stored in  ./er_samples.txt  
1  records written to file  ./er_sites.txt  
sites stored in  ./er_sites.txt  
266  records written to file  ./magic_measurements.txt  
data stored in  ./magic_measurements.txt

These data can now be viewed and interpreted using, for example zeq_magic.py.

generic_magic.py

[generic_magic.py docs]

If you have a data file format that is not supported, you can relabel column headers to fit the generic format in generic_magic.py:

specimen   treatment   treatment_type   moment     dec_s   inc_s
sr01e2     0           A                2.93E-02   200.6   27.9
sr01e2     10          A                2.83E-02   201     27.7
sr01e2     20          A                2.61E-02   200.5   27.4
sr01e2     30          A                2.37E-02   199.9   27.7
sr01e2     60          A                1.85E-02   202.5   27.9
...

For more options, here is the help message for generic_magic.py :

        A generic file is a tab-delimited file. Each columns should have a header.  
        The file must include the following headers (the order of the columns is not important):  
 
            specimen  
                string specifying specimen name  
 
            treatment:  
                a number with one or two decimal point (X.Y)  
                coding for thermal demagnetization:  
                    0.0 or 0 is NRM.  
                    X is temperature in Celsius  
                    Y is always 0  
                coding for AF demagnetization:  
                    0.0 or 0 is NRM.  
                    X is AF peak field in mT  
                    Y is always 0  
                coding for Thellier-type experiment:  
                    0.0 or 0 is NRM  
                    X is temperature in Celsius  
                    Y=0: zerofield  
                    Y=1: infield  
                    Y=2: pTRM check  
                    Y=3: pTRM tail check  
                    Y=4: Additivity check  
 
            treatment_type:  
                N: NRM  
                A: AF  
                T: Thermal  
 
            moment:  
                magnetic moment in emu !!  
 
        In addition, at least one of the following sets of headers is required:  
 
            dec_s:  
                declination in specimen coordinate system (0 to 360)  
            inc_s:  
                inclination in specimen coordinate system (-90 to 90)  
 
            dec_g:  
                declination in geographic coordinate system (0 to 360)  
            inc_g:  
                inclination in geographic coordinate system (-90 to 90)  
 
            dec_t:  
                declination in tilt-corrected coordinate system (0 to 360)  
            inc_t:  
                inclination in tilt-corrected coordinate system (-90 to 90)  
 

This program is called within Pmag GUI but could be used as a stand alone program if you wish.
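Writing a generic file from some other format usually takes only a few lines of Python. A sketch that writes a valid AF demagnetization file (the data and the output file name are made up for illustration):

import csv

rows = [('sr01e2', 0, 2.93e-2, 200.6, 27.9),    # hypothetical measurements:
        ('sr01e2', 10, 2.83e-2, 201.0, 27.7)]   # (specimen, mT, emu, dec_s, inc_s)

with open('my_study.generic', 'w') as out:
    writer = csv.writer(out, delimiter='\t')
    writer.writerow(['specimen', 'treatment', 'treatment_type',
                     'moment', 'dec_s', 'inc_s'])
    for spec, af, moment, dec, inc in rows:
        # treatment is X.Y; Y is always 0 for AF demagnetization
        writer.writerow([spec, '%1.1f' % af, 'A', moment, dec, inc])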

huji_magic.py

[huji_magic.py docs]

This program was developed for the late Prof. Hagai Ron of the Hebrew University, Jerusalem, for files with this format:

mk118 07R-08329 5/10/2007 7:35 N 0 10.4 73.9 10.4 73.9 2.23E-03  
mk118 07R-08330 5/10/2007 7:37 A 5 359.2 60 359.2 60 2.19E-03  
mk118 07R-08331 5/10/2007 7:38 A 10 358.4 48 358.4 48 1.50E-03  
mk118 07R-08332 5/10/2007 7:40 A 15 357.7 43.5 357.7 43.5 8.34E-04  
mk118 07R-08333 5/10/2007 7:42 A 20 358.4 42.5 358.4 42.5 4.89E-04  
mk118 07R-08334 5/10/2007 7:44 A 25 358.8 40.6 358.8 40.6 3.06E-04  
mk118 07R-08335 5/10/2007 7:46 A 30 356.6 41.1 356.6 41.1 2.16E-04  
mk118 07R-08336 5/10/2007 7:48 A 40 357 39.3 357 39.3 1.39E-04  
mk118 07R-08337 5/10/2007 7:50 A 50 357 40 357 40 1.03E-04  
mk118 07R-08338 5/10/2007 7:53 A 60 351.7 46.9 351.7 46.9 9.64E-05  
mk118 07R-08339 5/10/2007 7:55 A 70 350.7 48.7 350.7 48.7 8.89E-05  
mk118 07R-08340 5/10/2007 7:58 A 80 353.8 38.8 353.8 38.8 6.77E-05

huji_magic.py works in a similar fashion to sio_magic.py.

ldeo_magic.py

[MagIC Database] [ldeo_magic docs]

This program works in much the same fashion as sio_magic.py but the file formats are different. Here is a typical example:

isaf2.fix  
LAT:   .00  LON:    .00  
    ID     TREAT  I  CD    J    CDECL CINCL  GDECL GINCL  BDECL BINCL  SUSC  M/V  
________________________________________________________________________________  
is031c2       .0  SD  0 461.600 163.9  17.5  337.1  74.5  319.1  74.4    .0   .0

The first line is the file name and the second has the latitude and longitude information. The columns are:

        ID: specimen name  
        TREAT:  treatment step  
        I:  Instrument  
        CD:  Circular standard deviation  
        J: intensity.  assumed to be total moment in 10^-4 (emu)  
        CDECL:  Declination in specimen coordinate system  
        CINCL:  Inclination in specimen coordinate system  
        GDECL:  Declination in geographic coordinate system  
        GINCL:  Inclination in geographic coordinate system  
        BDECL:  Declination in bedding adjusted coordinate system  
        BINCL:  Inclination in bedding adjusted coordinate system  
        SUSC:  magnetic susceptibility (in micro SI)  
        M/V: mass or volume for normalizing (0 won’t normalize)

Use the program to import the file ldeo_magic_example.dat into the MagIC format. It was an AF demagnetization experiment.

$ ldeo_magic.py -f ldeo_magic_example.dat -LP AF  
results put in  magic_measurements.txt

iodp_csv_magic.py

Just for fun, download the whole core data from IODP Expedition 318, Site U1359, Hole A (Section Half set to ’A’) from the IODP LIMS database using WebTabular. These data were used in Tauxe et al. (2012) after some editing (removal of core ends and disturbed intervals). Convert them into the MagIC format using iodp_csv_magic.py:

$ iodp_csv_magic.py -f SRM_318_U1359_A_A.csv  
processing:  SRM_318_U1359_A_A.csv  
specimens stored in  ./er_specimens.txt  
samples stored in  ./er_samples.txt  
......  
Averaged replicate measurements  
data stored in  ./magic_measurements.txt

This takes a while so be patient....

You can also download the core summary data from the LIMS database and convert it to the MagIC format using Pmag GUI. Now you can plot the data and the core tops using core_depthplot.py.

pmd_magic.py

[pmd_magic.py docs]

This program imports .PMD formatted magnetometer files (used, for example, in the PaleoMac software of Cogné, 2003) into the MagIC format. (See http://www.ipgp.fr/~cogne/pub/paleomac/PMhome.html for the PaleoMac home page.) The version of these files that pmd_magic.py expects (UCSC version) contains demagnetization data for a single specimen and has a format like this:

Strat post corrected - Summit Springs, -  
ss0101a   a= 93.9   b= 61.0   s=  0.0   d=  0.0   v=11.0E-6m3  06/17/2003 12:00  
STEP  Xc [Am2]  Yc [Am2]  Zc [Am2]  MAG[A/m]   Dg    Ig    Ds    Is  a95  
H000 -7.46E-06 -1.61E-05 +5.31E-05 +5.09E+00  73.3  35.2  73.3  35.2  0.5  cryoSlug  
H002 -7.57E-06 -1.53E-05 +5.03E-05 +4.83E+00  73.2  35.7  73.2  35.7  0.7  cryoSlug  
H005 -7.35E-06 -1.16E-05 +3.62E-05 +3.52E+00  71.5  38.3  71.5  38.3  0.7  cryoSlug  
H008 -7.35E-06 -6.27E-06 +1.92E-05 +1.95E+00  68.5  47.1  68.5  47.1  0.7  cryoSlug  
H012 -6.84E-06 -3.32E-06 +1.20E-05 +1.29E+00  69.1  56.1  69.1  56.1  0.6  cryoSlug  
H018 -6.00E-06 -2.26E-06 +8.96E-06 +1.00E+00  69.3  60.5  69.3  60.5  0.6  cryoSlug  
H025 -5.16E-06 -1.74E-06 +7.18E-06 +8.19E-01  69.2  62.5  69.2  62.5  0.5  cryoSlug  
H050 -3.12E-06 -9.23E-07 +4.03E-06 +4.71E-01  69.3  64.6  69.3  64.6  0.5  cryoSlug  
H090 -1.80E-06 -5.32E-07 +2.23E-06 +2.65E-01  67.6  65.6  67.6  65.6  0.5  cryoSlug  
H180 -9.49E-07 -2.94E-07 +1.21E-06 +1.42E-01  67.7  64.8  67.7  64.8  0.5  cryoSlug

The first line is a comment line. The second line has the specimen name, the core azimuth (a=) and plunge (b=), which are assumed to be the lab arrow azimuth and plunge (orientation scheme #4). The third line is a header explaining the columns in the file.

Use pmd_magic.py to convert the file ss0101a.pmd in the directory ’PMD’ in the ’PMD_magic’ folder of the Measurement_import directory in the example data_files directory. These were taken at a location named ’Summit Springs’ and have a naming convention of the type XXXX[YYY], where YYY is the sample designation with Z characters from site XXX, or naming convention #4-2. A single character distinguishes the specimen from the sample (-spc 1). All samples were oriented with a magnetic compass.

$ pmd_magic.py -f ss0101a.pmd -loc ‘‘Summit Springs‘‘ -ncn 4-2 -spc 1 -mcd SO-MAG -F ss0101a.magic  
 results put in  ./magic_measurements.txt  
sample orientations put in  ./er_samples.txt

Because each file must be imported separately, you should use a different name for the output file for each input file (otherwise you will overwrite the default each time) and set the switch for sample file to append for subsequent imports:

$ pmd_magic.py -f ss0102a.pmd -loc ‘‘Summit Springs‘‘ -ncn 4-2 -spc 1 -mcd SO-MAG -F ss0102a.magic -Fsa er_samples.txt  
sample information will be appended to  ./er_samples.txt  
results put in  ./magic_measurements.txt  
sample orientations put in  ./er_samples.txt

After you finish importing all the data, combine the individual files together with combine_magic.py and look at them with, for example, zeq_magic.py.

sio_magic.py

[sio_magic docs]

The program sio_magic.py converts SIO format magnetometer files to the MagIC measurements format. It handles various experiment types, so read the help message. The SIO format is a space delimited file:

sc12b1      0.00        0.3      2.878E-2      185.0      9.3  
sc12b1      5.00        0.3      2.864E-2      187.7      9.6  
sc12b1      10.00      0.3      2.833E-2      185.5      10.0  
sc12b1      15.00      0.2      2.771E-2      188.2      10.0  
sc12b1      20.00      0.4      2.644E-2      187.6      10.1  
sc12b1      30.00      0.2      2.182E-2      186.0      9.9  
sc12b1      40.00      0.4      1.799E-2      185.3      9.9  
sc12b1      50.00      0.2      1.570E-2      187.9      9.6  
sc12b1      60.00      0.2      1.423E-2      185.6      10.0  
sc12b1      80.00      0.1      1.275E-2      187.4      9.5

The columns are:

Specimen treatment intensity declination inclination optional_string

The treatment field is the temperature (in centigrade), the AF field (in mT), the impulse field strength, etc. For special experiments like IRM acquisition, the coil number of the popular ASC impulse magnetizer can be specified if the treatment steps are in volts. The position for anisotropy experiments or whether the treatment is “in-field” or in zero field also require special formatting. The units of the intensity field are in cgs and the directions are relative to the ‘lab arrow’ on the specimen. Here are some examples of commonly used specimens and conversions from field arrow to lab arrow.

PIC

As an example, we use data from Sbarbori et al. (2009) measured on a set of samples from the location “Socorro”, including AF, thermal, and Thellier experimental data. These were saved in sio_af_example.dat, sio_thermal_example.dat, and sio_thellier_example.dat respectively. The lab field for the Thellier experiment was 25 μT and was applied along the specimen’s Z axis (phi=0, theta=90). Convert the example files into magic_measurements formatted files with names like af_measurements.txt, etc. Then combine them together with combine_magic.py:

 
Note:  all of these should actually be on ONE line:  
 
$ sio_magic.py -f sio_af_example.dat -F af_measurements.txt  
    -LP AF -spc 1 -loc Socorro  
 
averaging: records  11 13  
averaging: records  14 16  
averaging: records  17 19  
averaging: records  20 22  
sc12b1 started with  22  ending with  14  
Averaged replicate measurements  
results put in  af_measurements.txt  
 
 
    $ sio_magic.py -f sio_thermal_example.dat -F thermal_measurements.txt  \  
    -LP T -spc 1 -loc Socorro  
averaging: records  6 7  
sc12b2 started with  23  ending with  22  
Averaged replicate measurements  
results put in  thermal_measurements.txt  
 
 
$ sio_magic.py -f sio_thellier_example.dat -F  thellier_measurements.txt  \  
      -LP T -spc 1 -loc Socorro -dc 25 0 90  
averaging: records  10 11  
sc12b2 started with  56  ending with  55  
Averaged replicate measurements  
results put in  thellier_measurements.txt  
 
 
$ combine_magic.py -F magic_measurements.txt -f  thellier_measurements.txt   \  
       thermal_measurements.txt  af_measurements.txt  
 
File  ./thermal_measurements.txt  read in with  22  records  
File  ./thellier_measurements.txt  read in with  55  records  
File  ./af_measurements.txt  read in with  14  records  
All records stored in  ./magic_measurements.txt  
 
 

The data in these files can be plotted and interpreted with dmag_magic.py, zeq_magic.py, or thellier_magic.py depending on the experiment.

Note that there are more examples of data file formats and import schemes in the sections on anisotropy of anhysteretic and thermal remanences.

sufar4_asc_magic.py

[Essentials Chapter 13]; [MagIC Database] [sufar4_asc_magic docs]

The Agico Kappabridge instrument comes with the SUFAR program, which makes the measurements and saves the data in a txt file like that in sufar4-asc_magic_example.txt in the sufar4_asc_magic directory. These data were measured on a KLY4S instrument in spinning mode. Import them into the MagIC format:

$ sufar4_asc_magic.py -f sufar4-asc_magic_example.txt -loc U1356A -spc 0 -ncn 5  
anisotropy tensors put in  ./rmag_anisotropy.txt  
bulk measurements put in  ./magic_measurements.txt  
specimen info put in  ./er_specimens.txt  
sample info put in  ./er_samples.txt  
site info put in  ./er_sites.txt  
 
         You can now import your data into the Magic Console and complete data entry,  
         for example the site locations, lithologies, etc. plotting can be done  
          with aniso_magic.py

tdt_magic.py

[Essentials Chapter 10] and [MagIC] [tdt_magic docs]

This program imports the default data format for the ThellierTool Program of Leonhardt et al. (2004). After importing, the data can be viewed with thellier_magic.py.

The file format of the ThellierTool .tdt format is:

Thellier-tdt  
50.12 0 0 0 0  
2396D 0.00 1821.50 4.6 49.4  
2396D 200.00 1467.5 7.4 44.5  
2396D 200.11 1808.3 4.3 34.4  
2396D 200.13 1421.89 6.1 46  
2396D 230.00 1317.69 6.9 45.3  
2396D 230.11 1711.2 4.3 32.9  
2396D 260.00 1150.89 8.4 44.7  
2396D 200.12 1475.92 4.3 34.7  
...

The first two lines are headers. The first column of the second line is the applied field in μT; the rest of the line gives the azimuth and plunge of the fiducial line and the dip direction and dip of the bedding plane. The data columns are: specimen, treatment, intensity, declination and inclination. The intensity is the magnetization in 10^-3 A/m. The treatment is of the form XXX.YY where XXX is the temperature and YY is 00, 11, 12, 13, 14 (or 0, 1, 2, 3, 4): 0 is the NRM step, 11/1 is the pTRM acquisition step, 12/2 is the pTRM check step, 13/3 is the pTRM tail check and 14/4 is the additivity check.

Use the program tdt_magic.py to import the ThellierTool tdt formatted file tdt_magic_example.dat into the MagIC format. The field was 50.12 μT applied along the 0,0 direction.

$ tdt_magic.py -f tdt_magic_example.dat -loc TEST -dc 50.12 0 0  
results put in  magic_measurements.txt

5.2.67 measurements_normalize.py

[MagIC Database] [measurements_normalize docs]

This program takes specimen weights or volumes from an er_specimens.txt formatted file and normalizes the magnetic moment data to make magnetizations. Weights must be in kilograms and volumes in m3.

Use the program measurements_normalize.py to generate weight-normalized magnetizations for the data in the file magic_measurements.txt in the measurements_normalize directory, using the specimen weights in the file specimen_weights.txt. First pull out the IRM acquisition data with magic_select.py, then normalize:

$ magic_select.py -f magic_measurements.txt \  
     -key magic_method_codes LP-IRM has -F irm_measurements.txt  
205  records written to  ./irm_measurements.txt  
$ measurements_normalize.py -f irm_measurements.txt \  
      -fsp specimen_weights.txt -F irm_norm_measurements.txt  
Data saved in  ./irm_norm_measurements.txt
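The normalization itself is just unit arithmetic, so you can sanity-check any record by hand; with made-up numbers:

# moment in Am^2 divided by weight in kg gives mass-normalized
# magnetization in Am^2/kg (dividing by volume in m^3 would give A/m)
moment, weight = 2.5e-8, 1.3e-5   # made-up values
print(moment / weight)            # about 1.9e-3 Am^2/kg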

5.2.68 mk_redo.py

MagIC [mk_redo docs]

The programs zeq_magic.py and thellier_magic.py as well as the two GUIs Demag GUI and Thellier GUI make pmag_specimen formatted files which can be used for further data reduction either by plotting or contributing to site means, etc. Sometimes it is useful to redo the calculation, using anisotropy corrections or a change in coordinate systems, etc. The re-doing of these specimen level calculations is handled by, for example, zeq_magic_redo.py or thellier_magic_redo.py. These programs use magic_measurements formatted files and perform calculations as dictated by a “redo” file which has the specimen name, bounds for the calculation and, in the case of demagnetization data interpretation, the type of calculation desired (best-fit lines with MagIC method code DE-BFL, best-fit planes with DE-BFP, etc.).

Make “redo” files from the existing pmag_specimen formatted file in the data files downloaded from the MagIC website as in download_magic.py and examine them as follows:

$ mk_redo.py  
$ cat zeq_redo  
sr01a1 DE-BFL 473 823 A  
sr01a2 DE-BFL 473 848 A  
sr01c2 DE-BFL 473 823 A  
sr01d1 DE-BFL 373 798 A  
.......  
$ cat thellier_redo  
sr01a1 573 823  
sr01a2 573 848  
sr01c2 673 823  
sr01d1 373 823  
 
.....

Note that the temperature steps are in kelvin and the AF demagnetization steps are in tesla, as required by the MagIC database.

5.2.69 nrm_specimens_magic.py

MagIC [nrm_specimens_magic docs]

After making NRM measurements, it is frequently useful to look at the directions in equal area projection to get a “quick look” at the results before proceeding to stepwise demagnetization. The data in the magic_measurements files are usually in specimen coordinates - not geographic, so we need a way to rotate the data into geographic and/or stratigraphic coordinates and save them in a pmag_specimens formatted file for plotting with eqarea_magic.py. The program nrm_specimens_magic.py will do this for you.

Get into the directory you made for the download_magic.py example.
Use nrm_specimens_magic.py to convert the NRM measurements in magic_measurements.txt to geographic coordinates saved in a file named nrm_specimens.txt. The orientation data are in the file er_samples.txt. Then plot the specimen directions for the entire study using eqarea_magic.py:

$ nrm_specimens_magic.py -crd g  
er_samples.txt  read in with  271  records  
Data saved in  nrm_specimens.txt  
 
$ eqarea_magic.py -f nrm_specimens.txt  
177  records read from  ./nrm_specimens.txt  
All  
sr01a1 SO-SUN   324.1    66.0  
sr01a2 SO-SUN   325.3    62.1  
sr01c2 SO-SUN   344.7    63.8  
sr01d1 SO-CMD-NORTH   327.8    64.5  
sr01e2 SO-SUN   333.7    67.0  
...  
 
 

The first command created a file nrm_specimens.txt and the second created an equal area projection of the NRM directions in geographic coordinates which should look like this:

PIC

5.2.70 orientation_magic.py

[Essentials Chapter 9] and [Preparing for MagIC] [orientation_magic docs]

Try to import the file orientation_example.txt into the er_samples.txt and er_sites.txt files using orientation_magic.py. Click here for details about the orient.txt file format. It has field information for a few sites. The samples were oriented with a Pomeroy orientation device (the default) and it is desirable to calculate the magnetic declination from the IGRF at the time of sampling (also the default). Sample names follow the rule that the sample is designated by a letter at the end of the site name (convention #1 - which is the default). So we do this by:

orientation_magic.py -f orient_example.txt  
saving data...  
Data saved in  ./er_samples.txt  and  ./er_sites.txt  

5.2.71 parse_measurements.py

This program reads in the magic_measurements.txt file and creates an er_specimens.txt file. The specimen volumes and/or weights can then be put in columns labeled specimen_volume and specimen_weight respectively. Volumes must be in m3 and weights in kg. (Yes you can do the math...).

Try this out on the magic_measurements.txt file created in the irmaq_magic.py example. Pretend you have a bunch of specimen weights you want to use to normalize the NRM with the program measurements_normalize.py.

$ parse_measurements.py  
site record in er_sites table not found for:  318-U1359A-002H-1-W-109  
site record in er_sites table not found for:  318-U1359A-002H-2-W-135  
site record in er_sites table not found for:  318-U1359A-002H-3-W-45  
site record in er_sites table not found for:  318-U1359A-002H-4-W-65  
.....

The program appears a bit flustered because you have no er_sites.txt file in this directory. If you DID, you would overwrite whatever site name was in that file onto the specimen table. This allows you to carry the changes in that table through to the specimen table (see orientation_magic.py.)

5.2.72 pca.py

[Essentials Chapter 11] [pca docs]

This program calculates best-fit lines, planes or Fisher averages through selected treatment steps. The file format is a simple space delimited file with specimen name, treatment step, intensity, declination and inclination. Calculate the best-fit line through the first ten treatment steps in the data file zeq_example.dat:

 $ pca.py -dir L 1 10 -f zeq_example.dat  
eba24a DE-BFL  
0 0.00 339.9 57.9 9.2830e-05  
1 2.50 325.7 49.1 7.5820e-05  
...  
eba24a DE-BFL 10    2.50  70.00    8.8   334.9    51.5

According to the help message, this is: specimen name, calculation type, N, beg, end, MAD, declination and inclination. The calculation type is the MagIC method code for best-fit lines (see Essentials Appendix ??.)
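The best-fit line is the principal eigenvector of the centered demagnetization data (Kirschvink, 1980), and the MAD follows from the eigenvalues. A NumPy sketch of the calculation for the first ten steps (assuming the column order given above):

import numpy as np

# columns of zeq_example.dat: specimen, step, intensity, declination, inclination
intensity, dec, inc = np.loadtxt('zeq_example.dat', usecols=(2, 3, 4), unpack=True)
d, i = np.radians(dec[:10]), np.radians(inc[:10])
X = (intensity[:10] * np.array([np.cos(i) * np.cos(d),
                                np.cos(i) * np.sin(d),
                                np.sin(i)])).T
X = X - X.mean(axis=0)              # center the vector endpoints
tau, V = np.linalg.eigh(X.T @ X)    # eigenvalues in ascending order
v1 = V[:, -1]                       # principal direction
if v1 @ (X[0] - X[-1]) < 0:         # orient from the first toward the last step
    v1 = -v1
mad = np.degrees(np.arctan(np.sqrt((tau[0] + tau[1]) / tau[2])))
print('dec %7.1f inc %7.1f MAD %5.1f' %
      (np.degrees(np.arctan2(v1[1], v1[0])) % 360.,
       np.degrees(np.arcsin(v1[2])), mad))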

5.2.73 plotxy.py

[plotxy docs]

This program makes a simple XY plot from any arbitrary input file. You can specify which columns are X and Y, bounds on the columns, symbol color and size, axis labels and other options.

$ plotxy.py -f plotXY_example.txt

5.2.74 plot_cdf.py

[plot_cdf docs]

This program plots cumulative distribution functions of a single column of input data. Use as an example, a normally distributed set of 1000 data points generated by gaussian.py. Use the defaults of zero mean with a standard deviation of 1.

$ gaussian.py -n 1000 > gaussian.out  
  $ plot_cdf.py -f gaussian.out  
S[a]ve  plot, <Return> to quit a  
1  saved in  CDF_.svg

which should have generated a plot something like this:

PIC
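A cumulative distribution is just the sorted data plotted against rank over N, so the same figure takes a few lines of matplotlib. A sketch:

import numpy as np
import matplotlib.pyplot as plt

data = np.sort(np.loadtxt('gaussian.out'))
frac = np.arange(1, len(data) + 1) / float(len(data))
plt.step(data, frac)                # the empirical CDF
plt.xlabel('x')
plt.ylabel('Cumulative fraction')
plt.savefig('CDF_.svg')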

5.2.75 plot_magic_keys.py

MagIC [plot_magic_keys docs]

This program will plot any column in a specified MagIC formatted file against any other column in the same file.

5.2.76 plot_map_pts.py

high resolution instructions [plot_map_pts docs]

NOTE: This program only works if you have installed basemap, which comes with the full version of Canopy Python but may not be included in free versions. plot_map_pts.py will generate a simple map of the data points in a file (lon lat) on the desired projection. If you want to use high resolution or the etopo20 meshgrid (-etp option), you must install the etopo20 data files, which can be done with install_etopo.py. There are many options, so check the documentation (-h option) for details.

Draw a set of 200 uniformly distributed points on the globe with the program uniform.py. Plot these on an orthographic projection with the viewing point at a longitude of 0 and a latitude of 30N. If you have installed the high resolution data sets, use the -etp option to plot the topographic mesh. Plot the points as large (size = 10) white dots. Also note that for some reason the .svg output does not play nicely with Illustrator.

$ uniform.py -n 200 >uniform.out  
$ plot_map_pts.py -f uniform.out -prj ortho -eye 30 0 -etp -sym wo 10  
please wait to draw points  
 S[a]ve to save plot, Return to quit:  a  
1  saved in  Map_PTS.pdf

which should produce a plot similar to this:

PIC

5.2.77 plotdi_a.py

[Essentials Chapter 11] [plotdi_a docs]

Place the following declination, inclination, α95 data in a space delimited file called plotdi_a_example.dat:

Dec    Inc    α95
39.1   37.5   5.0
30.3   36.2   15
29.9   45.6   7
34.6   28.4   3
Make a plot of these data using plotdi_a.py:

$ plotdi_a.py -f plotdi_a_example.dat  
 S[a]ve to save plot, [q]uit, Return to continue:  a  
1  saved in  eq.svg

which makes the plot:

PIC

5.2.78 pmag_results_extract.py

[MagIC] [pmag_results_extract docs]

This program extracts a tab delimited txt file from a pmag_results formatted file. This allows you to publish data tables that have identical data to the data uploaded into the MagIC database. Try this out in the directory created for the download_magic.py example:

$ pmag_results_extract.py -f pmag_results.txt  
data saved in:  ./Directions.txt ./Intensities.txt ./SiteNfo.txt

This creates tab delimited files that can be incorporated into a paper, for example. You can also export the information in LaTeX format if you prefer.

5.2.79 pt_rot.py

[Essentials Chapter 16] [pt_rot docs]

This program reads in a file with location, age and destination plate and rotates the data into the destination plate coordinates using the rotations and methods in Essentials Appendix A.3.5. Alternatively, you can supply your own rotation parameters with the -ff option.

First, save the location of Cincinnati (39.1 latitude, -84.7 longitude) in a file called pt_rot.input using either the UNIX cat function or your favorite text editor. Save the data as a lon/lat pair separated by a space. Then use the program pt_rot.py to rotate Cincinnati into (South) African (saf) coordinates at 80 Ma. Plot both points using plot_map_pts.py. Save the lon, lat, plate, age, destination_plate information in a file called pt_rot_example.dat, as in the following:

$ awk ’{print $1,$2}’ pt_rot.input > lon_lat  
$ plot_map_pts.py -prj moll -f lon_lat -sym g^ 10 -R -B -eye 0 0  
please wait to draw points  
 S[a]ve to save plot, Return to quit:  a  
1  saved in  Map_PTS.pdf  
$ pt_rot.py -f pt_rot.input pt_rot_example.dat  
299.763594614 34.6088989286  
$ pt_rot.py -f pt_rot_example.dat >pt_rot.out  
$ plot_map_pts.py -f pt_rot.out -sym ro 10  
please wait to draw points  
 S[a]ve to save plot, Return to quit:  a  
1  saved in  Map_PTS.pdf

The two plots will look like these:

PIC

Now, use the program to rotate a selection of North American poles from 180-200 Ma (in the file nam_180-200.txt in the pt_rot directory) to Pangea A coordinates (finite rotation pole and angle in nam_panA.frp). Note that plot_map_pts.py reads in longitude/latitude pairs, while pt_rot.py reads in latitude/longitude pairs.

$ plot_map_pts.py -f nam_180-200.txt -prj ortho -eye 60 90 -sym c^ 10  
please wait to draw points  
 S[a]ve to save plot, Return to quit:  a  
1  saved in  Map_PTS.pdf  
 
$ pt_rot.py -ff nam_180-200.txt nam_panA.frp  
165.005966833 75.1055653383  
272.368103052 83.1542552681  
281.233199772 63.3738811186  
260.36973769 70.7986591555  
250.91392318 70.3067026787  
251.438099128 65.9369002914  
255.371917029 63.4115373574  
204.92422073 71.7944043036  
.....  
$ pt_rot.py -ff nam_180-200.txt nam_panA.frp > pt_rot_panA.out  
$ plot_map_pts.py -f pt_rot_panA.out -prj ortho -eye 60 90 -sym ro 10  
please wait to draw points  
 S[a]ve to save plot, Return to quit:  a  
1  saved in  Map_PTS.pdf

These plots should look like this:

PIC
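Each rotation is ordinary spherical trigonometry: convert the point and the Euler pole to Cartesian coordinates and apply Rodrigues' rotation formula. A self-contained sketch (the pole here is made up for illustration and is not one of the Essentials Appendix A.3.5 rotations):

import numpy as np

def pt_rotate(pole_lon, pole_lat, omega, lon, lat):
    """Rotate (lon, lat) by omega degrees about the given Euler pole."""
    def to_xyz(lon, lat):
        lon, lat = np.radians(lon), np.radians(lat)
        return np.array([np.cos(lat) * np.cos(lon),
                         np.cos(lat) * np.sin(lon),
                         np.sin(lat)])
    e, p, w = to_xyz(pole_lon, pole_lat), to_xyz(lon, lat), np.radians(omega)
    # Rodrigues' rotation formula
    r = (p * np.cos(w) + np.cross(e, p) * np.sin(w)
         + e * (e @ p) * (1. - np.cos(w)))
    return (np.degrees(np.arctan2(r[1], r[0])) % 360., np.degrees(np.arcsin(r[2])))

# rotate Cincinnati (-84.7E, 39.1N) by 30 degrees about a pole at (0E, 60N)
print(pt_rotate(0., 60., 30., -84.7, 39.1))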

5.2.80 qqplot.py

[Essentials Appendix B.1.5] [qqplot docs]

Makes a quantile-quantile plot of the input data file against a normal distribution. The plot shows the mean, standard deviation and the D statistic, as well as the Dc statistic expected from a normal distribution. Use qqplot.py to test whether the data generated with gaussian.py are in fact normally distributed. (It will be 95% of the time!).

$ gaussian.py  -F  gauss.out  
$ qqplot.py -f gauss.out  
mean,sigma, d, Dc  
0.02069909 0.849042146783 0.0528626896977 0.0886  
 S[a]ve to save plot, [q]uit without saving:  a  
1  saved in  qq.svg

which generates this plot:

PIC

5.2.81 quick_hyst.py

[Essentials Chapter 5] and [MagIC Database] [quick_hyst docs]

hysteresis_magic.py makes plots of hysteresis loops and calculates the main hysteresis parameters. For a quick look with no interpretation, you can use quick_hyst.py. Try it out on the data file hysteresis_magic_example.dat in the hysteresis_magic directory.

$ quick_hyst.py -f hysteresis_magic_example.dat -fmt svg  
IS06a-1 1 out of  8  
working on t:  273  
S[a]ve plots, [s]pecimen name, [q]uit, <return> to continue  
 a  
1  saved in  LO:__SI:_IS06_SA:_IS06a_SP:_IS06a-1_TY:_hyst_.svg  
IS06a-1 1 out of  8  
working on t:  273  
S[a]ve plots, [s]pecimen name, [q]uit, <return> to continue  
 q  
Good bye

which makes a plot like this:

PIC

5.2.82 revtest.py

[Essentials Chapter 12] [revtest docs]

Use the Tauxe et al. (2010) reversals test implemented in revtest.py to test whether the two modes in the data set revtest_example.txt are antipodal or not:

$ revtest.py -f revtest_example.txt  
doing first mode, be patient  
doing second mode, be patient  
s[a]ve plots, [q]uit: a  
2  saved in  REV_Y.svg  
1  saved in  REV_X.svg  
3  saved in  REV_Z.svg  

which produces these plots:

PIC

Because the 95% confidence bounds for each component overlap each other, the two directions are not significantly different.

5.2.83 revtest_magic.py

[Essentials Chapter 12] and MagIC [revtest_magic docs]

Same as revtest.py but for pmag_sites MagIC formatted files. Try it out on the data file revtest_sites.txt. Then try using customize_criteria.py to change or create a pmag_criteria.txt file that fits your needs and redo the reversals test using only the selected sites.

$ revtest_magic.py -f revtest_magic_example.txt  
doing first mode, be patient  
doing second mode, be patient  
s[a]ve plots, [q]uit: q  
$ revtest_magic.py -f revtest_sites.txt -exc

5.2.84 revtest_mm1990.py

[revtest_mm1990 docs]

This program tests whether two sets of observations (one with normal and one with reverse polarity) could have been drawn from distributions that are 180° apart using the McFadden and McElhinny (1990) implementation of the Watson (1983) V statistic test for a common mean. The implementation of the V statistic test in this program is the same as in watsons_v.py. Use revtest_mm1990.py to perform a reversal test on data from the Ao et al. (2013) study of Early Pleistocene fluvio-lacustrine sediments of the Nihewan Basin of North China.

Let’s plot the combined data set Ao_etal2013_combined.txt using eqarea.py:

$ eqarea.py -f Ao_etal2013_combined.txt

PIC

Conduct a reversal test between the normal directions Ao_etal2013_norm.txt and reversed directions Ao_etal2013_rev.txt:

$ revtest_mm1990.py -f Ao_etal2013_norm.txt -f2 Ao_etal2013_rev.txt  
be patient, your computer is doing 5000 simulations...  
 
Results of Watson V test:  
 
Watson’s V:           2.7  
Critical value of V:  6.0  
‘‘Pass‘‘: Since V is less than Vcrit, the null hypothesis that the  
 two populations are drawn from distributions that share a  
 common mean direction (antipodal to one another) cannot be rejected.  
 
M&M1990 classification:  
 
Angle between data set means: 2.9  
Critical angle for M&M1990:   4.3  
The McFadden and McElhinny (1990) classification for this test is: ‘A’

As with watsons_v.py, this program produces a plot that displays the cumulative distribution (CDF) of the Watson V values resulting from the Monte Carlo simulation. In the plot below, the red line is the CDF, the green solid line is the value of the V statistic calculated for the two populations and the dashed blue line is the critical value for the V statistic. In this example, the data are consistent with the two directional groups sharing a common mean as evidenced by V (green solid line) being a lower value than the critical value of V (dashed blue line).

PIC

5.2.85 s_eigs.py

[Essentials Chapter 13] [s_eigs docs]

Convert the .s format data in s_eigs_example.dat to eigenvalues and eigenvectors:

$ s_eigs.py -f s_eigs_example.dat  
0.33127186 239.53  44.70 0.33351338 126.62  21.47 0.33521473  19.03  37.54  
0.33177859 281.12   6.18 0.33218277 169.79  73.43 0.33603862  12.82  15.32  
0.33046979 283.57  27.30 0.33328307 118.37  61.91 0.33624712  16.75   6.13  
...
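The conversion just rebuilds the symmetric susceptibility tensor from the six .s elements (ordered x11 x22 x33 x12 x23 x13 in the PmagPy convention) and diagonalizes it. A NumPy sketch:

import numpy as np

for s in np.loadtxt('s_eigs_example.dat', usecols=range(6)):
    x11, x22, x33, x12, x23, x13 = s
    K = np.array([[x11, x12, x13],
                  [x12, x22, x23],
                  [x13, x23, x33]])
    tau, V = np.linalg.eigh(K)       # eigenvalues in ascending order
    out = []
    for t, v in zip(tau, V.T):
        if v[2] < 0:                 # eigenvectors are bidirectional;
            v = -v                   # take the lower hemisphere direction
        out.append('%10.8f %6.2f %6.2f' %
                   (t, np.degrees(np.arctan2(v[1], v[0])) % 360.,
                    np.degrees(np.arcsin(v[2]))))
    print('  '.join(out))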

5.2.86 s_geo.py

[Essentials Chapter 13] [s_geo docs]

Rotate the .s data into geographic coordinates using s_geo.py. The input format is the 6 tensor elements of s, followed by the azimuth and plunge of the X axis.

$ s_geo.py -f s_geo_example.dat  
0.33412680 0.33282733 0.33304587 -0.00015289 0.00124843 0.00135721  
0.33556300 0.33198264 0.33245432 0.00087259 0.00024141 0.00096166  
0.33584908 0.33140627 0.33274469 0.00131845 0.00118816 0.00002986  
...

5.2.87 s_hext.py

[Essentials Chapter 13] [s_hext docs]

Take the output from the s_geo.py example and calculate Hext statistics:

$ s_geo.py -f s_geo_example.dat | s_hext.py  
F =  5.79 F12 =  3.55 F23 =  3.66  
N =  8  sigma =  0.000641809950  
0.33505     5.3    14.7    25.5   124.5    61.7    13.3   268.8    23.6  
0.33334   124.5    61.7    25.1   268.8    23.6    25.5     5.3    14.7  
0.33161   268.8    23.6    13.3     5.3    14.7    25.1   124.5    61.7  

5.2.88 s_tilt.py

[Essentials Chapter 13] [s_tilt docs]

Rotate the .s data saved in s_tilt_example.dat into stratigraphic coordinates:

$ s_tilt.py -f s_tilt_example.dat  
0.33455709 0.33192658 0.33351630 -0.00043562 0.00092778 0.00105006  
0.33585501 0.33191565 0.33222935 0.00055959 -0.00005316 0.00064731  
0.33586669 0.33084923 0.33328408 0.00142267 0.00013233 0.00009202  
0.33488664 0.33138493 0.33372843 -0.00056597 -0.00039086 0.00004873  
.  
.

5.2.89 s_magic.py

[MagIC] [MagIC Database] [s_magic docs]

Import the .s format file output from the s_tilt.py example into an rmag_anisotropy formatted file. Files of the rmag_anisotropy format can be plotted with aniso_magic.py. To see how this works, use the program s_magic.py as follows:

$ s_tilt.py -f s_tilt_example.dat >s_magic_example.dat  
$ s_magic.py -f s_magic_example.dat  
data saved in  ./rmag_anisotropy.txt

This creates the output file rmag_anisotropy.txt by default, which can be plotted with the program aniso_magic.py.

5.2.90 scalc.py

[Essentials Chapter 14] [scalc docs]

Calculate the S scatter statistic for a set of VGPs saved in scalc_example.txt. Repeat using a Vandamme (Vandamme et al., 1994) variable cutoff. Then get the bootstrap bounds on the calculation.

$ scalc.py -f scalc_example.txt  
99    19.5     90.0  
$ scalc.py -f scalc_example.txt -v  
89    15.2     32.3  
$ scalc.py -f scalc_example.txt -v -b  
89    15.2    13.3     16.8    32.3

Using no cutoff, the VGP scatter was 19.5. The Vandamme co-latitude cutoff was 32.3, which threw out 10 points and gave a scatter of 15.2.
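The S statistic is the root-mean-square angular deviation of the VGPs, S^2 = Σ Δi^2 / (N-1), where Δi is the angle between the i-th VGP and the mean pole. Here is a sketch with a fixed cutoff (the Vandamme cutoff is instead found iteratively; this assumes scalc_example.txt holds VGP longitude/latitude columns):

import numpy as np

def vgp_scatter(plons, plats, cutoff=90.):
    lam, phi = np.radians(plats), np.radians(plons)
    xyz = np.column_stack((np.cos(lam) * np.cos(phi),
                           np.cos(lam) * np.sin(phi),
                           np.sin(lam)))
    mean = xyz.sum(axis=0)
    mean = mean / np.linalg.norm(mean)                   # mean pole
    delta = np.degrees(np.arccos(np.clip(xyz @ mean, -1., 1.)))
    keep = delta <= cutoff                               # co-latitude cutoff
    n = keep.sum()
    return n, np.sqrt((delta[keep] ** 2).sum() / (n - 1.))

plons, plats = np.loadtxt('scalc_example.txt', unpack=True)
print(vgp_scatter(plons, plats))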

5.2.91 scalc_magic.py

[Essentials Chapter 14] [scalc_magic docs]

This is the same as scalc.py but works on pmag_results formatted files. Try it out on the pmag_results.txt file in the directory created for the download_magic.py example. Use a VGP co-latitude cutoff of 30.

$ scalc_magic.py -f pmag_results.txt  -c 30  
13    17.8     30.0  

5.2.92 site_edit_magic.py

[site_edit_magic docs]

There are frequently outlier directions when looking at data at the site level. Some paleomagnetists throw out the entire site, while some arbitrarily discard individual samples, assuming that the orientations were ‘bad’. Lawrence et al. (2009) suggested a different approach in which common causes of misorientation are specifically tested. For example, if the arrow indicating drill direction (see orientation conventions) was drawn the wrong way on the sample in the field, the direction would be off in a predictable way. Similarly (and more commonly), if an extraneous mark on the sample was used instead of the correct brass mark, the directions will fall along a small circle which passes through the correct direction. The program site_edit_magic.py plots data by site on an equal area projection, allows the user to select a particular specimen direction and plots the various pathological behaviors expected from common misorientations. If a stray direction can reasonably be assigned to a particular cause, then that sample orientation can be marked as ‘bad’ with a note as to the reason for exclusion; the directions from that sample can then be excluded from the site mean and the reason documented in the MagIC database.

Try this out using the data downloaded in the download_magic.py example.

$ site_edit_magic.py  
....  
sr11  
specimen, dec, inc, n_meas/MAD,| method codes  
sr11a1:   340.6    74.1 8 / 0.9 | LP-DIR-AF:DE-BFL  
sr11b1:   353.9    70.3 10 / 2.1 | LP-DIR-AF:DE-BFL  
sr11c1:   342.6    69.4 8 / 1.3 | LP-DIR-AF:DE-BFL  
sr11e2:   341.5    73.4 8 / 0.9 | LP-DIR-AF:DE-BFL  
sr11f1:   136.6     9.4 14 / 3.5 | LP-DIR-T:DE-BFP  
sr11g2:   345.5    77.0 8 / 0.8 | LP-DIR-AF:DE-BFL  
sr11i1:    77.1     0.8 12 / 3.5 | LP-DIR-AF:DE-BFP  
sr11j3:    97.4   -13.2 8 / 11.8 | LP-DIR-AF:DE-BFL  
 
 Site lines planes  kappa   a95   dec   inc  
sr11 6  2       5     28.4     18.0     73.6  6.7445  
s[a]ve plot, [q]uit, [e]dit specimens, [p]revious site, <return> to continue:  
 e  
Enter name of specimen to check: sr11j3  
Triangle: wrong arrow for drill direction.  
Delta: wrong end of compass.  
Small circle:  wrong mark on sample. [cyan upper hemisphere]  
Mark this sample as bad? y/[n]  n  
Mark another sample, this site? y/[n]  n  
.....  
sr20  
no orientation data for  sr20j  
specimen, dec, inc, n_meas/MAD,| method codes  
sr20c1:   339.1    56.2 9 / 2.1 | LP-DIR-T:DE-BFL  
sr20d2:   341.9    60.3 9 / 1.7 | LP-DIR-AF:DE-BFL  
sr20e1:   332.1    55.3 10 / 1.0 | LP-DIR-AF:DE-BFL  
sr20f1:   337.1    54.0 9 / 3.7 | LP-DIR-T:DE-BFL  
sr20g1:   340.1    60.2 9 / 1.6 | LP-DIR-T:DE-BFL  
sr20i1:   330.4    31.9 11 / 0.6 | LP-DIR-AF:DE-BFL  
sr20i2:   330.7    31.1 9 / 2.3 | LP-DIR-T:DE-BFL  
 
 Site lines planes  kappa   a95   dec   inc  
sr20 7  0      38      9.9    335.1     50.0  6.8434  
s[a]ve plot, [q]uit, [e]dit specimens, [p]revious site, <return> to continue:  
 e  
Enter name of specimen to check: sr20i1  
Triangle: wrong arrow for drill direction.  
Delta: wrong end of compass.  
Small circle:  wrong mark on sample. [cyan upper hemisphere]  
Mark this sample as bad? y/[n]  y  
Reason: [1] broke, [2] wrong drill direction, [3] wrong compass direction,  
[4] bad mark, [5] displaced block [6] other 2  
Mark another sample, this site? y/[n]  n

Compare the behaviors for site sr11, sample sr11j, and site sr20, sample sr20i:

[figure: equal area plots for sites sr11 and sr20, showing the expected misorientation pathologies for the selected specimens]

The latter plausibly fits the “wrong mark” hypothesis, while the former does not fit any of the common pathologies. The sample orientation record in the er_samples.txt file now carries a note that it is ‘bad’, and the sample will be ignored by the PmagPy programs when calculating site means.

5.2.93 specimens_results_magic.py

MagIC [specimens_results_magic docs]

Starting from the magnetometer output files, paleomagnetists interpret the data in terms of vectors, using software like zeq_magic.py or thellier_magic.py. The interpretations from these programs are stored in pmag_specimen formatted files. Once a pmag_specimens format file has been created, the data are usually averaged by sample and/or by site and converted into V[A]DMs and/or VGPs, then put in a pmag_results formatted file along with whatever location and age information is available. Data must be selected or rejected according to some criteria at each level (for example, the specimen MAD must be less than some value, or the site κ must be greater than some value). This onerous task can be accomplished using the program specimens_results_magic.py. This program has many optional features, so the reader is encouraged to look at the documentation.

Test it out using the data files created in the site_edit_magic.py example. Mark some of the samples as having ‘bad’ orientations so that their interpretations are excluded. Change the selection criteria using customize_criteria.py. Then generate a new pmag_specimens.txt file using mk_redo.py, zeq_magic_redo.py and combine_magic.py. Then do the averaging by site with specimens_results_magic.py. The age bounds of the data are 0-5 Ma (-age 0 5 Ma). Use the existing (or modified) criteria (-exc option). There are some paleointensity data, so use the current site latitude for the VADM calculation (-lat option). Calculate the data in the geographic coordinate system (-crd g). Then extract the data into tab delimited tables using pmag_results_extract.py.

$ mk_redo.py  
$ zeq_magic_redo.py -crd g  
Processing  173  specimens - please wait  
Recalculated specimen data stored in  ./zeq_specimens.txt  
$ combine_magic.py -F pmag_specimens.txt -f zeq_specimens.txt thellier_specimens.txt  
File  ./zeq_specimens.txt  read in with  173  records  
File  ./thellier_specimens.txt  read in with  50  records  
All records stored in  ./pmag_specimens.txt  
$ specimens_results_magic.py -age 0 5 Ma -exc -lat -crd g  
Acceptance criteria read in from  pmag_criteria.txt  
 
 Pmag Criteria stored in  pmag_criteria.txt  
 
 sites written to  pmag_sites.txt  
 results written to  pmag_results.txt  
$ pmag_results_extract.py -fa er_ages.txt  
data saved in:  ./Directions.txt ./Intensities.txt ./SiteNfo.txt

Note that this procedure is more or less automated in Pmag GUI.

5.2.94 stats.py

[Essentials Chapter 11] [stats docs]

Calculates Gaussian statistics for sets of data. Calculate the mean of the data generated in the gaussian.py example and saved in gauss.out:

$ stats.py -f gauss.out  
100 10.12243251 1012.243251 2.79670530387 27.6287868663

which according to the help message is:

      N, mean, sum, sigma, (%) , stderr, 95% conf.  
      where sigma is the standard deviation  
      where % is sigma as percentage of the mean  
      stderr is the standard error and  
      95% conf.=  1.96*sigma/sqrt(N)
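These quantities are easy to reproduce with NumPy if you want to check; a quick sketch (whether stats.py normalizes sigma by N or N − 1 is not shown in the help message, so ddof=1 is an assumption here):

import numpy as np

data = np.loadtxt('gauss.out')     # one number per line
N, mean, total = data.size, data.mean(), data.sum()
sigma = data.std(ddof=1)           # sample standard deviation
print N, mean, total, sigma, 100. * sigma / mean   # last column: sigma as % of mean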

5.2.95 strip_magic.py

[Essentials Chapter 15] and [MagIC]

Follow the instructions for download_magic.py but search for Tauxe and Hartl (1997). Download the smartbook (text file) and unpack it into a new directory using the download_magic.py command with the file zmab0094214tmp03.txt as the input file (in the Tauxe and Hartl directory). First run strip_magic.py to see what is available for plotting, then plot the intensity data versus depth (pos). Then plot the VGP latitudes versus age:

$ download_magic.py -f zmab0094214tmp03.txt  
working on:  ’er_locations’  
er_locations  data put in  ./er_locations.txt  
working on:  ’er_sites’  
er_sites  data put in  ./er_sites.txt  
working on:  ’er_samples’  
..........  
$ strip_magic.py  
available objects for plotting:  [’all’]  
available X plots:  [’age’, ’pos’]  
available Y plots:  [’lon’, ’int’, ’lat’]  
available method codes:  [’LP-PI-IRM’, ’LP-PI-REL’]  
$ strip_magic.py -x pos -y int -mcd LP-PI-IRM  
 S[a]ve to save plot, [q]uit without saving:  a  
1  saved in  strat.svg  
 $ strip_magic.py -x age -y lat  
 S[a]ve to save plot, [q]uit without saving:  a  
1  saved in  strat.svg

The last command made this plot:

[figure: VGP latitude versus age]

5.2.96 sundec.py

[Essentials Chapter 9] [sundec docs]

Use the program sundec.py to calculate the azimuth of the drill direction. You are located at 35 N and 33 E. The local time is three hours ahead of Universal Time. The shadow angle for the drilling direction was 68, measured at 16:09 on May 23, 1994.

The program sundec.py works either by interactive data entry or by reading from a file.

Save the following in a file called sundec_example.dat:

3 35 33 1994 5 23 16 9 68

which is:

ΔGMT lat lon year mon day hh mm shadow_angle

We can analyze this file with either:

$ sundec.py -f sundec_example.dat  
  154.2

or

$ sundec.py < sundec_example.dat  
  154.2

or by manual input:

$ sundec.py -i  
Time difference between Greenwich Mean Time (hrs to ADD to  
      GMT for local time):  
<cntl-D> to quit 3  
Year:  <cntl-D to quit> 1994  
Month:  5  
Day:  23  
hour:  16  
minute:  9  
Latitude of sampling site (negative in southern hemisphere): 35  
Longitude of sampling site (negative for western hemisphere): 33  
Shadow angle: 68  
  154.2  
Time difference between Greenwich Mean Time (hrs to ADD to  
      GMT for local time):  
<cntl-D> to quit ^D  
 Good-bye  

In any case, the declination is 154.2.

5.2.97 thellier_magic.py

[Essentials Chapter 10] and [MagIC] [thellier_magic docs]

NB: This program is no longer maintained. You should try thellier_gui.py instead.

To see how thellier_magic.py works, download the text file from a recent paper by Shaar et al. (2011) from the MagIC database:

http://earthref.org/MAGIC/doi/10.1016/j.epsl.2010.11.013

Unpack the data with download_magic.py. The original interpretations are stored in a file called pmag_specimens.txt, but thellier_magic.py looks for a specimen file called thellier_specimens.txt. To create such a file, use the program mk_redo.py followed by thellier_magic_redo.py. Then you can take a look at the data using thellier_magic.py and change some of the interpretations, for example adding one for the specimen s2s0-01 which was rejected by Shaar et al. (2011). This has both non-linear TRM acquisition and ATRM anisotropy data, so the corrections will have to be applied using thellier_magic_redo.py after exiting the thellier_magic.py program.

Here is a transcript of a session that does all this:

$ download_magic.py -f zmab0100159tmp01.txt  
working on:  ’er_locations’  
er_locations  data put in  ./er_locations.txt  
working on:  ’er_sites’  
er_sites  data put in  ./er_sites.txt  
working on:  ’er_samples’  
er_samples  data put in  ./er_samples.txt  
....  
$ mk_redo.py  
$ thellier_magic_redo.py  
Processing  112  specimens - please wait  
uncorrected thellier data saved in:  ./thellier_specimens.txt  
 
$ thellier_magic.py  
.....  
16     550   323.6    -4.9 4.370e-08     4.2  
17     560   322.1    -2.5 3.240e-08     4.9  
Looking up saved interpretation....  
Saved interpretation:  
specimen     Tmin  Tmax  N  lab_field  B_anc  b  q  f(coe)  Fvds  beta  MAD  Dang  Drats  Nptrm  Grade  R  MD%  sigma  Gmax  
 
s1p1-01    0  560 18 75.0 79.4 -1.059  54.2 0.919 0.897 0.015      2.0     1.5     1.7 8 A  0.998 -1 0.016     4.9  
 
pTRM direction=    210.8     85.0  MAD:     3.7  
 
      s[a]ve plot, set [b]ounds for calculation, [d]elete current interpretation,  
       [p]revious, [s]ample, [q]uit:  
 
Return for next specimen  
s  
Enter desired specimen name (or first part there of): s2s0-01  
s2s0-01 214 of  269  
index step Dec   Inc  Int       Gamma  
0     0   194.5   -25.2 3.380e-07  
1     100   194.1   -25.2 3.360e-07     0.0  
2     200   192.1   -25.4 3.330e-07    11.5  
3     250   194.8   -25.5 3.290e-07    76.7  
4     300   193.6   -25.3 3.070e-07    11.7  
5     350   191.6   -26.7 1.460e-07     8.1  
6     375   188.4   -33.6 3.420e-08     5.5  
7     405    36.9   -27.6 6.800e-09     6.1  
8     430    29.8   -12.9 2.930e-09     6.2  
Looking up saved interpretation....  
    None found :(  
 
               s[a]ve plot, set [b]ounds for calculation, [d]elete current interpretation, [p]revious, [s]ample, [q]uit:  
 
Return for next specimen  
b  
 
Enter index of first point for calculation:  [ 0 ]  
return to keep default  0  
Enter index  of last point for calculation:  [ 8 ]  
return to keep default  8  
Optimization terminated successfully.  
         Current function value: 0.000000  
         Iterations: 82  
         Function evaluations: 157  
Optimization terminated successfully.  
         Current function value: 0.000000  
         Iterations: 46  
         Function evaluations: 89  
Non-linear TRM corrected intensity=  79.7699845088  
Save this interpretation? [y]/n  
y  
s2s0-02 215 of  269  
index step Dec   Inc  Int       Gamma  
0     0   295.5    -9.9 6.560e-07  
1     100   304.9   -10.1 6.480e-07    88.7  
2     200   298.8   -10.5 6.420e-07    84.8  
3     250   299.8   -10.5 6.330e-07    87.9  
4     300   299.2   -10.5 5.860e-07    27.0  
5     350   295.1    -9.5 2.270e-07     4.5  
6     375   280.1   -21.7 4.940e-08     2.0  
7     405   169.3   -16.3 1.450e-08     2.1  
8     430   151.9   -15.5 1.070e-08     2.6  
Looking up saved interpretation....  
    None found :(  
 
               s[a]ve plot, set [b]ounds for calculation, [d]elete current interpretation, [p]revious, [s]ample, [q]uit:  
 
Return for next specimen  
q

You should have gotten plots that look like these:

[figure: Arai and Zijderveld plots for the new interpretation]

When you exit the program, you will have modified the thellier_specimens.txt file to include the new interpretation.

5.2.98 thellier_magic_redo.py

[Essentials Chapter 10] and [MagIC] [thellier_magic_redo docs]

This program is no longer maintained. This function is embedded in thellier_gui.py when you import a specimen file.

This program allows the recalculation of paleointensity experimental data derived from Thellier type experiments. It allows correction for remanence anisotropy using AARM or ATRM ellipsoids (stored in rmag_anisotropy format files; see [Essentials Chapter 13] and the aarm_magic.py and atrm_magic.py programs) and non-linear TRM corrections, if TRM acquisition data are available (see also trmaq_magic.py).

To apply the corrections to the data you re-interpreted in the thellier_magic.py example and create a new pmag_specimens.txt formatted file, use the program thellier_magic_redo.py. First, create a “redo” file using mk_redo.py, then re-do the calculations using thellier_magic_redo.py, including anisotropy and non-linear TRM corrections. Combine all the specimen files into one using combine_magic.py.

$ mk_redo.py -f thellier_specimens.txt  
$ thellier_magic_redo.py -ANI -NLT  
Processing  112  specimens - please wait  
WARNING: NO FTEST ON ANISOTROPY PERFORMED BECAUSE OF MISSING SIGMA - DOING CORRECTION ANYWAY  
....  
anisotropy corrected data saved in:  ./AC_specimens.txt  
non-linear TRM corrected data saved in:  ./NLT_specimens.txt  
$ combine_magic.py -F pmag_specimens.txt -f thellier_specimens.txt AC_specimens.txt NLT_specimens.txt  
File  ./thellier_specimens.txt  read in with  112  records  
File  ./AC_specimens.txt  read in with  112  records  
All records stored in  ./pmag_specimens.txt

5.2.99 tk03.py

[Essentials Chapter 16] [tk03 docs]

Sometimes it is useful to generate a distribution of synthetic geomagnetic field vectors that you might expect to find from paleosecular variation of the geomagnetic field. The program tk03.py generates distributions of field vectors from the PSV model of Tauxe and Kent (2004). Use this program to generate a set of vectors for a latitude of 30 and plot them with eqarea.py.

$ tk03.py -lat 30 >tk03.out  
$ eqarea.py -f tk03.out

which should give you a plot something like this:

[figure: equal area plot of the TK03 synthetic directions]

5.2.100 uniform.py

[uniform docs]

It is at times handy to be able to generate a uniformly distributed set of directions (or geographic locations). This is done using a technique described by Fisher et al. (1987).
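The trick is that declinations can be drawn uniformly from [0, 360), but inclinations must be drawn such that their sines are uniformly distributed on [-1, 1], which keeps the density of points constant over the sphere. A two-line NumPy sketch of the idea (not necessarily the exact implementation in uniform.py):

import numpy as np

decs = np.random.uniform(0., 360., 50)                        # uniform in azimuth
incs = np.degrees(np.arcsin(np.random.uniform(-1., 1., 50)))  # sin(inc) uniform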

Draw 50 directions from a uniform distribution and save to a file called uniform.out. Then plot the directions with eqarea.py:

$  uniform.py -F uniform.out -n 50  
$ eqarea.py -f uniform.out

which should yield a plot something (but not exactly) like this:

[figure: equal area plot of 50 uniformly distributed directions]

5.2.101 update_measurements.py

[MagIC Database] [update_measurements docs]

This program is no longer maintained. This function is embedded in Pmag GUI.

In the SIO laboratory, our default specimen naming scheme has a logical relationship to the sample and site names. Sometimes there is no simple relationship and the MagIC import procedure can’t parse the sample name into a site name, or the original concept of a site gets changed, for example if several lava flows turn out to be the same one. In any case, there must be a way to change the way data get averaged after the measurement data get converted to a magic_measurements formatted file. To do this, the sample-site relationship can be established in the er_samples.txt file, for example via the procedure described in the orientation_magic.py documentation. After the correct relationships are in the er_samples.txt file, they must be propagated throughout the rest of the MagIC tables, starting with the magic_measurements.txt file. The program update_measurements.py will do this at the measurements level. specimens_results_magic.py will then propagate the changes through to the pmag_results.txt file.

5.2.102 upload_magic.py

[MagIC Database] [upload_magic docs]

This function is embedded in Pmag GUI.

This program takes all the MagIC formatted files and puts them into a file which can be imported into the MagIC console software for uploading into the MagIC database. As an example, we can “repackage” the file downloaded in the download_magic.py example. You could re-interpret those data or fix records with errors, then re-upload the corrected file.

In the upload_magic directory in the example data_files directory, someone has entered all the site latitudes as longitudes and vice versa (surprisingly common!). You can fix this by swapping the headers sample_lat/sample_lon in the er_samples.txt file and site_lat/site_lon in the er_sites.txt file, then running specimens_results_magic.py to propagate the changes through to the results table. Be sure to set the criteria switch (-exc), the latitude switch (-lat), the geographic coordinate switch (-crd g) and the age switch (-age 0 5 Ma). Then re-package the files for uploading.

AFTER fixing the er_samples.txt and er_sites.txt files, do the following:

$ specimens_results_magic.py -exc -lat -age 0 5 Ma -crd g  
Acceptance criteria read in from  pmag_criteria.txt  
 
 Pmag Criteria stored in  pmag_criteria.txt  
 
 sites written to  pmag_sites.txt  
 results written to  pmag_results.txt  
 
 
$ upload_magic.py  
Removing:  [’citation_label’, ’compilation’, ’calculation_type’, ’average_n_lines’,  
 ’average_n_planes’, ’specimen_grade’, ’site_vgp_lat’, ’site_vgp_lon’, ’direction_type’,  
 ’specimen_Z’, ’magic_instrument_codes’, ’cooling_rate_corr’, ’cooling_rate_mcd’]  
./er_expeditions.txt is bad or non-existent - skipping  
file  ./er_locations.txt  successfully read in  
er_locations written to  ./upload.txt  
file  ./er_samples.txt  successfully read in  
only first orientation record from er_samples.txt read in  
er_samples written to  ./upload.txt  
file  ./er_specimens.txt  successfully read in  
only measurements that are used for interpretations  
.....  
now converting to dos file ’upload_dos.txt’  
Finished preparing upload file

5.2.103 vdm_b.py

[Essentials Chapter 2] [vdm_b docs]

Use this program to recover the original field intensity b from a Virtual Dipole Moment of 71.59 ZAm2 at a latitude of 22.

$ cat>vdm_b_example.dat  
71.59e21 22  
^D  
$ vdm_b.py -f vdm_b_example.dat  
 3.300e-05

Compare this answer with the original in the b_vdm.py example, noting that the original input was in microtesla, while this output is in tesla.
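The conversion is just the dipole field formula B = μ0 VDM sqrt(1 + 3 sin²λ)/(4π re³), with re the radius of the Earth. A quick NumPy check reproduces the answer:

import numpy as np

mu0, r_e = 4e-7 * np.pi, 6.371e6    # permeability of free space (T m/A), Earth radius (m)
vdm, lat = 71.59e21, 22.            # the VDM (Am^2) and latitude from the example
b = mu0 * vdm * np.sqrt(1. + 3. * np.sin(np.radians(lat)) ** 2) / (4. * np.pi * r_e ** 3)
print '%8.3e' % b                   # 3.300e-05 T, i.e., 33 microtesla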

5.2.104 vector_mean.py

[Essentials Chapter 2] [vector_mean docs]

Create a set of vector data with tk03.py for a latitude of 30N. Calculate the vector mean of these data.

$ tk03.py -lat 30 >vector_mean_example.dat  
$ vector_mean.py -f vector_mean_example.dat  
    1.3    49.6    2.289e+06 100
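Unlike a Fisher mean of unit vectors, a vector mean weights each direction by its intensity; the calculation just sums the cartesian components. A sketch (assuming dec, inc and intensity arrive as NumPy arrays, as in the file above):

import numpy as np

def vector_mean(dec, inc, intensity):
    d, i = np.radians(dec), np.radians(inc)
    # cartesian components, each weighted by its intensity
    X = (intensity * np.cos(i) * np.cos(d)).sum()
    Y = (intensity * np.cos(i) * np.sin(d)).sum()
    Z = (intensity * np.sin(i)).sum()
    R = np.sqrt(X ** 2 + Y ** 2 + Z ** 2)    # length of the resultant vector
    mdec = np.degrees(np.arctan2(Y, X)) % 360.
    minc = np.degrees(np.arcsin(Z / R))
    return mdec, minc, R, dec.size           # the four numbers printed above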

5.2.105 vgp_di.py

[Essentials Chapter 2] [vgp_di docs]

Use the program vgp_di.py to convert the following:





λp   ϕp    λs   ϕs
68   191   33   243

Put the data into a file vgp_di_example.dat, for example using cat on a *NIX operating system. Here is a transcript of one way to use the program, which spits out declination and inclination:

$ vgp_di.py -f vgp_di_example.dat  
  335.6    62.9

Just for good measure, you could try it out on the data in the di_vgp.py example. See if you get the right answer!
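Under the hood this is standard spherical trigonometry: the magnetic colatitude p of the site with respect to the pole gives the inclination through the dipole formula tan I = 2 cot p, and the declination follows from the spherical triangle connecting site, pole and geographic north. A sketch (all angles in degrees; checked against the example above):

import numpy as np

def vgp_di(plat, plon, slat, slon):
    rad, deg = np.radians, np.degrees
    # cosine of the magnetic colatitude p of the site w.r.t. the pole
    cosp = (np.sin(rad(slat)) * np.sin(rad(plat)) +
            np.cos(rad(slat)) * np.cos(rad(plat)) * np.cos(rad(plon - slon)))
    p = np.arccos(cosp)
    inc = deg(np.arctan2(2. * np.cos(p), np.sin(p)))   # tan I = 2 cot p
    dec = deg(np.arccos((np.sin(rad(plat)) - np.sin(rad(slat)) * cosp) /
                        (np.cos(rad(slat)) * np.sin(p))))
    if (plon - slon) % 360. > 180.:  # pole to the 'west' of the site: D in (180, 360)
        dec = 360. - dec
    return dec, inc

print vgp_di(68., 191., 33., 243.)   # ~ (335.6, 62.9)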

5.2.106 vgpmap_magic.py

[Essentials Chapter 2], [MagIC Database] and high resolution instructions [vgpmap_magic docs]

Make a plot of the VGPs calculated for the dataset downloaded in the download_magic.py example. Install the high resolution and etopo20 data files using install_etopo.py, then plot the data on an orthographic projection with the viewpoint at 60 N and the Greenwich meridian (longitude = 0). Make the points black dots with a size of 10 pts. Save the file in png format:

$ vgpmap_magic.py -crd g -prj ortho -eye 60 0 -etp -sym ko 10 -fmt png  
 S[a]ve to save plot, Return to quit:  a  
1  saved in  VGP_map.png

You should have a plot like this one:

[figure: orthographic map of VGPs]

To use this program, you need to have basemap installed, which is not part of the standard Python distribution. Also, if you don’t have the high resolution data files installed by install_etopo.py, just leave the -etp switch off to get a plain plot.

5.2.107 watsons_f.py

[Essentials Chapter 11] [watsons_f docs]

First generate two data files with fishrot.py with κ = 15, N = 10 and I = 42, with D = 10 for the first and D = 20 for the second:

$ fishrot.py -k 15 -n 10 -I 42 -D 10 >watsons_f_example_file1.dat  
$ fishrot.py -k 15 -n 10 -I 42 -D 20 > watsons_f_example_file2.dat

To compare these two files using watsons_f.py:

$ watsons_f.py -f watsons_f_example_file1.dat -f2 watsons_f_example_file2.dat  
7.74186876032 3.2594 3.2594

The first number is Watson’s F statistic for these two files (see Essentials Chapter 11) and the second is the number to beat for the two files to be drawn from the same Fisher distribution (share a common mean). In this case the data fail the test (F is greater than the critical value). Your results may vary!
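For the curious, the statistic is computed from the resultant vector lengths R1 and R2 of the two data sets and R of the combined set, and the ‘number to beat’ is just the 95% point of the F distribution with 2 and 2(N − 2) degrees of freedom. A sketch, with scipy supplying the critical value:

import scipy.stats as stats

def watsons_f(R1, R2, R, N):
    # Watson (1956): F = (N-2)(R1 + R2 - R)/(N - R1 - R2)
    F = (N - 2.) * (R1 + R2 - R) / (N - R1 - R2)
    return F, stats.f.ppf(0.95, 2, 2 * (N - 2))   # observed F and critical value

With N = 20 directions in all, stats.f.ppf(0.95, 2, 36) returns the 3.2594 printed above.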

5.2.108 watsons_v.py

[Essentials Chapter 11] [watsons_v docs]

Use the two data files generated in the example for watsons_f.py and repeat the test using Watson’s Vw statistic.

$ watsons_v.py -f watsons_f_example_file1.dat -f2 watsons_f_example_file2.dat  
Doing  5000  simulations  
50  
100  
150  
200  
...  
Watson’s V,  Vcrit:  
         10.5        6.8  
 S[a]ve to save plot, [q]uit without saving:  a  
1  saved in  Watsons_v_watsons_v_example_file1.dat_watsons_f_example_file2.dat.svg  
       

which generates the plot:

[figure: cumulative distribution of Watson's V from the Monte Carlo simulations, with the observed value marked]

The two files are significantly different because Watson’s V (10.5 in this example) is greater than the Vcrit value estimated by Monte Carlo simulation (6.8).

Note that your results may vary in detail because every instance of fishrot.py generates a different randomly drawn dataset.

5.2.109 zeq.py

[Essentials Chapter 9] [zeq docs]

Use the program zeq.py to: a) plot a Zijderveld diagram of the data in zeq_example.dat; b) calculate a best-fit line from 15 to 90 mT; c) rotate the data such that the best-fit line is projected onto the horizontal axis (instead of the default, North); d) calculate a best-fit plane from 5 to 80 mT. Save these plots.

$ zeq.py -f zeq_example.dat -u mT

By selecting ‘b’, you can pick the bounds and choose ’l’ for best-fit line or ‘p’ for best-fit plane. You can rotate the X-Y axes by selecting ‘h’ and setting the X axis to 312. Finally, you can save your plots with the ’a’ option. You should have saved something like these plots:

[figure: Zijderveld, equal area and demagnetization plots from zeq.py]

The equal area projection has the X direction (usually North in geographic coordinates) to the top. The red line is the X axis of the Zijderveld diagram. Solid symbols are lower hemisphere. The solid (open) symbols in the Zijderveld diagram are X,Y (X,Z) pairs. The demagnetization diagram plots the fractional remanence remaining after each step. The green line is the fraction of the total remanence removed between each step.

5.2.110 zeq_magic.py

[Essentials Chapter 9] & [MagIC Database] [zeq_magic docs]

This program is no longer maintained and has been superseded by Demag GUI.

Plot the AF demagnetization data available in the file you got in the download_magic.py example using zeq_magic.py. Use geographic coordinates, where orientations are available.

$ zeq_magic.py -crd g  
sr01a1 0 out of  177  
    looking up previous interpretations...  
Specimen  N    MAD    DANG  start     end      dec     inc  type  component  
sr01a1 9     1.8     1.2   200.0   550.0   331.0    64.4  DE-BFL  A  
 
g: 0      0.0  C  4.065e-05   324.1    66.0  
g: 1    100.0  C  3.943e-05   330.5    64.6  
g: 2    150.0  C  3.908e-05   324.9    65.5  
g: 3    200.0  C  3.867e-05   329.4    64.6  
g: 4    250.0  C  3.797e-05   330.3    64.5  
g: 5    300.0  C  3.627e-05   330.1    64.0  
g: 6    350.0  C  3.398e-05   327.0    64.4  
g: 7    400.0  C  2.876e-05   328.2    64.0  
g: 8    450.0  C  2.148e-05   323.8    65.2  
g: 9    500.0  C  1.704e-05   326.0    63.9  
g: 10    525.0  C  1.200e-05   326.5    63.7  
g: 11    550.0  C  5.619e-06   325.5    64.4  
 
 g/b: indicates  good/bad measurement.  “bad” measurements excluded from calculation  
 
  set s[a]ve plot, [b]ounds for pca and calculate, [p]revious, [s]pecimen,  
   change [h]orizontal projection angle,   change [c]oordinate systems,  
 [d]elete current interpretation(s), [e]dit data,   [q]uit:  
 
<Return>  for  next specimen  
 

The plots will look similar to the zeq.py example, but the default here is for the X-axis to be rotated to the specimen’s NRM direction (note how the X direction is rotated off from the top of the equal area projection).

5.2.111 zeq_magic_redo.py

[Essentials Chapter 9] & [MagIC Database] [zeq_magic_redo docs]

The functionality of this program has been incorporated into Demag GUI so you should use that.

In the zeq_magic directory, use the program zeq_magic_redo.py to create a pmag_specimens formatted file with data in geographic coordinates from the specimen coordinate file zeq_specimens.txt. Assuming that sample orientations are in a file called er_samples.txt, use mk_redo.py first to create a file called zeq_redo. Then use zeq_magic_redo.py to create two pmag_specimen formatted files: one in specimen coordinates (zeq_specimens_s.txt) and one in geographic coordinates (zeq_specimens_g.txt). Combine these into one file called pmag_specimens.txt.

Note that indented lines belong with the line above as a single line.

$ mk_redo.py  
$ zeq_magic_redo.py -F zeq_specimens_s.txt -crd s  
Processing  175  specimens - please wait  
Recalculated specimen data stored in  ./zeq_specimens_s.txt  
 
$ zeq_magic_redo.py -F zeq_specimens_g.txt -crd g  
Processing  175  specimens - please wait  
Recalculated specimen data stored in  ./zeq_specimens_g.txt  
$ combine_magic.py -F pmag_specimens.txt   -f zeq_specimens_s.txt zeq_specimens_g.txt

5.3 Complaints department

The best way to raise issues with any of the software or its documentation is to post to the issues page of the GitHub project: https://github.com/ltauxe/PmagPy/issues. Lisa Tauxe (ltauxe@ucsd.edu) can also be contacted by email with bug reports, suggestions and requests.

Chapter 6
The MagIC database and file formats

A number of the programs in PmagPy were developed to take advantage of the MagIC database and to aid in getting data in and out of it. So, we need some basic understanding of what MagIC is and how it is structured. MagIC is an Oracle database that is part of the EarthRef.org collection of databases and digital reference material. Anyone interested in the MagIC database should first become a registered EarthRef.org user. To do this, go to http://earthref.org and click on the Register link in the top menu. Registration is not required for access to data or browsing around, but it is required for uploading data into the MagIC database, something which we sincerely hope you will have a chance to do. After you register, go to http://earthref.org/MAGIC/search.

You can download data using the MagIC search website. After downloading, the data can be unpacked and examined using various tools in the PmagPy package, for example using Pmag GUI.

Paleomagnetic and rock magnetic data are collected and analyzed in a wide variety of ways with different objectives. Data sets can be extremely large or can be the barest boned data summaries published in legacy data tables. The goal of MagIC has been to have the flexibility to accommodate a whole range of data, from legacy data out of publications or other databases to new studies which include all the measurements, field photos, methodology, and so on. The general procedure for the future will be to archive the data at the same time that they are published. So, to smooth the path, it is advisable to put your data into the MagIC format as early in the process as possible. All data that enter the database must be converted to the standard MagIC format, either as a set of MagIC tables or as one combined text file. These can then be uploaded into the MagIC database.

6.1 Structure of the database tables

The MagIC database is organized around a series of data tables. The complete data model can be found here: http://earthref.org/MAGIC/metadata.htm.

Each MagIC table has a one line header of the form:

tab table_name

“tab” (or “tab delimited”) means that the table is tab delimited. In theory other delimiters are possible, but PmagPy only uses tab delimited formats. The table_name is one of the defined MagIC table names. The tables are of four general types: EarthRef tables (er_), shared in common with other EarthRef databases; MagIC tables (magic_), common to both rock magnetic and paleomagnetic studies; Paleomagnetic tables (pmag_), for data reduction in paleomagnetic studies; and Rock magnetic tables (rmag_), for data reduction in rock magnetic studies. Most studies use only some of these tables. Here are some useful tables for a typical paleomagnetic study (starred tables are required in all cases):



table                Brief description

er_locations*        geographic information about the location(s) of the study
er_sites             locations, lithologic information, etc. for the sampling sites
er_samples           orientation, sampling methods, etc. for samples
er_specimens         specimen weights, volumes
er_ages              age information
er_images            images associated with the study (field shots, sample photos, photomicrographs, SEM images, etc.)
er_mailinglist       contact information for people involved in the study

magic_measurements   measurement data used in the study
magic_instruments    instruments used in the study

pmag_specimens       interpretations of best-fit lines, planes, paleointensity, etc.
pmag_samples         sample averages of specimen data
pmag_sites           site averages of sample data
pmag_results         averages, VGP/V[A]DM calculations, stability tests, etc.
pmag_criteria        criteria used in study for data selection

rmag_susceptibility  experiment for susceptibility parameters
rmag_anisotropy      summary of anisotropy parameters
rmag_hysteresis      summary of hysteresis parameters
rmag_remanence       summary of remanence parameters
rmag_results         summary results and highly derived data products (critical temperatures, etc.)
rmag_criteria        criteria used in study for data selection


The second line of every file contains the column headers (meta-data) describing the included data. For example, an er_sites table might look like this:







tab  er_sites
er_site_name  er_location_name  site_lithology  site_type  site_lat  site_lon
AZ01          Azores            basalt          lava flow  37.80     -25.80
...






Although data can be entered directly into Excel spreadsheets by hand, it is easier to generate the necessary tables as a by-product of ordinary data processing without having to know details of the meta-data and method codes. The section on PmagPy describes how to use the PmagPy software for data analysis and generate the MagIC data tables automatically for the most common paleomagnetic studies involving directions and/or paleointensities. See also Pmag GUI.
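Incidentally, the two-line header structure makes MagIC tables easy to inspect yourself, for example with pandas (introduced in the Introduction to Python Programming chapter); a sketch, using the er_sites.txt file name from the example above:

import pandas as pd

# skip the 'tab  er_sites' line; the real column headers are on line 2
sites = pd.read_csv('er_sites.txt', sep='\t', skiprows=1)
print sites.columns.tolist()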

6.2 A word about method codes

The MagIC database tags records with “method codes”, which are short codes that describe the various methods associated with a particular data record. The complete list is available here: http://earthref.org/MAGIC/methods.htm. Most of the time, you do not need to know what these are (there are over a hundred!), but it is helpful to know something about them. They are divided into several general categories like ‘geochronology methods’ and ‘field sampling methods’. Method codes start with a few letters which designate the category (e.g., GM or FS for geochronology and field sampling respectively). Then there is a second part, and possibly a third, describing the method in greater detail. To give you a flavor for how they work, the current (version 2.4) method codes describing the various lab treatments are listed in this table:




LT-AF-D      Lab Treatment  Alternating field: Double demagnetization with AF along X,Y,Z measurement followed by AF along -X,-Y,-Z measurement
LT-AF-G      Lab Treatment  Alternating field: Triple demagnetization with AF along Y,Z,X measurement followed by AF along Y and AF along Z measurement
LT-AF-I      Lab Treatment  Alternating field: In laboratory field
LT-AF-Z      Lab Treatment  Alternating field: In zero field
LT-CHEM      Lab Treatment  Cleaning of porous rocks by chemical leaching with HCl
LT-FC        Lab Treatment  Specimen cooled with laboratory field on
LT-HT-I      Lab Treatment  High temperature treatment: In laboratory field
LT-HT-Z      Lab Treatment  High temperature treatment: In zero field
LT-IRM       Lab Treatment  IRM imparted to specimen prior to measurement
LT-LT-I      Lab Treatment  Low temperature treatment: In laboratory field
LT-LT-Z      Lab Treatment  Low temperature treatment: In zero field
LT-M-I       Lab Treatment  Using microwave radiation: In laboratory field
LT-M-Z       Lab Treatment  Using microwave radiation: In zero field
LT-NO        Lab Treatment  No treatments applied before measurement
LT-NRM-APAR  Lab Treatment  Specimen heating and cooling: Laboratory field anti-parallel to the NRM vector
LT-NRM-PAR   Lab Treatment  Specimen heating and cooling: Laboratory field parallel to the NRM vector
LT-NRM-PERP  Lab Treatment  Specimen heating and cooling: Laboratory field perpendicular to the NRM vector
LT-PTRM-I    Lab Treatment  pTRM tail check: After zero field step, perform an in field cooling
LT-PTRM-MD   Lab Treatment  pTRM tail check: After in laboratory field step, perform a zero field cooling at same temperature
LT-PTRM-Z    Lab Treatment  pTRM tail check: After in laboratory field step, perform a zero field cooling at a lower temperature
LT-T-I       Lab Treatment  Specimen cooling: In laboratory field
LT-T-Z       Lab Treatment  Specimen cooling: In zero field
LT-VD        Lab Treatment  Viscous demagnetization by applying MU-metal screening
LP-X         Lab Treatment  Susceptibility
LT-ZF-C      Lab Treatment  Zero field cooled, low temperature IRM imparted
LT-ZF-CI     Lab Treatment  Zero field cooled, induced M measured on warming



For uploading to the database, all the individual tables should be assembled into a single file. Each individual data table is separated from the next by a series of ‘>>>>>>>>>>’ symbols, so a typical upload file might look like this:

tab   er_locations  
location_type er_citation_names location_end_lat location_begin_lon er_location_name location_begin_lat location_end_lon  
Drill Site This study 19.0 156.0 801C 19.0 156.0  
Drill Site This study 19 156 801 19 156  
>>>>>>>>>>  
tab   er_samples  
er_citation_names sample_lithology er_site_name er_sample_name sample_type er_location_name sample_lat sample_lon sample_class  
This study   Submarine Basaltic Glass 16r5113 16r5113 Lava Flow 801C 19 156 Igneous: Extrusive  
This study   Submarine Basaltic Glass 17r1026 17r1026 Lava Flow 801C 19 156 Igneous: Extrusive  
.  
.  
.

Correctly formatted MagIC data tables can be assembled into a suitable upload text file by using the program upload_magic.py, which reads in all MagIC tables in a given directory and puts them together as in the above example. You can invoke upload_magic.py on the command line or call it within Pmag GUI. upload_magic.py creates a file called upload.txt (and a DOS-formatted copy, upload_dos.txt) which can be uploaded into the MagIC database.
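If you ever need to pull a combined file apart yourself, the ‘>>>>>>>>>>’ separators make it easy; a minimal sketch:

# split an upload file back into its individual table blocks
blocks = open('upload.txt').read().split('>>>>>>>>>>')
for block in blocks:
    print block.strip().split('\n')[0]   # the 'tab <table_name>' line of each block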

Chapter 7
Introduction to Python Programming

Type ’python’ on your command line.

You have just fired up Python and you are in an interactive mode with the prompt >>>. Now, you can just start typing commands. After each command, press the return key and see what happens:

>>> a=2  
>>> a  
2  
>>> b=2  
>>> c=a+b  
>>> c  
4  
>>> c+=1  
>>> c  
5  
>>> a=2; b=2; c=a+b; c  
4  
>>> d,e,f=4,5,6 # note syntax! d=4;e=5;f=6  
>>> # note also that the symbol ’#’ means the rest of the line is a comment!

To get out of Python interactive mode and back to your beloved command line, type the control key (hereafter ^) and D at the same time. From your school experience with algebra, you will recognize a, b, and c in the above session as variables and + as an operation. If you have previous programming experience, you may have been surprised that we didn’t declare variables up front (C programmers always have to). And the variables above are behaving as integers, not floating point variables (no decimals), even though they are not letters between i and n (as Fortran would require). And there were funny lines that seemed to set three numbers at once:

>>> a=2; b=2; c=a+b; c  
and  
>>> d,e,f=4,5,6 # note syntax! d=4;e=5;f=6

Here are some rules governing variables and operations in Python:

7.1 A first look at NumPy

First of all - how do you pronounce ‘NumPy’? According to Important People at Enthought (e.g., Robert Kern) and the SciPy email list, it should be pronounced “Num” as in “Number” and “Pie” as in, well, pie, or Python. But it is also pretty fun to say Numpee!

So, that out of the way, what can NumPy do for us? Turns out, a whole heck of a lot! For now, we will just scratch the surface. It can, for example, give us the value of π as numpy.pi. Note how the module name comes first, then the name of the function we wish to use. In this case, the function just returns the value of π.

To use NumPy functions, we must first “import” the module with the command import. The first time you do this after installing Python, it may take a while, but after that it should be very quick.

There are four styles of the import command which all do pretty much the same thing, but differ in how you have to call the function after importing:

>>> import numpy  
>>> numpy.pi  
3.1415926535897931

This makes all the functions in NumPy available to you, but you have to call them with the numpy.FUNC syntax.

>>> import numpy as np  # or anything else!  
>>> np.pi  
3.1415926535897931

Importing as np does the same import as import numpy, but allows you to call NumPy anything you want for brevity and convenience. It has become a standard convention in scientific Python, including throughout the NumPy and SciPy source code and documentation, to import numpy as np. To import all the functions from NumPy and not have to type numpy at all:

>>> from numpy  import *  
>>> pi  
3.1415926535897931

This imports all the umpty-ump functions, which is a heavy load on your memory and can lead to namespace collisions (see good housekeeping tip below), but you can also just import a few, like pi or sqrt:

>>> from numpy import pi, sqrt # pi and square root  
>>> pi  
3.1415926535897931  
>>> sqrt(4)  
2.0

Did you notice how sqrt(4), where 4 was an integer, returned a floating point variable (2.0)?

Good housekeeping Tip #1: We tend to import modules using the two options above. That way we know which module the functions we are using come from - especially because we don’t know off-hand ALL the functions available in any given module, and there might be conflicts with our own function names, or two different modules could have the same function (like math and numpy). These conflicts are known as namespace collisions and can be avoided by following the best practice of using syntax like “import numpy as np”.

Here is a (partial) list of some useful NumPy functions:



absolute(x) absolute value
arccos(x) arccosine
arcsin(x) arcsine
arctan(x) arctangent
arctan2(y,x) arctangent of y/x in correct quadrant (***very useful!)
cos(x) cosine
cosh(x) hyperbolic cosine
exp(x) exponential
log(x) natural logarithm
log10(x) base 10 log
sin(x) sine
sinh(x) hyperbolic sine
sqrt(x) square root
tan(x) tangent
tanh(x) hyperbolic tangent


Note that in the trigonometric functions, the argument is in RADIANS! You can convert from degrees to radians by multiplying by: numpy.pi/180.0. Also notice how these functions have parentheses, as opposed to numpy.pi which has none. The difference is that these take arguments, while numpy.pi just returns the value of π.

You can always use your Python interpreter as a calculator:

>>> import numpy as np  
>>> a = 2  
>>> b = -12  
>>> c = 16  
>>> quad1 = (-b + np.sqrt(b**2-4.0*a*c))/(2.0*a)  
>>> quad1  
4.0  
>>> y = np.sin(np.pi/6.0)  
>>> y  
0.5

7.2 Variable types

The time has come to talk about variable types. We’ve been very relaxed up to now, because we don’t have to declare them up front and we can often even change them from one type to another on the fly. But variable types matter, so here goes. Python has integer (both plain and long), floating point, string and complex variable types. It is pretty clever about figuring out what is required. Here are some examples:

>>> number = 1 # an integer  
>>> Number = 1.0 # a floating point  
>>> NUMBER = "1" # a string  
>>> znumber = 1j # a complex number with imaginary part 1 (don't call it 'complex' - that would shadow the built-in used next!)  
>>> complex(3,1) # the built-in complex() creates the complex number 3+1j

Try doing math with these!

>>> number+number  
2     [ an integer]  
>>> number+Number  
2.0   [ a float]  
>>> NUMBER+NUMBER  
11 [a string]  
>>> number+NUMBER  [Gives you an angry error message]

Lesson learned: you can’t add a number and a string. And string addition is different! But you really have to be careful with this. In Python 2, dividing an integer by an integer performs integer division: everything after the decimal gets thrown away when you may really have wanted it! So, if you want a float, use a float.
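For example, in the Python 2 interpreter used throughout this chapter:

>>> 7/2     # two integers: everything after the decimal is thrown away!  
3  
>>> 7/2.    # make one of them a float and all is well  
3.5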

You can convert from one type to another (if appropriate) with:

 int(Number); str(number);  float(NUMBER);  
 long(Number); complex(real,imag)  
 

long() converts to a long (arbitrary precision) integer and complex() converts the two parts to a complex number.

There is another kind of variable called “boolean”. These take the values True and False, which can be combined with the operators and, or, and not. For the record, the integer 1 counts as true and 0 as false. These can be used to control the flow of the program as we shall learn later.

7.3 Data Structures

In previous programming experience, you may have encountered arrays, which are a nice way to group a sequence of numbers that belong together. In Python we also have arrays, but we also have more flexible data structures, like lists, tuples, and dictionaries, that group arbitrary variables together, like strings and integers and floats - whatever you want really. We’ll go through some attributes of the various data structures, starting with lists.

Lists

Examples:

>>> mylist = ["a",2.0,"400","spam",42,[24,2]] # defines a list  
>>> mylist[2] # refers to the third item  
"400"  
>>> mylist[-1] # refers to the last item  
[24,2]  
>>> mylist[1] = 26.3   # replaces the second item  
>>> del mylist[3] # deletes the fourth element

To slice out a chunk of the middle of a list:

>>> newlist = mylist[1:3]

This takes items 2 and 3 out (note it takes out up to, but not including, the last item number - don’t ask me why). Or, we can slice it this way:

>>> newlist = mylist[3:]

which takes from the fourth item (starting from 0!) to the end.

To copy a list, BEWARE! You can make a copy, but it isn’t an independent copy (as in, e.g., Fortran); it is just another name for the SAME OBJECT, so:

>>> mycopy = mylist  
>>> mylist[2] = "new"  
>>> mycopy[2]  
"new"

See how mycopy got changed when we changed mylist? To spawn a new list that is a copy, but an independent entity:

>>> mycopy = mylist[:]

Now try:

>>> mylist[2] = 1003  
>>> mycopy[2]  
"new"

So now mycopy stayed the way it was, even as mylist changed.

Python is “object oriented”, a popular concept in coding circles. We’ll learn more about what that means later, but for right now you can walk around feeling smug that you are learning an object oriented programming language. O.K., what is an object? Well, mylist is an object. Cool. What do objects have that might be handy? Objects have “methods” which allow you to do things to them. Methods have the form: object.method()

Here are two examples:

>>> mylist.append("me too") # appends a string to mylist  
>>> mylist.sort() # sorts the list alphanumerically  
>>> mylist  
[2.0, 42, [24, 2], "400", "a", "me too", "spam"]

For a complete list of methods for lists, see: http://docs.python.org/tutorial/datastructures.html#more-on-lists

More about strings

Numbers are numbers. While there are more kinds of numbers (complex, etc.), strings can be more interesting. Unlike in some languages, they can be denoted with single, double or triple quotes: e.g., ’spam’, ”Sam’s spam”, or

"""  
Hi there we can type as  
many lines as we want  
"""

Strings can be added together (newstring = "spam" + "alot"). They can be sliced (newerstring = newstring[0:3]), but they CANNOT be changed in place - you can’t do this: newstring[0]="b". To find more of the things you can and cannot do to strings, see:

http://docs.python.org/tutorial/introduction.html#strings

Data structures as objects

Tuples

What? Tuples? Tuples consist of a number of values separated by commas. They are denoted with parentheses.

>>> t = 1234, 2.0, "hello"  
>>> t  
(1234, 2.0, "hello")  
>>> t[0]  
1234

Tuples are sort of like lists, but like strings, their elements cannot be changed. However, you can slice, concatenate, etc. For more see:

http://docs.python.org/tutorial/datastructures.html#tuples-and-sequences

Dictionaries!

Dictionaries are denoted by {}. They are also somewhat like lists, but instead of integer indices, they use alphanumeric ‘keys’. Dictionaries are awesome, so here is a bit more about them.

To define one:

>>> Telnos = {"lisa":46084,"lab":46531,"jeff":44707} # defines a dictionary

To return the value associated with a specific key:

>>> Telnos["lisa"]  
46084

To change a key value:

>>> Telnos["lisa"] = 46048

To add a new key value:

>>> Telnos["newguy"] = 48888

Dictionaries also have some methods. One useful one is:

>>> Telnos.keys()  
["lisa","lab","jeff","newguy"]

which returns a list of all the keys.

For a more complete introduction to dictionaries and their use, see:

http://docs.python.org/tutorial/datastructures.html#dictionaries

N-dimensional arrays

Arrays in Python have many similarities to lists. Unlike lists, however, arrays have to be all of the same data type (dtype), usually numbers (integers or floats), although there is something called a character array. Also, the size and shape of an array must be known a priori and not determined on the fly like lists. For example, we can define a list with L=[], then append to it as desired, but not so with arrays - they are much pickier and we’ll see how to set them up later.

Why use arrays when you can use lists? They are far more efficient than lists particularly for things like matrix math. But just to make things a little confusing, there are several different data objects that are loosely called arrays, e.g., arrays, character arrays and matrices. These are all subclasses of ndarray. Below is a brief introduction to arrays and matrices using numpy:

Here are a few ways of making arrays:

>>> import numpy as np  
>>> A = np.array([[1,2,3],[4,2,0],[1,1,2]])  
>>> A  
array([[1, 2, 3],  
       [4, 2, 0],  
       [1, 1, 2]])  
>>> B = np.arange(0,10,1).reshape(2,5)  
>>> B  
array([[0, 1, 2, 3, 4],  
       [5, 6, 7, 8, 9]])  
>>> C = np.array([[1,2,3],[4,5,6]],np.int32)  
>>> C  
array([[1, 2, 3],  
       [4, 5, 6]])  
>>> D = np.zeros((2,3)) # Notice the zeros and the size is specified by a tuple.  
>>> D  
array([[ 0.,  0.,  0.],  
       [ 0.,  0.,  0.]])  
>>> E = np.ones((2,4))  
>>> E  
array([[ 1.,  1.,  1.,  1.],  
       [ 1.,  1.,  1.,  1.]])  
>>> F = np.linspace(0,10,14)  
>>> F  
array([  0.        ,   0.76923077,   1.53846154,   2.30769231,  
         3.07692308,   3.84615385,   4.61538462,   5.38461538,  
         6.15384615,   6.92307692,   7.69230769,   8.46153846,  
         9.23076923,  10.        ])  
>>> G = np.ndarray(shape=(2,2), dtype=float)  
>>> G   # note how this is initialized with really low numbers (but not zeros).  
array([[  1.90979621e-313,   2.75303490e-308],  
       [  1.08539798e-071,   3.05363949e-309]])

Note the difference between np.linspace(start,stop,N) and np.arange(start,stop,step). The function np.linspace creates an array with 14 evenly spaced elements between the start and stop values, while np.arange creates an array with elements at step intervals between the starting and stopping values. In some of the online examples, you will find the short-cuts np.r_[-5:5:20j] (like linspace, with 20 elements) and np.r_[-5:5:1.] (like arange, with a step of 1).

Python arrays have attributes like dtype, ndim, shape and size, and methods like reshape(), ravel() and transpose(). Did you notice how some of these require parentheses and some don’t? The answer is that the ones with parentheses are methods (they do something and can take arguments), while the ones without are attributes (they just hold information about the array). We will get to both later.

Let’s see what the methods can do. First, arrays made in the above example are of different data types. To find out what data type an array is, just use the method dtype as in:

>>> D.dtype  
dtype("float64")

Arrays, unlike lists, have dimensions and shape. Dimensions tell us how many axes there are with axes defined as in this illustration:

[figure: axes of a 2-dimensional array - axis 0 runs down the rows, axis 1 across the columns]

As shown above, our A array has two dimensions (axis 0 and 1). To get Python to tell us this, we use the ndim method:

>>> A = np.array([[1,2,3],[4,2,0],[1,1,2]]) # just to remind you  
>>> A.ndim  
2

Notice how zeros, ones and ndarray used a shape tuple in order to define the arrays in the examples above. The shape of an array is how many elements are along each axis. So, naturally we see that the C array is a 2x3 array. Python returns a tuple with the shape information using the shape method:

>>> C.shape  
(2, 3)

Let’s say we don’t want a 2x3 array for the sequence in the array C, but we want a 3x2 array. Python can reshape an array with a different shape tuple like this:

>>> C.reshape((3,2))  
array([[1, 2],  
       [3, 4],  
       [5, 6]])

And sometimes we just want all the elements lined up along one axis. We could do that with reshape using a tuple with the size of the array (the total number of elements). You can see that this is 6 here. We could even get python to tell us what the size is (C.size) and use that in the reshape size tuple. Alternatively, we can use the ravel() method which doesn’t require us to know the size in advance:

>>> C.ravel()  
array([1, 2, 3, 4, 5, 6])

There are other ways to reshape, slice and dice arrays. The syntax for slicing of arrays is similar to that for lists:

>>> B = A[0:2] # carve the top two rows off of matrix A from above  
>>> B  
array([[1, 2, 3],  
       [4, 2, 0]])

Lots of applications in Earth Science require the transpose of an array:

>>> A.transpose() # this is the same as A.T  
array([[1, 4, 1],  
       [2, 2, 1],  
       [3, 0, 2]])

Also, we can concatenate two arrays together with the - you guessed it - concatenate() method. For a lot more tricks with arrays, go to the NumPy Reference website here: http://docs.scipy.org/doc/numpy/reference/.
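For instance, using the D and E arrays from above (a 2x3 of zeros and a 2x4 of ones), note that np.concatenate() is called as a function with a tuple of arrays:

>>> np.concatenate((D, E), axis=1)   # glue them together along axis 1  
array([[ 0.,  0.,  0.,  1.,  1.,  1.,  1.],  
       [ 0.,  0.,  0.,  1.,  1.,  1.,  1.]])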

To convert the A array to a list: L=A.tolist(), from a list or tuple to an array: A=numpy.array(L), or from a list, a tuple or an array to a NumPy array: a=numpy.asarray(L)

Let’s go a bit deeper into slicing of arrays. First a review of lists. You will recall that in python, indexing starts with 0, so for the list L=[0,2,4,6,8], L[1] is 2. The index of the last item is -1, so L[-1]=8. To find out what the index for the number 4 is, for example, we have the index() method: L.index(4), which will return the number 2. We actually already used this method when we implemented command line arguments, but it wasn’t really explained. We know that to reassign a given index a new value we use the syntax L[1]=2.5. And to use a part of a list (a slice) we use, e.g., B=L[2:4], which defines B as a list with L’s elements 2 and 3 (4 and 6). And you also know that B=L[2:] takes all the elements from 2 to the end. From these examples, you can infer that the basic syntax for slicing is [start:stop:step]; if the step is omitted it is assumed to be 1.
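To make the [start:stop:step] syntax concrete:

>>> L = [0, 2, 4, 6, 8]  
>>> L[1:4:2]   # start at index 1, stop before index 4, step by 2  
[2, 6]  
>>> L[::2]     # every other element of the whole list  
[0, 4, 8]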

Arrays (and matrices) work in a similar fashion to lists, but these are multidimensional objects, so things get hairy fast. The basic syntax is the same, [start:stop:step] (or i:j:k), but with arrays you can give a slice for each axis, separated by commas. This is best shown with examples:

>>> import numpy as np  
>>> A = np.linspace(0,29,30)  
>>> B = A.reshape(5,6)  
array([[  0.,   1.,   2.,   3.,   4.,   5.],  
       [  6.,   7.,   8.,   9.,  10.,  11.],  
       [ 12.,  13.,  14.,  15.,  16.,  17.],  
       [ 18.,  19.,  20.,  21.,  22.,  23.],  
       [ 24.,  25.,  26.,  27.,  28.,  29.]])  
>>> B[1:3,:-1:2]  
array([[  6.,   8.,  10.],  
       [ 12.,  14.,  16.]])

Let’s pick apart the statement B[1:3,:-1:2] to see if we can understand what it does. The first part alone returns lines 2 and 3:

>>> B[1:3]  
array([[  6.,   7.,   8.,   9.,  10.,  11.],  
       [ 12.,  13.,  14.,  15.,  16.,  17.]])

Here j goes from [:-1]; in other words, we take all but the last element along the second axis:

>>> B[1:3,:-1]  
array([[  6.,   7.,   8.,   9.,  10.],  
       [ 12.,  13.,  14.,  15.,  16.]])

And finally, we have the step of 2, which takes every other element:

>>> B[1:3,:-1:2]  
array([[  6.,   8.,  10.],  
       [ 12.,  14.,  16.]])

For more on array slicing (indexing), see:

http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html

7.4 Pandas

Now that you have fallen in love with Python and NumPy, we have a new treat for you: pandas. Pandas is a powerful module that comes with the Python distribution you should have installed in the Installing PmagPy section. It supports elegant data structures and tools for wrangling data which allow fast and user-friendly data analysis. There are two basic data structures: the Series (a one-dimensional array-like object with an associated array of indices, which can be numbers or text) and the DataFrame (a spreadsheet-like data table).

First, invoke pandas by importing it, then create some data Series using several approaches. The common convention, used throughout the pandas documentation, is for pandas to be imported as pd:

>>> import pandas as pd  
>>> obj = pd.Series([1,2,3,4]) # create first pandas object with numeric indices  
>>> obj  
0    1  
1    2  
2    3  
3    4  
dtype: int64  
>>> obj2 = pd.Series([1,2,3,4],index=["a","b","c","d"]) # create second pandas object with text indices  
>>> obj2  
a    1  
b    2  
c    3  
d    4  
dtype: int64  
>>> Telnos = {"lisa":46084,"lab":46531,"jeff":44707} # create a dictionary  
>>> obj3 = pd.Series(Telnos) # create a pandas Series from the dictionary.  
>>> obj3  
jeff    44707  
lab     46531  
lisa    46084  
dtype: int64

Another very useful pandas object is the DataFrame. DataFrames have both row and column indices and are like a dictionary of Series so are much more flexible than a NumPy array object. Here is an example:

>>> data={"telnos":["44707","46531","46084"],  
...        "names":["jeff","lisa","lab"],  
...        "rooms":["300D RH","300E RH","1162 SvH"]}  
>>> frame = pd.DataFrame(data)  
>>> frame  
  names     rooms telnos  
0  jeff   300D RH  44707  
1  lisa   300E RH  46531  
2   lab  1162 SvH  46084  
 

There are many methods associated with pandas objects which allow searching, massaging and generally fiddling with the data. You should check out some of the many tutorials on pandas on the web, e.g., http://pandas.pydata.org/pandas-docs/version/0.18.1/tutorials.html.
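For instance, two of the most common operations are pulling out a single column (which gives you a Series) and selecting rows with a boolean condition; with the frame above you should see something like:

>>> frame['names']  
0    jeff  
1    lisa  
2     lab  
Name: names, dtype: object  
>>> frame[frame.names == 'lisa']  
  names    rooms telnos  
1  lisa  300E RH  46531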

7.5 Python Scripts

Are you tired of typing yet? Python scripts are programs that can be run and re-run from the command line. You can type in the same stuff you’ve been doing interactively into a script file (ending in .py). You can edit scripts with your favorite text editor (NOT Word!). And then you can run them like this:

$ python < myscript.py

On a Mac or Unix system, you can put in a header line identifying the script as python (#!/usr/bin/env python), make it executable (chmod a+x) and run it like this:

$ myscript.py

On PCs, you can just type the name of the script without the header or the chmod command. But, because PCs don’t come standard with a cat command, you will have to create the file with Notepad instead for all the examples that use cat.

Here is an example that creates a script using the Unix cat command, makes it executable and then runs it:

$ cat > printmess.py  
#!/usr/bin/env python  
# simple Python test program (printmess.py)  
print "test message"  
^D  
$ chmod a+x printmess.py  
$ ./printmess.py  
test message

In a Python script on Unix machines (including MacOS), the first line MUST be:

#! /usr/bin/env python

so that the file is interpreted as Python. Unlike Fortran or C, you CANNOT start with a comment line (try switching lines 1 and 2 and see what happens).

The second line is a comment line. Anything to the right of # is assumed to be a comment. Notice that print goes by default to your screen. Because the message is a string, you can use single or double quotes for the test message. You can get an apostrophe in your output by using double quotes and quote marks by using single quotes, i.e.,

#! /usr/bin/env python  
# simple Python test program 2 (printmess2.py)  
print "The pump don’t work ’cuz the vandals took the handles"  
print "She said \"I know what it’s like to be dead\""

produces:

$ ./printmess2.py  
The pump don’t work ’cuz the vandals took the handles  
She said "I know what it’s like to be dead"  
$

In the second print statement, \" is necessary to prevent an error (try it). This is an example of a Python ‘escape code’. These are used to escape some special meaning, as in an end-quote for a string in this example. We use the backslash to say that we really really want a quote mark here. Other escape codes are listed here:

http://www.python-course.eu/variables.php

Here’s another example of a program - this one has a typo in line 4:

#! /usr/bin/env python  
abeg = 2.1  
aend = 3.9  
adif = aend - abge  
print "adif = ", adif

You had intended to type ”abeg” but typed ”abge” instead. When you run the program, you get an error message:

Traceback (most recent call last):  
  File "./undeclared.py", line 4, in <module>  
    adif = aend - abge  
NameError: name ’abge’ is not defined

Error messages are a desirable feature of Python. You don’t want the program to run by assigning some arbitrary value to abge and giving you a wrong answer. Yet many languages will do exactly that.

7.6 Code blocks

Any reasonable programming language must provide a way to group blocks of code together, to be executed under certain conditions. In Fortran, for example, there are if statements and do loops which are bounded by the statements if, endif and do, enddo respectively. Many languages encourage the use of indentation to make the code more readable, but do not require it. In Python, indentation is the way that code blocks are defined - there are no terminating statements. Also, the initiating statement terminates in a colon. The trick is that all code indented the same number of spaces (or tabs) to the right belongs together. The code block terminates when the next line is less indented. A typical Python program looks like this:

program statement  
block 1 top statement:  
    block 1 statement  
    block 1 statement \  
        ha-ha i can break the indentation convention!  
    block 1 statement  
    block 2 top statement:  
        block 2 statement  
        block 2 statement  
        block 3 top statement:  
            block 3 statement  
            block 3 statement  
            block 4 top statement: block 4 single line of code  
        block 2 statement  
        block 2 statement  
    block 1 statement  
    block 1 statement  
program statement

Exceptions to the code indentation rules are explicit line continuations with a backslash (\), as in the example above, and implicit continuations inside parentheses, brackets, or braces.

Good housekeeping Tip #2: Always use only spaces or only tabs in your code indentation. Spaces (4 of them together) are the preferred indentation method in Python. Whatever you do BE CONSISTENT because tabs are not the same as spaces in Python even if you can’t tell the difference just by looking at it.

In the following, you will be shown how Python uses code blocks to create “for” and “while” loops, and “if” statements.

The for loop

Looping through lists

Here is an example of a simple “for loop”:

#!/usr/bin/env python  
mylist=[42,"spam","ocelot"]  
for i in range(0,len(mylist),1): # note the use of range() with start, stop and step  
   print mylist[i]  
print "All done"

This script creates the list mylist with the line mylist=[42,"spam","ocelot"]. The length of mylist is an integer value returned by len(mylist). The script uses this integer as the ‘stop’ value in the range() function, which returns a list of integers from 0 to the stop value MINUS ONE at intervals of one. [The minus one convention can be hard to get used to, but it is typical of Python syntax (and also of C) so just deal with it.] Anyway, range(start,stop,step) is just like numpy.arange(start,stop,step), but returns integers instead of floats. As in numpy.arange(), there is a short hand form when the minimum is zero and the interval is one, so we could (and will) just use the command range(stop).
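You can satisfy yourself about what range() hands back by trying it interactively:

>>> range(0,10,2) # start, stop, step  
[0, 2, 4, 6, 8]  
>>> range(5) # short form: start=0, step=1  
[0, 1, 2, 3, 4]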

Python makes i step through the list of numbers from 0 to 2, printing the ith element of mylist. Note how the print command is indented - this is the program block that is executed for each i. Note also that the print statement could have followed the colon on the same line, because there is only one line in the program block. But never mind, this way works too. When i finishes its business, the program block terminates. At that point, the program prints out the ‘All done’ string. There is no “enddo” statement or equivalent in Python.

But, Python is far more fun than the old-school for i in range() syntax in the above code snippet. In Python we can just step through a list directly. Here is another script which does just that (why not?):

#!/usr/bin/env python  
mylist=[42,"spam","ocelot"]  
for item in mylist: # note absence of range statement  
    print item  
print "All done"

Note that of course we could have used any variable name instead of ‘item’, but it makes sense to use variable names that mean what they do. It is easier to understand what ‘item’ stands for than just the Fortran style of i.

Here is an example with a little more heft to it. It creates a table of trigonometry functions, spitting them out with a formatted print statement:

#! /usr/bin/env python  
import numpy as np  
deg2rad = np.pi/180. # remember conversion to radians  
for theta in range(90): # short form of range, returns [0,1,2...89]  
   ctheta = np.cos(theta*deg2rad) # define ctheta as cosine of theta  
   stheta = np.sin(theta*deg2rad)# define stheta as  sine of theta  
   ttheta = np.tan(theta*deg2rad)   # define ttheta as tangent of theta  
   print "%5.1f %8.4f %8.4f %8.4f" %(theta, ctheta, stheta, ttheta)  
 

Let’s pick this program apart a bit. First, notice the use of the variable deg2rad to convert from degrees to radians. Also notice how deg2rad is defined: deg2rad = np.pi/180. using the NumPy function for π and the decimal point after 180. While in this case, it makes absolutely no difference (try it!), it is a good practice to use floating point numbers unless you want your variable to stay an integer. In fact:

Good housekeeping Tip #3: Always use a decimal if you want your variable to be a floating point variable. In the above example, 180. or 180.0 give the same floating point number.
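The reason this tip matters is that in Python 2 dividing one integer by another throws away the remainder, which can silently ruin a calculation:

>>> 1/2 # integer division: the remainder is discarded  
0  
>>> 1./2 # a single decimal point fixes it  
0.5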

The expression ctheta = np.cos(theta*deg2rad) uses the NumPy cosine function. Ideally, theta should be a floating point number, while in fact it is an integer in this expression, but fortunately Python figures that out and converts it to a floating point number. Note that we could have also converted theta to a float first with the command float(theta). Now look at the formatted print statement:

print "%5.1f %8.4f %8.4f %8.4f" %(theta, ctheta, stheta, ttheta)

To make the output look nice, we do not use

print theta, ctheta, stheta, ttheta

which would space the numbers irregularly among the columns and put out really long numbers. Instead, we explicitly specify the output format. The output format is given in the quotes. The format for each number follows a %; 5.1f means a floating point number five characters wide, with one digit to the right of the decimal point. The single blank space between %5.1f and %8.4f is included in the output; in fact, any text there is reproduced exactly in the output, thus to put commas between the output numbers, write:

print "%5.1f, %8.4f, %8.4f, %8.4f" %(theta, ctheta, stheta, ttheta)

Tabs (\t) would be formatted like this:

print "%5.1f ’\t’ %8.4f’\t’  %8.4f,\t’ %8.4f" %(theta, ctheta, stheta, ttheta)

Looping through arrays

We just learned that for loops with lists just step through item by item. In N-dimensional arrays, they step through row by row (as in slicing). For example, with B a 5 × 6 array of the numbers 0-29:

>>> for r in B:  
...    print r  
...  
[ 0.  1.  2.  3.  4.  5.]  
[  6.   7.   8.   9.  10.  11.]  
[ 12.  13.  14.  15.  16.  17.]  
[ 18.  19.  20.  21.  22.  23.]  
[ 24.  25.  26.  27.  28.  29.]

If you really want to step through element by element, you can use the ravel() method which flattens an N-dimensional array to a single dimension:

>>> for e in B.ravel():  
...    print  e  
...  
0.0  
1.0  
2.0  
3.0  
etc.

For more on looping (or iterating), see:

http://docs.scipy.org/doc/numpy/reference/arrays.nditer.html

If and while blocks

The “for loop” is just one way of controlling flow in Python. There are also if and while code blocks. These execute code blocks the same way as for loops (colon terminated top statements, indented text, etc.). For both of these, the code block is executed if the top statement is TRUE. For the “if” block, the code is executed once, but in a “while” block the code keeps executing as long as the statement remains TRUE.

The key to flow control therefore is in the top statement of each code block; if it is TRUE, then execute, otherwise skip it. To decide if something is TRUE or not (in the boolean sense), we need to evaluate a statement using comparisons. Here’s a handy table with comparisons (relational operators) in different languages:

Comparisons

F77     F90     C     MATLAB   PYTHON   meaning
.eq.    ==      ==    ==       ==       equals
.ne.    /=      !=    ~=       !=       does not equal
.lt.    <       <     <        <        less than
.le.    <=      <=    <=       <=       less than or equal to
.gt.    >       >     >        >        greater than
.ge.    >=      >=    >=       >=       greater than or equal to
.and.   .and.   &&    &        and      and
.or.    .or.    ||    |        or       or
These operators can be combined to make complex tests. Here is a juicy complicated statement:

   if ( (a > b and c <= 0) or d == 0):  
       code block  
   

There are rules for the order of operations for these things (multiplication gets done before addition, for example), but these are easy to forget. You can look the order up in the documentation if you are unsure or, better, just put in enough parentheses to make it completely clear to anyone reading your code.

Good housekeeping Tip #4: Use parentheses liberally - make the order of operation completely unambiguous even if you could get away with fewer.
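A quick interactive illustration of why the parentheses help:

>>> 2 + 3 * 2 # multiplication gets done first  
8  
>>> (2 + 3) * 2 # parentheses make the intended order explicit  
10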

One nice aspect of Python compared to C is that if you make a mistake and type, for example,

      if (a = 0):  
         

you will get a syntax error when Python reads the script. In C this is a valid statement with a completely different meaning than is intended!

Finer points of ‘if’ blocks

The simplest ‘if’ block works just like we have described:

if (2+2)==4: # note the use of '==' and the parentheses in the comparison statement  
   print "I can put two and two together!"

However, as in any other reasonable programming language, there are whistles and bells to the ‘if’ code blocks. In Python these are: elif and else. A code block gets executed if the top if statement is FALSE and the elif statement is TRUE. If both the top if and the elif statements are FALSE but the else statement is TRUE, then Python will execute the block following the else. Consider these examples:

#!/usr/bin/env python  
mylist=["jane","doug","denise"]  
if "susie" in mylist:  
    pass # don't do anything  
if "susie" not in mylist:  
   print "call susie and apologize!"  
   mylist.append("susie")  
elif "george" in mylist: # if the first statement is false, try this one  
    print "susie and george both in list"  
else: # if both statements are false, do this:  
   print "susie in list but george isn't"

While loops

As already mentioned, the ‘while’ block continues executing as long as the while top statement is TRUE. In other words, the if block is only executed once, while the while block keeps looping until the statement turns FALSE. Here is an example:

#!/usr/bin/env python  
a=1  
while a < 10:  
    print a  
    a+=1  
print "I’m done counting!"

Code blocks in interactive Python

All of these program blocks can also be done in an interactive session, using the same indentation. The interactive shell responds with ‘....’ instead of ‘>>>’ once you type a statement it recognizes as a top statement. To signal that you are done with the program block, simply hit return:

>>> a=1  
>>> while a<10:  
....    print a  
....    a+=1  
....[return to execute block]

7.7 File I/O in Python

Python would be no better than a rather awkward graphing calculator (and we haven’t even gotten to the graphing part yet) if we couldn’t read data in and spit data out. You learned a rudimentary way of spitting stuff out already using the print statement, but there is a lot more to file I/O in Python. We would like to be able to read in a variety of file formats and output the data any way we want. In the following we will explore some of the more useful I/O options in Python.

Reading data in

From the keyboard or a file

If you are using Python interactively or want interactivity in a script, use the command: raw_input(). It acts as a prompt and reads in whatever is supplied prior to a return as a string.

X=[] # make a list to put the data in  
ans=float(raw_input("Input numeric value for X:  "))  
X.append(ans) # append the value to X  
print X[-1] # print the last item in the list

In this example, the variable ans will be read in as a string variable, converted to a float and appended to the list X. raw_input() is a simple but rather annoying way to enter things into a program. Another (less annoying) way is to put the data in a file (e.g., myfile.txt) with cat, paste, Excel (saved as a text file), or whatever and read it into Python. The procedure is straightforward: we must first open the file, then read it in and parse the lines into the desired variables.

To open a file we use the command open(), one of Python’s built-in functions. For a complete list of these, see:

http://docs.python.org/library/functions.html

The open() function returns an object, complete with methods, like readlines() which, yes, reads all the lines. Suppose you have a file containing the coordinates of some seismic stations which you want to plot on a map or something. (In fact, it is in the data_files/LearningPython directory.) The file is called station.list and its first five lines are:

   9.02920   38.76560 2442 AAE  
  42.63900   74.49400 1645 AAK  
  37.93040   58.11890  678 ABKT  
  51.88370 -176.68440  116 ADK  
 -13.90930 -171.77730  706 AFI  
 

Here is a script (ReadStations.py) that will open the file station.list, read in the data and print it out line by line.

#!/usr/bin/env python  
f=open('station.list')  
StationNFO=f.readlines()  
for line  in StationNFO:  
    print line

If you run this script, you will get this behavior:

$ ReadStations.py  
   9.02920   38.76560 2442 AAE  
 
  42.63900   74.49400 1645 AAK  
 
  37.93040   58.11890  678 ABKT  
 
  51.88370 -176.68440  116 ADK  
  etc.  
  

The function open() has some bells and whistles to it and has the form open(name[, mode[, buffering]]), where the stuff in square brackets is optional. The ‘name’ argument is the file name to open and ‘mode’ is the way in which it should be opened, most commonly for reading 'r', writing 'w' or appending 'a'. We use the form 'rU' (universal newline mode) for reading, because we often want to read in files that were saved with DOS, Mac OR Unix line endings and 'rU' figures all that out for you. Just in case you are curious, Unix lines end in '\n', old Mac files in '\r' and DOS (and Windows) lines end in '\r\n'. We never use the ‘buffering’ argument and don’t know what it does.
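Here is a quick sketch of the writing and reading modes in action (scratch.txt is just a throw-away file name):

>>> f=open("scratch.txt","w") # open for writing; clobbers any existing file  
>>> f.write("hello world\n") # write one line  
>>> f.close() # flush and close the file  
>>> open("scratch.txt","rU").readlines() # read it back with universal newlines  
['hello world\n']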

If you are curious about the line endings, try typing out the ‘representation’ of the line repr(line) in the above script and you get all the stuff that is normally invisible like the apostrophes and the line terminations:

$ ReadStations.py  
'   9.02920   38.76560 2442 AAE \n'  
'  42.63900   74.49400 1645 AAK \n'  
'  37.93040   58.11890  678 ABKT\n'  
'  51.88370 -176.68440  116 ADK \n'  
' -13.90930 -171.77730  706 AFI \n'  
etc.

Notice how in our first version, printing the line also printed the line feed (\n) as an extra line. To clean this off of each line, we can use the string strip() function:

    print line.strip("\n")

Putting this into the code results in this behavior:

$ ReadStations.py  
   9.02920   38.76560 2442 AAE  
  42.63900   74.49400 1645 AAK  
  37.93040   58.11890  678 ABKT

Let’s say you want to read in the data table into lists called Lats, Lons, and StaIDs (the first three columns). You need to split each line into its columns and append the correct column into the appropriate list. Some languages automatically split on the spaces but Python reads in the entire line as a string and ignores the spaces or other possible delimiters (commas, semi-colons, tabs, etc.). To split the line, we use the string function split([sep]) where [sep] is an optional separator. If no separator is specified (e.g., line.split()), it will split on spaces. Anything could be a separator, but the most common ones are ’,’, ’;’, and ’\t’. The latter is how a tab appears if you were to, say, print out the representation of the line, which shows all the invisibles.

Here is a slightly modified version of ReadStations.py, called ParseStations.py, which parses the lines and puts numbers (floats or integers) into the right lists:

#!/usr/bin/env python  
Lats,Lons,StaIDs,StaName=[],[],[],[] # creates lists to put things in  
StationNFO=open("station.list").readlines() # combines the open and readlines methods!  
for line  in StationNFO:  
    nfo=line.strip("\n").split() # strips off the line ending and splits on spaces  
    Lats.append(float(nfo[0])) # puts float of 1st column into Lats  
    Lons.append(float(nfo[1]))# puts float of 2nd column into Lons  
    StaIDs.append(int(nfo[2])) # puts integer of 3rd column into StaIDs  
    StaName.append(nfo[3])# puts the ID string into StaName  
    print Lats[-1],Lons[-1],StaIDs[-1] # prints out last thing appended

From Standard Input

Python can also read from standard input. To do this, we need a system specific module, called sys which among other things has a stdin method. So, instead of specifying a file name in the open command, we could substitute the following line:

#!/usr/bin/env python  
import sys  
Lats,Lons,StaIDs,StaName=[],[],[],[] # creates lists to put things in  
StationNFO=sys.stdin.readlines() # reads from standard input  
for line  in StationNFO:  
    nfo=line.strip("\n").split() # strips off the line ending and splits on spaces  
    Lats.append(float(nfo[0])) # puts float of 1st column into Lats  
    Lons.append(float(nfo[1]))# puts float of 2nd column into Lons  
    StaIDs.append(int(nfo[2])) # puts integer of 3rd column into StaIDs  
    StaName.append(nfo[3])# puts the ID string into StaName  
    print Lats[-1],Lons[-1],StaIDs[-1] # prints out last thing appended

The program can be invoked with:

$ ReadStations.py < station.list

We could also use command line switches by reading in arguments from the command line. In the following example, we use the switch '-f', with the following argument being the file name:

Command line switches
#!/usr/bin/env python  
import sys  
Lats,Lons,StaIDs,StaName=[],[],[],[] # creates lists to put things in  
if '-f' in sys.argv:  # look in the list of command line arguments  
    file=sys.argv[sys.argv.index('-f')+1] # find the index of '-f' and increment by one  
StationNFO=open(file,’rU’).readlines() # open file  
for line  in StationNFO:  
    nfo=line.strip("\n").split() # strips off the line ending and splits on spaces  
    Lats.append(float(nfo[0])) # puts float of 1st column into Lats  
    Lons.append(float(nfo[1]))# puts float of 2nd column into Lons  
    StaIDs.append(int(nfo[2])) # puts integer of 3rd column into StaIDs  
    StaName.append(nfo[3])# puts the ID string into StaName  
    print Lats[-1],Lons[-1],StaIDs[-1] # prints out last thing appended

This version can be invoked with:

$ ReadStations.py -f  station.list

Reading numeric files

In the special case where the data in a file are entirely numeric, you can read in the file with the special NumPy function loadtxt(). This reads the data into an array in which each row holds the numbers from the corresponding line of the file.
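As a minimal sketch (data.txt is a hypothetical file containing nothing but numbers):

>>> import numpy as np  
>>> data=np.loadtxt("data.txt") # each line of the file becomes a row of data  
>>> print data[0] # the numbers from the first line, as floats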

Writing data out

Let’s say we have a Python module that will convert latitudes and longitudes to UTM coordinates. O.K. We really do have one that we downloaded from here:

http://code.google.com/p/pyproj/issues/attachmentText?id=27&aid=-80884174771817564&name=UTM.py&token=46ab62caa041c3f240ca0e55b7b25ad6

We wrote a script (ConvertStations.py) to convert each of the stations in our list to their UTM equivalents (assuming these were in a WGS-84 ellipsoid). It would be nice if, after having done this to the data, we could then write it out somehow, preferably to a file. Of course we could use the print command like this:

#!/usr/bin/env python  
import UTM # imports the UTM module  
Ellipsoid=23-1 # UTMs code for WGS-84  
StationNFO=open("station.list").readlines()  
for line  in StationNFO:  
    nfo=line.strip("\n").split()  
    lat=float(nfo[0])  
    lon=float(nfo[1])  
    StaName= nfo[3]  
    Zone,Easting, Northing=UTM.LLtoUTM(Ellipsoid,lon,lat)  
    print StaName, ": ", Easting, Northing, Zone  
 

which spits out something like this:

$ ConvertStations.py  
AAE :  474238.170087 998088.469113 37P  
AAK :  458516.115522 4720850.45385 43T  
ABKT :  598330.712671 4198681.92944 40S  
ADK :  521722.179764 5748148.625 1U  
AFI :  416023.683618 8462168.07766 2L  
etc.

We could save the output with a Unix re-direct:

ConvertStations.py > mynewfile  
 

But we yearn for more. So, more elegantly, we can open an output file [for appending 'a' or (over)writing 'w'] and write a formatted string to it using the write method on the output file object:

#!/usr/bin/env python  
import UTM # imports the UTM module  
outfile=open("mynewfile","w") # creates outfile object  
Ellipsoid=23-1 # UTMs code for WGS-84  
StationNFO=open('station.list').readlines()  
for line  in StationNFO:  
    nfo=line.strip("\n").split()  
    lat=float(nfo[0])  
    lon=float(nfo[1])  
    StaName= nfo[3]  
    Zone,Easting, Northing=UTM.LLtoUTM(Ellipsoid,lon,lat)  
    outfile.write('%s  %s  %s  %s\n'%(StaName, Easting, Northing, Zone))

The only significant changes are 1) the object outfile is opened for writing. Note that this will clobber anything in a pre-existing file by that name and 2) the output file gets written to in the statement with a write method on the output file object:

    outfile.write('%s  %s  %s  %s\n'%(StaName, Easting, Northing, Zone))

The write statement uses the syntax: 'format string'%(tuple of variables). Format strings follow the same rules as in the print statements we met earlier, for example:

x,y=4.82,2.3e3  
print '%7.1f,%s\t%10.3e'%(x,'hi there',y)  
   4.8,hi there  2.300e+03

A session using the script (ConvertStations2.py) and a peek at the resulting file could look like this:

$ ConvertStations2.py  
$ head mynewfile  
AAE  474238.170087  998088.469113  37P  
AAK  458516.115522  4720850.45385  43T  
ABKT  598330.712671  4198681.92944  40S  
ADK  521722.179764  5748148.625  1U  
AFI  416023.683618  8462168.07766  2L  
ALE  509467.666259  9161062.29194  20X  
ALQ  366981.843985  3868044.56906  13S  
ANMO  366981.843985  3868044.56906  13S  
ANTO  482347.254856  4413225.7807  36S  
AQU  368638.770654  4690300.1797  33T

The Unix head command types out the first 10 lines of a file but has no DOS equivalent.

7.8 Functions

So far you have learned how to use functions from program modules like NumPy. You can imagine that there are many bits of code that you might write that you will want to use again and again, say converting between degrees and radians and back, or finding the great circle distance between two points on Earth, or converting between UTM and latitude/longitude coordinates (as in UTM.py, our new favorite package). The basic structure of a program with a Python function is:

#!/usr/bin/env python  
 
def FUNCNAME(in_args):  
    """  
    DOC STRING  
    """  
    some code in which the function does something  
    return out_args  
 
FUNCNAME(in_args) # this calls the function

Line by line analysis

def FUNCNAME(in_args):

The first line must have ‘def’ as the first three letters, must have a function name with parentheses and a terminal colon. If you want to pass some variables to the function, they go where in_args sits, separated by commas. Note that no output variables appear on this line; anything handed back comes through the return statement.

There are four different ways to handle argument passing.

1) You could have a function that doesn’t need any arguments at all:

#!/usr/bin/env python  
def gimmepi():  
    """  
    returns pi  
    """  
    return 3.141592653589793  
print gimmepi()

2) You could use a set list of what are called ‘formal’ variables that must be passed:

#!/usr/bin/env python  
def deg2rad(degrees):  
    """  
    converts degrees to radians  
    """  
    return degrees*3.141592653589793/180.  
print "42 degrees in radians is: ",deg2rad(42.)

3) You could have a more flexible need for variables. You signal this by putting *args in the in_args list (along with any formal variables you want):

#!/usr/bin/env python  
def print_args(*args):  
    """  
    prints argument list  
    """  
    print "You sent me these arguments: "  
    for arg in args:  
        print arg  
print_args(1,4,"hi there")  
print_args(42)

4) You can use a keyworded, variable-length list by putting **kwargs in for in_args:

#!/usr/bin/env python  
def print_kwargs(**kwargs):  
    """  
    prints keyworded argument list  
    """  
    for key in kwargs:  
        print "%s  %s" %(key, kwargs[key])  
 
print_kwargs(arg1="one",arg2=42,arg3="ocelot")

Doc String

Although you can certainly write functional code without a document string, make a habit of always including one. Trust me - you’ll be glad you did. The docstring can remind you of what you thought you were doing when you come back to the code years later. It can be used to print out a help message by the calling program and it also lets others know what you intended. Notice the use of the triple quotes before and after the documentation string - that means that you can write as many lines as you want.

Function body

This part of the code must be indented, just like in a for loop, or other block of code.

Return statement

You don’t need this unless you want to pass back information to the calling body (print_kwargs() above, for example, returns nothing). Python separates the entrance (the argument list) and the exit (the return statement). See how it is done in the gimmepi() example above.

Main program as function

It is considered good Python style to treat your main program block as a function too. (This helps with using the document string as a help function and building program documentation in general.) In any case, we recommend that you just start doing it that way. In this case, we have to call the main program with the final (not indented) line main():

#!/usr/bin/env python  
def print_kwargs(**kwargs):  
    """  
    prints keyworded argument list  
    """  
    for key in kwargs:  
        print "%s  %s" %(key, kwargs[key])  
 
def main():  
    """  
    calls function print_kwargs  
    """  
    print_kwargs(arg1="one",arg2=42,arg3="ocelot")  
main()  # runs the main program

Notice how in the above examples, all the functions preceded the main function. This is because Python is interpreted rather than compiled - it won’t know about anything defined further down as it goes through the script line by line. On the other hand, we’ve been running lots of functions and they were not in the program we used to call them. The trick here is that you can put a bunch of functions in a separate file (in your path) and import it as a module, just like we did with NumPy. Your functions can then be called from within your program in the same way as for NumPy.

So let’s say we put all the above functions in a file called myfuncs.py:

def gimmepi():  
    """  
    returns pi  
    """  
    return 3.141592653589793  
 
def deg2rad(degrees):  
    """  
    converts degrees to radians  
    """  
    return degrees*3.141592653589793/180.  
 
def print_args(*args):  
    """  
    prints argument list  
    """  
    print "You sent me these arguments: "  
    for arg in args:  
        print arg

We could then just import the module myfuncs from within another program, or just interactively. We can use the functions, or just call for help:

$ python  
>>> import myfuncs  
>>> print myfuncs.gimmepi()  
3.14159265359  
>>> print myfuncs.print_args.__doc__  
 
    prints argument list  
 
>>>

Scope of variables

Inside a function, variable names have their own meaning, which in many cases will be different from their meaning in the calling program. So, variable names declared inside a function stay in the function. This is true unless you declare them to be “global”. Here is an example in which the main program “knows” about the function’s variable V:

def myfunc():  
    global V  
    V=123  
def main():  
    myfunc()  
    print V  
main()
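For contrast, here is a sketch of what happens without the global declaration: the variable W vanishes when the function returns.

def myfunc():  
    W=123 # W is local to myfunc  
def main():  
    myfunc()  
    print W # fails with NameError: name 'W' is not defined  
main()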

In addition to being able to write your own functions, of course Python has LOTS of modules and a gazillion functions. The Enthought distribution that you are using comes with scientific Python modules that have functions for plotting, numerical recipes, image manipulation, animation, and so much more.

7.9 Classes

Before we go any further, we need to learn some basic concepts about classes. These are the basis of “object oriented programming” (OOP - that again!). Class objects lie behind plotting, for example, and a rudimentary understanding of what they are and how they work will come in handy when we start doing anything but the simplest plotting exercises.

A class object is created by a call to a “class definition”, which can be thought of as a blueprint for the class object. Here is a simple example of a class definition:

class Circle:  
    """  
    This is a simple example of a class  
    """  
    pi=3.141592653589793  
    def __init__(self,r):  
        self.r=r  
    def area(self):  
        return self.pi*self.r**2  
    def circumference(self):  
        return 2.*self.pi*self.r  

Saving this class in a file called Shapes.py we can use it in a Python session in a manner similar to function modules:

>>> import Shapes # import the class  
>>> r=4.0  
>>> C=Shapes.Circle(r) # create a class instance with r=4.  
>>> C.pi  # retrieve an attribute  
3.141592653589793  
>>> C.area() # call a method  
50.26548245743669  
>>> C.r=2.0 # change the value of r  
>>> C.area() # get a new area  
12.566370614359172

In spite of superficial similarities, classes are not the same as functions. Although the Shapes module is imported just the same as any other, to use it, we first have to create a class “instance” (C=Shapes.Circle(r)). C is an object with “attributes” (variables) and “methods”. All methods (the parts that start with “def”) have an argument list. The first argument has to be a reference to the class instance itself, or “self”, followed by any variables you want to pass into the method. So the __init__ method initializes the instance attributes of an object. In the above case, it defines the attribute r, which gets passed in when the class is first called. Asking for any attribute (note the lack of parentheses) retrieves the current value of that attribute. Attributes can be changed (as in C.r=2.0).

The other methods (area and circumference) are defined like any function, except note the use of ‘self’ as the first argument. This is required in all class method definitions. In our case, no other parameters are passed in because the only one used is r, so the argument list consists of only self. Calling these methods returns values calculated from the current attributes.

You can make a subclass (child) of the parent class which has all the attributes and methods of the parent, but may have a few attributes and methods of its own. You do this by putting the parent class name in parentheses in the new class definition, as shown in the sketch below.
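Here is a sketch of a hypothetical Wheel subclass of the Circle class above; it inherits pi, area() and circumference() for free and adds an attribute and a method of its own:

class Wheel(Circle):  
    """  
    a Circle with spokes  
    """  
    def __init__(self,r,spokes):  
        Circle.__init__(self,r) # initialize the Circle part  
        self.spokes=spokes # an attribute Circle doesn't have  
    def spoke_angle(self): # a method Circle doesn't have  
        return 360./self.spokes # degrees between adjacent spokes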

So, the bottom line about classes is that they are in the same category of things as variables, lists, dictionaries, etc. That is, they are ‘data structures’ - they hold data, and the methods to process that data. If you are curious about classes, there’s lots more to know about classes that you can find in useful tutorials online:

(e.g., http://www.sthurlow.com/python/lesson08/)

7.10 Matplotlib

So far you have learned the basics of Python, and NumPy. But Python was sold as a way of visualizing data and we haven’t yet seen a single plot (except a stupid one in the beginning). There are many plotting options within the Python umbrella. The most mature of these is matplotlib, a popular graphics module of Python. Actually matplotlib is a collection of a bunch of other modules, toolkits, methods and classes. For a fairly complete and readable tour of matplotlib, check out these links:

http://matplotlib.sourceforge.net/Matplotlib.pdf

and here:

http://matplotlib.sourceforge.net/

A first plot

Let’s start by reviewing the simple plot script (matplotlib1.py):

#!/usr/bin/env python  
import matplotlib  
matplotlib.use("TkAgg") # my favorite backend  
import pylab # module with matplotlib  
pylab.plot([1,2,3])  # plot some numbers  
pylab.ylabel("Y") # label the y-axis  
pylab.show() # reveal the plot

The first step should be obvious by now: it imports matplotlib. Figures are rendered on “backends” so they appear on screen. There are a lot of different backends with slightly different looks. Some work better on different operating systems. We use the very old school backend called “TkAgg” because it “works”. So step 2 sets the backend: matplotlib.use(“TkAgg”). The module matplotlib itself contains a lot of other modules. One of these, pylab, is the “business end” that has a lot of plotting methods and classes. It must be loaded after the backend is set, so step 3 is: import pylab. After that the fun starts.

In the above example, we call the plot method with a list as an argument. As we mentioned, matplotlib uses the concept of “classes” to make plots and this has just happened behind the scenes. We could have created the figure explicitly with the figure() method (e.g., fig=pylab.figure()) and referred to it later, but we don’t have to in this simple case - the class instance is implied and is the “current plot”. You can tell this, if you do the above example in interactive mode:

>>> import matplotlib  
>>> matplotlib.use("TkAgg")  
>>> import pylab  
>>> pylab.plot([1,2,3])  
[<matplotlib.lines.Line2D object at 0x4bd6eb0>]

The bit about [<matplotlib.lines.Line2D object at 0x4bd6eb0>] is Python’s way of telling you that you just created an object and something about it. In any case, when you give plot() a single sequence of values (as above), it assumes they are y values and supplies the x values for you.

Attributes of the plot, such as the Y axis label, can be changed with the ylabel method. As you can imagine, there are LOTS of methods, including, surprise, an xlabel method.

When we are done customizing the plot instance, we can view it with the show method. When that gets executed, we will get a plot something like this:

[Figure: plot of the list [1,2,3], with the y-axis labeled Y.]

Once that happens, we won’t be able to change the plot any more and in fact, we won’t get our terminal back until the little plot window is closed. You can save your plot with the little disk icon in a variety of formats. Adobe Illustrator likes .svg or .eps, while Microsoft products like .png file formats.

If you find it annoying to always have to close figures with the little red button, or save them with the disk icon, you can tweak the program like this:

#!/usr/bin/env python  
import matplotlib  
matplotlib.use("TkAgg")  
import pylab  
pylab.ion()  # turn on interactivity  
pylab.plot([1,2,3])  
pylab.ylabel("Y")  
pylab.draw() # draw  the current plot  
ans=raw_input('press [s] to save figure, any other key to quit: ')  
if ans=="s":  
    pylab.savefig("myfig.eps")

The figure gets saved by the method pylab.savefig(“FILENAME.FMT”), where .FMT can be one of several formats (e.g., .eps, .svg, .ps, .pdf, .png, .jpg, etc.). Some of these (the vector graphics ones like pdf, ps, eps and svg) can be opened in Adobe Illustrator for modification.

As mentioned earlier, if you give plot() a single sequence of values, it assumes they are y values and supplies the x values for you. But plot() takes an arbitrary number of arguments of the form: (X1, Y1, line_style_1, X2, Y2, line_style_2, etc.), where ‘line_style’ is a string that specifies the line style, as illustrated in this script called matplotlib2.py:

#!/usr/bin/env python  
import matplotlib  
matplotlib.use("TkAgg")  
import pylab  
import numpy as np  
x = np.arange(0,360,10)  
r = x*np.pi/180.  
c = np.cos(r)  
s = np.sin(r)  
pylab.plot(x,c,’r--’,x,s,’g^’)  
pylab.show()

which produces the plot:

[Figure: cosine plotted as a red dashed line and sine as green triangles.]

From the code, you can probably figure out that a line style of 'r--' is a red dashed line, and 'g^' gives green triangles. There are many other attributes that can be controlled: linewidth, dash style, etc., and we invite you to check out the matplotlib documentation.

By now, you should understand enough about classes, keyword argument passing and other pythonalia to be able to figure things out on your own. But don’t panic, I’m going to lead you through a few more examples, which we hope will speed you on your plotting way.

Multiple figures and more customization

As already mentioned, pylab has the concept of “current figure” which subsequent commands refer to. In the preceding examples, we only had one figure, so we didn’t have to name it, but for fancier figures with several plots, we can create named figure objects by invoking a figure instance:

fig = pylab.figure(num=1,figsize=(5,7)).

Notice the syntax: figsize is a keyword argument that sets the width and height (in inches) with a tuple, and num is the figure number. There are many more keyword arguments: consult the list of **kwargs in the online documentation located here:

http://matplotlib.sourceforge.net/api/pyplot_api.html#matplotlib.pyplot.figure

Once we have a figure instance (sometimes called a “container”), we can do all kinds of things, including adding subplots. To do this, we can use the syntax:

 fig.add_subplot(211)  
 

Here the argument 211 means 2 rows, one column and this is the first plot. To make plots side by side, you would use: fig.add_subplot(121) for 1 row, two columns, etc.

After each add_subplot command, that subplot becomes the current figure for plotting on. If you want more freedom - say, you want to make a subplot at an arbitrary place - use the add_axes([left, bottom, width, height]) method, e.g., add_axes([0.1,0.1,0.7,0.3]). The values are 0-1 in relative figure coordinates.

To illustrate these new concepts, consider the example code, matplotlib3.py:

#!/usr/bin/env python  
import matplotlib  
matplotlib.use("TkAgg")  
import pylab  
import numpy as np  
def f(t):  
    return np.exp(-t)*np.cos(2.*np.pi*t)  
t1 = np.arange(0.,5.,0.1)  
t2 = np.arange(0.,5.,0.02)  
fig = pylab.figure(num=1,figsize=(7,5))  
fig.add_subplot(211)  
pylab.plot(t1,f(t1),’bo’)  
pylab.plot(t1,f(t1),’k-’)  
fig.add_subplot(212)  
pylab.plot(t2,np.cos(2*np.pi*t2),’r--’)  
pylab.xlabel(’Time (ms)’)  
fig.add_axes([.6,.75,.25,.10])  
pylab.plot([0,1],[0,1],’r-’,[0,1],[1,0],’r-’)  
pylab.ylabel(’Inset’)  
pylab.show()

which produces:

[Figure: two stacked subplots (a damped oscillation and a cosine) with a small inset axis.]

By now, you should be able to figure out what everything in that script does by yourself!

Adding text

We already met xlabel and ylabel. But text can be added in other ways, e.g., using the title, text, legend and arrow methods. Let’s decorate one of our early examples to show how some of these things work:

#!/usr/bin/env python  
import matplotlib  
matplotlib.use("TkAgg")  
import pylab  
import numpy as np  
x = np.arange(0,360,10)  
r = x*np.pi/180.  
c = np.cos(r)  
s = np.sin(r)  
s2 = np.sin(r)**2  
pylab.plot(x,c,"r--",x,s,"g^",x,s2,"k-")  
pylab.title("Fun with trig")  
pylab.text(250,-.5,"pithy note")  
pylab.legend(["cos(x)","sin(x)","$\sin^2(x)$"],"lower left")  
pylab.xlabel(r"$\theta$")  
pylab.annotate("triangles!", xy=(175,0), xytext=(110,-.25), arrowprops=dict(facecolor="black", shrink=0.05))  
pylab.show()

which produces this plot:

[Figure: the trigonometry plot decorated with a title, a text note, a legend, and an annotation arrow.]

The title appears at the top of the plot. Text labels get placed at the x and y coordinates on the plot and the legend will appear in the upper/lower right/left corner as specified in the string. The pylab.text(x,y,string,kwargs) method also has optional keyword arguments, specifying font, size, color and the like. The legend label list is a list of labels for each plot element: for every line or point style that you want in your legend, append a label to the label list in the order of the relevant plot commands. Also note that the legend and xlabel methods use a special format for strings (r’LaTeX String’) which allows embedded LaTeX equation syntax to make scientific equations look right - so now you have to learn LaTeX! Finally, the arrow gets drawn with the annotate method, which has a lot of other attributes as well. Check the matplotlib documentation for details.

There are lots of graphing styles possible with matplotlib, e.g., histograms, pie charts, contour plots, whisker plots, etc. I’m just going to show you a few examples. The best thing to do is to look through the online documentation for a plot that looks like what you need, then modify it. This is ALWAYS a good approach - start with something that works and fiddle with it until it suits your own particular needs.

Chapter 8
Jupyter Notebooks

Data analysis in Python benefits from an increasingly robust ecosystem of packages for scientific computing and plotting. A tool that has gained increasingly widespread use for conducting and presenting data analysis is the Jupyter notebook. The Jupyter notebook builds on the IPython project, which began as a way to bring increased interactivity to data analysis in Python (Pérez & Granger, 2007) and has evolved to include an interactive notebook environment that seeks to enable the full trajectory of scientific computing from initial analysis and visualization onwards to collaboration and through to publication. The project has expanded to enable a reproducible interactive computing environment for many other programming languages (such as R and Julia) and the language-agnostic parts of its architecture have recently been rebranded as Project Jupyter (http://jupyter.org). Jupyter notebooks allow for executable code, results, text and graphical output to coherently coexist in a single document. With these combined components, they are excellent tools both for conducting data analysis and presenting results.

Underlying the PmagPy programs that are accessible through the command line and the GUI interfaces (e.g., Pmag GUI) are two main function libraries: pmag.py and pmagplotlib.py. The functions within these modules can be imported and called upon within a Jupyter notebook running a Python 2.7 kernel. In addition to these functions, Nick Swanson-Hysell created a module called ipmag.py that is in active development and contains functions that replicate and extend functionality that is found within the PmagPy command line programs for use in the notebook environment.

To show some of what is possible in terms of data analysis using PmagPy in a notebook environment, we have created example notebooks that are available for download from this repository: https://github.com/PmagPy/2016_Tauxe-et-al_PmagPy_Notebooks. These notebooks can also be viewed as static webpages here: http://pmagpy.github.io/Example_PmagPy_Notebook.html and http://pmagpy.github.io/Additional_PmagPy_Examples.html.

The main example notebook (Example_PmagPy_Notebook.ipynb) combines data from two different studies for the sake of developing a mean paleomagnetic pole from the upper portion of a sequence of volcanics in the North American Midcontinent Rift (Swanson-Hysell et al., 2014). The two data files used within the notebook can be downloaded from the MagIC database. The digital object identifier (doi) search option allows for the data files to be readily located as https://earthref.org/MagIC/doi/10.1139/e74-113/ and https://earthref.org/MagIC/doi/10.1002/2013GC005180/. Downloading these data files from the database and putting them into folders within a local ‘Project Directory’ allows them to be accessed within the Jupyter notebook. Within the notebook, these data are unpacked into their respective MagIC formatted tab delimited data files. The data are then loaded into dataframes and filtered using several different criteria (stratigraphic height and polarity). Several functions from the ipmag module are used for making equal area projections and calculating statistics. In addition to combining the data sets to calculate the mean pole, the code in the notebook conducts a bootstrap fold test on the data using the approach of Tauxe & Watson (1994) as well as a common mean test. The data recombinations and calculations done in this notebook are examples of portions of the data analysis workflow which are often difficult to document and reproduce. The examples illustrate a small sliver of the potential for the use of notebooks for data manipulation and analysis of paleomagnetic data. Additional functionality available within PmagPy is demonstrated within the additional PmagPy examples notebook (Additional_PmagPy_Examples.ipynb) as small vignettes of example code. Functions related to paleomagnetic and rock magnetic data analysis are shown as examples. The notebook also illustrates some of the interactivity that can be built into the notebook utilizing IPython widgets.

To execute and edit notebook code yourself, you can clone or download this repository: https://github.com/PmagPy/2016_Tauxe-et-al_PmagPy_Notebooks. If you installed the Enthought Canopy or Anaconda Python distribution you will have the IPython and Jupyter packages installed. If you have another Python distribution you will want to make sure that you have IPython and Jupyter (installation instructions can currently be found here: http://ipython.org/install.html and here: http://jupyter.readthedocs.org/en/latest/install.html). To view and edit the notebooks, type ipython notebook or jupyter notebook on the command line. This command will open up a local IPython server in your default web browser within which you can use the directory to navigate to the data_files folder. Click on it to open and edit. You will see something like this:

[Figure: the Jupyter notebook server directory view in a web browser.]

Notebooks are constructed as a series of ‘cells’ which can be text or code. To view the ‘source’ of a text cell, just click on it. To render it, click on the ‘run’ button (sideways triangle on the toolbar). Similarly, to run the code in a code cell, click on the cell and then the ‘run’ button (or use the shift+enter short cut). To execute the entire notebook, click on the ‘Cell’ button and choose ’Run All’.

The easiest way to access PmagPy functionality in the notebook is to follow the regular install instructions using pip. This automatically adds the pmagpy modules to your Python path. You can go straight to looking at the data!

If you didn’t install using pip, you can also do a developer install.

A third way to access PmagPy inside a notebook is to set the path to the location of your downloaded PmagPy folder within the notebook itself. The default location on MacOS is a folder called PmagPy in your home directory (e.g., /Users/YOUR_NAME_HERE/PmagPy) and on Windows it is in the Documents folder (e.g., C:\Users\YOUR_NAME_HERE\Documents\PmagPy\PmagPy). To do this you must import sys and then have a line that reads sys.path.insert(0, '/Users/......') where the path reflects your own path, as sketched below. If PmagPy is in your path you should be able to run the first code block, which imports the PmagPy modules and makes them available to use within the notebook. The notebooks also import several other Python modules useful for scientific computing (numpy), data manipulation (pandas) and plotting (matplotlib).
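For reference, the path-setting lines look something like this (a sketch; substitute your own path):

import sys  
sys.path.insert(0, "/Users/YOUR_NAME_HERE/PmagPy") # tell Python where to find the PmagPy modules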

Now you are ready to look at some data. In the code block under the heading ‘Reading data from MagIC format results files’, data are read in from a file downloaded and unpacked from the MagIC database. The notebook shows how to read in the data into a pandas DataFrame, and plot the directions on an equal area projection:

[Figure: a Jupyter notebook cell that reads MagIC data into a pandas DataFrame and plots the directions on an equal area projection.]
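The gist of such a cell looks something like the sketch below; the file name (sites.txt) and the column names (dir_dec and dir_inc) are assumptions that depend on which MagIC file you unpacked:

import pandas as pd  
import ipmag # the PmagPy notebook functions  
import matplotlib.pyplot as plt  
 
sites = pd.read_csv("sites.txt", sep="\t", skiprows=1) # skip the extra MagIC header line  
ipmag.plot_net(1) # draw an equal area net in figure 1  
ipmag.plot_di(dec=sites.dir_dec, inc=sites.dir_inc) # plot the directions on the net  
plt.show()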

There are several other tricks shown off in the notebook, which should be enough to get you started using ipmag in a Python notebook environment. Conducting data analysis using PmagPy in a notebook allows for the underlying code of statistical tests to be available and for the decisions made in the specific implementation of such tests to be transparently presented.

Although there is much much more to do in Python, this documentation is aimed at getting and using PmagPy, so that’s it for this chapter. Congratulations if you made it to the end!

Chapter 9
Troubleshooting

9.1 Problems with installing Python or PmagPy

  1. Nothing works as advertised!
  2. I’ve installed the proper Python distribution, but PmagPy doesn’t work

  3. I am installing pmagpy into the Anaconda Python distribution
  4. My path is correctly set, but I still get this error message:
     -bash: eqarea.py: command not found

  5. If you are using the pip installation of PmagPy and you have any problems, you should start by uninstalling and reinstalling pmagpy and pmagpy-cli. To do this, use these commands:
     
    pip install --upgrade --force-reinstall pmagpy  
    pip install --upgrade --force-reinstall pmagpy-cli

    To test that they have installed properly, you can run the command:

    pip list

    which will show you all of your pip-installed packages. If you don’t see pmagpy and pmagpy-cli, you’ll need to install them.

  6. If your computer uses multiple languages, or a language other than English, you may get an error message like this:
     
  File "/Applications/Canopy.app/appdata/canopy-1.7.4.3348.macosx-x86_64/Canopy.app/Contents/lib/python2.7/contextlib.py", line 17, in __enter__  
    return self.gen.next()  
  File "/Users/***/Library/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/matplotlib/__init__.py", line 1000, in _open_file_or_url  
    encoding = locale.getdefaultlocale()[1]  
  File "/Applications/Canopy.app/appdata/canopy-1.7.4.3348.macosx-x86_64/Canopy.app/Contents/lib/python2.7/locale.py", line 543, in getdefaultlocale  
    return _parse_localename(localename)  
  File "/Applications/Canopy.app/appdata/canopy-1.7.4.3348.macosx-x86_64/Canopy.app/Contents/lib/python2.7/locale.py", line 475, in _parse_localename  
    raise ValueError, 'unknown locale: %s' % localename  
  ValueError: unknown locale: UTF-8  
      

    You can fix this by opening your .bashrc file and adding these lines:

      export LC_ALL=en_US.UTF-8  
      export LANG=en_US.UTF-8  
      

9.2 Problems updating Canopy

If you are using Canopy, you will need to keep it up to date. Currently, that means the version number should be 1.6.1 or higher. In general, you can update by opening the Canopy application; it will automatically check for updates. However, if you haven’t updated for a long time, Canopy can get “stuck” in an old version where it can no longer update. If that is the case, you will need to fully uninstall and reinstall Canopy, following the directions on the Enthought site. For a full explanation, see https://support.enthought.com/hc/en-us/articles/204469570-Canopy-shows-no-updates-available-reinstalling-from-the-website on the Enthought website. For the full uninstall instructions, see https://support.enthought.com/hc/en-us/articles/204469700-Uninstalling-and-resetting-Canopy.

9.3 Error loading dlls (windows only)

Some of the modules used in PmagPy have dependencies that are not called directly. If you are missing one of those dependencies, the programs may fail in odd ways. In general, it is worthwhile to open Canopy, go into the package manager, and update all packages. More specifically, if you see an error message like this:

ImportError: DLL load failed: The specified procedure could not be found  

you may be missing the MKL package. Open Canopy, go to package manager, and select “Available packages”. Scroll down to MKL, and install it. This should fix the problem! Note that windows users who are having trouble opening one of the GUIs should try this solution whether or not they actually see this error message.

9.4 Problems with a specific program

  1. Thellier auto_interpreter crashes with a Segmentation Fault (on Mac).

9.5 Other problems

Report a problem not listed above: e-mail ltauxe@ucsd.edu. Include the following information: 1) the version of PmagPy that you are using, 2) your operating system, 3) any error messages that you got, 4) the datafile that is giving trouble, if relevant.

Bibliography

    Ben-Yosef, E., Ron, H., Tauxe, L., Agnon, A., Genevey, A., Levy, T., Avner, U., & Najjar, M. (2008). Application of copper slag in geomagnetic archaeointensity research. J. Geophys. Res., 113, doi:10.1029/2007JB005235.

    Besse, J. & Courtillot, V. (2002). Apparent and true polar wander and the geometry of the geomagnetic field over the last 200 myr. J. Geophys. Res, 107, doi:10.1029/2000JB000050.

    Carter-Stiglitz, B., Solheid, P., Egli, R., & Chen, A. (2006). Tiva canyon tuff (ii): Near single domain standard reference material available. The IRM Quarterly, 16(1), 1.

    Cogné, J. (2003). PaleoMac: A Macintosh application for treating paleomagnetic data and making plate reconstructions. Geochem. Geophys. Geosys., 4, 1007, doi:10.1029/2001GC000227.

    Day, R., Fuller, M. D., & Schmidt, V. A. (1977). Hysteresis properties of titanomagnetites: grain size and composition dependence. Phys. Earth Planet. Inter., 13, 260–266.

    Fairchild, L.M., Swanson-Hysell, N.L., & Tikoo, S.M. (2016). A matter of minutes: Breccia dike paleomagnetism provides evidence for rapid crater modification. Geology, 10.1130/G37927.1.

    Gee, J. S., Tauxe, L., & Constable, C. (2008). Amsspin - a labview program for measuring the anisotropy of magnetic susceptibility (ams) with the kappabridge kly-4s. Geochem. Geophys. Geosyst., 9, Q08Y02,doi:10.1029/2008GC001976.

    Gradstein, F., Ogg, J., & Smith, A. (2004). Geologic Time Scale 2004. Cambridge: Cambridge University Press.

    Jelinek, V. (1977). The statistical theory of measuring anisotropy of magnetic susceptibility of rocks and its application. Brno, Geophyzika, (pp. 1–88).

    Korte, M. & Constable, C. (2011). Improving geomagnetic field reconstructions for 0-3 ka. Phys. Earth Planet. Int., 188, 247–259.

    Korte, M., Constable, C., Donadini, F., & Holme, R. (2011). Reconstructing the holocene geomagnetic field. Earth and Planetary Science Letters, 312, 497–505.

    Lawrence, K. P., Tauxe, L., Staudigel, H., Constable, C., Koppers, A., McIntosh, W. C., & Johnson, C. L. (2009). Paleomagnetic field properties near the southern hemisphere tangent cylinder. Geochem. Geophys. Geosyst., 10, Q01005, doi:10.1029/2008GC00207.

    Leonhardt, R., Heunemann, C., & Krása, D. (2004). Analyzing absolute paleointensity determinations: Acceptance criteria and the software ThellierTool4.0. Geochem. Geophys. Geosys., 5, Q12016, doi:10.1029/2004GC000807.

    McFadden, P. L. & McElhinny, M. W. (1990). Classification of the reversal test in palaeomagnetism. Geophys. J. Int., 103, 725–729.

    Paterson, G., Tauxe, L., Biggin, A., Shaar, R., & Jonestrask, L. (2014). On improving the selection of thellier-type paleointensity data. Geochem. Geophys. Geosys., 15.

    Pérez, F. & Granger, B. (2007). IPython: a system for interactive scientific computing. Computing in Science and Engineering, 9, 21–29.

    Sbarbori, E., Tauxe, L., Gogichaishvili, A., Urrutia-Fucugauchi, J., & Bohrson, W. (2009). Paleomagnetic behavior of volcanic rocks from isla socorro, mexico. Earth Planets and Space, (pp. in press).

    Shaar, R., Ben Yosef, E., Ron, H., Tauxe, L., Agnon, A., & Kessel, R. (2011). Geomagnetic field intensity: How high can it get? how fast can it change? constraints from iron-age copper-slag. Earth and Planetary Science Letters, 301, 297–306.

    Shaar, R., Ron, H., Tauxe, L., Kessel, R., Agnon, A., Ben Yosef, E., & Feinberg, J. (2010). Testing the accuracy of absolute intensity estimates of the ancient geomagnetic field using copper slag material. Earth and Planetary Science Letters, 290, 201–213.

    Shaar, R. & Tauxe, L. (2013). Thellier_gui: An integrated tool for analyzing paleointensity data from thellier-type experiments. Geochem. Geophys. Geosys., 14.

    Swanson-Hysell, N. L., Vaughan, A. A., Mustain, M. R., & Asp, K. E. (2014). Confirmation of progressive plate motion during the Midcontinent Rift’s early magmatic stage from the Osler Volcanic Group, Ontario, Canada. Geochem. Geophys. Geosyst., 15, 2039–2047.

    Tauxe, L. (1998). Paleomagnetic Principles and Practice. Dordrecht: Kluwer Academic Publishers.

    Tauxe, L., Banerjee, S. K., Butler, R., & van der Voo, R. (2010). Essentials of Paleomagnetism. Berkeley: University of California Press.

    Tauxe, L., Bertram, H., & Seberino, C. (2002). Physical interpretation of hysteresis loops: Micromagnetic modelling of fine particle magnetite. Geochem., Geophys., Geosyst., 3, DOI 10.1029/ 2001GC000280.

    Tauxe, L., Constable, C., Johnson, C., Miller, W., & Staudigel, H. (2003). Paleomagnetism of the southwestern u.s.a. recorded by 0-5 ma igneous rocks. Geochem., Geophys., Geosyst., (pp. DOI 10.1029/2002GC000343).

    Tauxe, L. & Hartl, P. (1997). 11 million years of oligocene geomagnetic field behaviour. Geophys. J. Int., 128, 217–229.

    Tauxe, L. & Kent, D. V. (2004). A simplified statistical model for the geomagnetic field and the detection of shallow bias in paleomagnetic inclinations: Was the ancient magnetic field dipolar?, volume 145, (pp. 101–116). American Geophysical Union: Washington, D.C.

    Tauxe, L., Kylstra, N., & Constable, C. (1991). Bootstrap statistics for paleomagnetic data. J. Geophys. Res, 96, 11723–11740.

    Tauxe, L., Luskin, C., Selkin, P., Gans, P. B., & Calvert, A. (2004). Paleomagnetic results from the snake river plain: Contribution to the global time averaged field database. Geochem., Geophys., Geosyst., Q08H13, doi:10.1029/2003GC000661.

    Tauxe, L. & Staudigel, H. (2004). Strength of the geomagnetic field in the cretaceous normal superchron: New data from submarine basaltic glass of the troodos ophiolite. Geochem. Geophys. Geosyst., 5(2), Q02H06, doi:10.1029/2003GC000635.

    Tauxe, L., Stickley, C., Sugisaki, S., Bijl, P., Bohaty, S., Brinkhuis, H., Escutia, C., Flores, J.-A., Houben, A., Iwai, M., Jimenez-Espejo, F., McKay, R., Passchier, S., Pross, J., Riesselman, C., Roehl, U., Sangiorgi, F., Welsh, K., Klaus, A., Fehr, J., Bendle, J., Dunbar, R., Gonzalez, S., Hayden, T., Katsuki, K., Olney, M., Pekar, S., Shrivastava, P., van de Flierdt, T., Williams, T., & Yamane, M. (2012). Chronostratigraphic framework for the iodp expedition 318 cores from the wilkes land margin: constraints for paleoceanographic reconstruction. Paleoceanography, 27, doi:10.1029/2012PA002308.

    Tauxe, L. & Watson, G. S. (1994). The fold test: an eigen analysis approach. Earth Planet. Sci. Lett., 122, 331–341.

    Torsvik, T., Müller, R., van der Voo, R., Steinberger, B., & Gaina, C. (2008). Global plate motion frames: toward a unified model. Rev. Geophys., 46, RG3004, doi:10.1029/2007RG000227.

    Vandamme, D. (1994). A new method to determine paleosecular variation. Phys. Earth Planet. Int., 85, 131–142.

    Watson, G. (1983). Large sample theory of the langevin distributions. J. Stat. Plann. Inference, 8, 245–256.