2 Getting Started - Tools and Best Practices
2.1 Setting Up Your Python Environment
Before writing any code, you need a working Python environment on your machine. This section covers the essentials: installing Python, running code in Jupyter Notebooks, and installing the libraries used throughout this book.
2.1.1 Installing Python
There are two main routes to getting Python installed:
Anaconda (recommended for scientific work)
Anaconda is a Python distribution designed for data science and scientific computing. It comes pre-installed with many of the libraries used in this book, including pandas, numpy, matplotlib, and jupyter. This means you can get started without having to install each library individually.
After downloading and installing Anaconda, you will have access to the Anaconda Navigator, a graphical interface for launching Jupyter Notebooks, managing environments, and installing packages.
For a lighter alternative, Miniconda installs just the core Python interpreter and the conda package manager, letting you add only the libraries you need.
Standard Python from python.org
You can also download Python directly from python.org. This gives you a minimal installation and you will need to install libraries yourself using pip. This is a perfectly valid approach, but for newcomers to scientific Python, Anaconda tends to be the smoother experience.
2.1.2 Jupyter Notebooks
Most of the examples in this book are designed to be run in Jupyter Notebooks. A notebook is an interactive document that combines code, output, and narrative text in a single place. You write code in cells, run each cell individually, and see the results immediately below.
This makes notebooks ideal for data exploration and learning. You can experiment with a line of code, see the result, adjust it, and move on, all without leaving the document.
To launch a notebook:
- If you installed Anaconda, open Anaconda Navigator and click Launch under Jupyter Notebook.
- Alternatively, open a terminal and type `jupyter notebook`.

This will open your web browser with the Jupyter file explorer, from which you can create a new notebook.
Key concepts:
- Code cells contain Python code. Press `Shift + Enter` to run a cell and move to the next one.
- Markdown cells contain formatted text, which is useful for adding notes and explanations alongside your code.
- Kernel is the Python process running behind the scenes. If things behave unexpectedly, try restarting the kernel from the Kernel menu and running your cells again from the top.
2.1.3 Beyond Notebooks: IDEs
As your Python code becomes more complex, you may want to move beyond notebooks into an Integrated Development Environment (IDE). An IDE provides tools like code completion, debugging, file management, and version control integration that make larger projects easier to manage.
Two popular choices for Python are:
- Visual Studio Code (VS Code): free, lightweight, and highly extensible. It also has excellent support for running Jupyter Notebooks directly within the editor.
- PyCharm: a more full-featured Python IDE with a free community edition.
You do not need an IDE to follow this book. Jupyter Notebooks are sufficient for all the examples. However, if you find yourself building larger scripts or reusable functions, an IDE is worth exploring.
2.1.4 Installing Packages
Python’s strength lies in its ecosystem of libraries. When you need a library that is not already installed, you can add it using pip (Python’s built-in package manager) or conda (if you are using Anaconda).
```
pip install lasio
```

or

```
conda install -c conda-forge lasio
```

The core libraries used throughout this book are:
| Library | Purpose | Install |
|---|---|---|
| `pandas` | Tabular data manipulation | Included with Anaconda |
| `numpy` | Numerical computing | Included with Anaconda |
| `matplotlib` | Plotting and visualisation | Included with Anaconda |
| `lasio` | Reading LAS well log files | `pip install lasio` |
| `dlisio` | Reading DLIS well log files | `pip install dlisio` |
| `seaborn` | Statistical visualisation | Included with Anaconda |
If you installed Python via Anaconda, most of these are already available. You will only need to install lasio and dlisio separately.
2.2 Getting Started With Python
This book is not a full Python course, but it helps to understand a few core concepts before we start working with well log data.
The goal of this section is simple: give you enough Python to follow the examples confidently, without trying to reteach the entire language.
2.2.1 Variables and basic data types
A variable is just a named container for a value.
```python
well_name = "15/9-19"
top_depth = 3120.0
has_density_log = True
```

In practice, you will use variables to store curve names, depths, cut-offs, file paths, and intermediate calculations.
2.2.2 Lists and dictionaries
A list stores an ordered collection of values.
A dictionary stores key-value pairs.
```python
curves = ["GR", "RHOB", "NPHI"]
curve_units = {"GR": "API", "RHOB": "g/cc", "NPHI": "v/v"}
```

These are used constantly in this book for selecting groups of curves, storing labels, and mapping lithology or formation metadata.
2.2.3 For loops
A for loop is used when you want to repeat a task for each item in a collection.
```python
curves = ["GR", "RHOB", "NPHI"]
for curve in curves:
    print(f"Processing curve: {curve}")
```

This is one of the most useful patterns in well log work.
Instead of writing similar code three or ten times, you loop through your curves once.
2.2.4 While loops
A while loop repeats as long as a condition is true.
```python
depth = 3000
while depth <= 3010:
    print(depth)
    depth += 0.5
```

while loops are less common than for loops in data analysis, but they are useful when you do not know in advance how many iterations you need.
Always make sure your condition will eventually become false, otherwise you create an infinite loop.
2.2.5 Conditionals (if, elif, else)
Conditionals let your code make decisions.
```python
gr = 115
if gr < 75:
    facies = "clean"
elif gr < 110:
    facies = "mixed"
else:
    facies = "shaley"
```

In this book, conditionals are used for quality control, data screening, and simple rule-based classification.
2.2.6 Functions
A function packages reusable logic into a single block.
```python
def calc_vsh(gr, gr_min=25, gr_max=150):
    return (gr - gr_min) / (gr_max - gr_min)

vsh = calc_vsh(95)
print(vsh)
```

Functions make your code easier to test, reuse, and debug. If you find yourself copying and pasting the same code block, that is usually a sign it should become a function.
2.2.7 F-strings
F-strings (formatted string literals) are used throughout this book for creating labels, titles, and print statements. They let you embed variables directly inside a string by prefixing it with f and placing variables inside curly braces:
```python
well_name = "15/9-19"
top_depth = 3120.0
print(f"Well {well_name} starts at {top_depth} m")
```

This prints: `Well 15/9-19 starts at 3120.0 m`
You can also format numbers inside f-strings. This is particularly useful when displaying calculated values:
```python
porosity = 0.18734
print(f"Porosity: {porosity:.2f}")  # Porosity: 0.19
print(f"Porosity: {porosity:.1%}")  # Porosity: 18.7%
```

2.2.8 List comprehensions
A list comprehension is a concise way to create a new list by transforming or filtering an existing one. Instead of writing a full for loop:
```python
# Standard for loop
clean_curves = []
for curve in ["GR", "RHOB", "NPHI", "BS"]:
    if curve != "BS":
        clean_curves.append(curve)
```

You can write:
```python
# List comprehension
clean_curves = [curve for curve in ["GR", "RHOB", "NPHI", "BS"] if curve != "BS"]
```

Both produce `['GR', 'RHOB', 'NPHI']`. List comprehensions appear in several examples later in this book, so it helps to recognise the pattern even if you prefer the explicit loop.
2.2.9 Imports and libraries
Python has a large ecosystem of libraries. You import what you need at the top of your file or notebook.
```python
import pandas as pd
import matplotlib.pyplot as plt
import lasio
```

Most examples in this book use these libraries:

- pandas for tabular data
- matplotlib for plotting
- lasio for reading LAS files
2.2.10 A quick note on errors
When code fails, Python tells you where and why through a traceback.
Typical early issues are:
- misspelled variable names
- wrong curve names
- incorrect file paths
- missing library installs
The practical habit to build is to read the final line of the traceback first, then work upward to your own code.
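For example, a misspelled curve name raises a `KeyError` whose final traceback line names the missing key. The dictionary below is invented purely for illustration:

```python
curve_units = {"GR": "API", "RHOB": "g/cc"}

try:
    unit = curve_units["GRR"]  # misspelled curve name: should be "GR"
except KeyError as exc:
    # The last line of the full traceback would read: KeyError: 'GRR'
    print(f"KeyError: {exc}")
```

Reading that final line immediately tells you which name Python could not find, before you start hunting through your own code.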
2.2.11 Keep it practical
You do not need to master every Python feature before starting this book. If you can understand variables, lists, loops, conditionals, and functions, you are in a good position to follow along.
2.3 Working with pandas
pandas is the most important library in this book. Nearly every example loads well log data into a pandas DataFrame, a two-dimensional table with labelled columns and an index. If you are familiar with spreadsheets, a DataFrame is the Python equivalent.
2.3.1 Creating a DataFrame
You can create a DataFrame from a dictionary, where the keys become column names:
```python
import pandas as pd

data = {
    'DEPTH': [3120.0, 3120.5, 3121.0, 3121.5, 3122.0],
    'GR': [45.2, 48.1, 67.3, 89.4, 92.1],
    'RHOB': [2.35, 2.38, 2.41, 2.55, 2.58],
}

df = pd.DataFrame(data)
```

In practice, you will rarely build DataFrames by hand. Most of the time you will create them by reading a LAS file with lasio or loading a CSV file with pd.read_csv().
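As a quick sketch of the CSV route, here an inline string stands in for a real file on disk; with lasio, `lasio.read(path).df()` similarly returns a DataFrame indexed by depth:

```python
import io
import pandas as pd

# Inline CSV standing in for a real well log export on disk
csv_text = """DEPTH,GR,RHOB
3120.0,45.2,2.35
3120.5,48.1,2.38
"""

df = pd.read_csv(io.StringIO(csv_text))
print(df.shape)  # (2, 3)
```

With a real file, you would simply pass the path: `pd.read_csv("well_logs.csv")` (the filename here is hypothetical).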
2.3.2 Inspecting data
Once you have a DataFrame, there are several methods for quickly understanding what you are working with:
```python
df.head()      # First 5 rows
df.describe()  # Summary statistics (mean, min, max, etc.)
df.info()      # Data types and non-null counts
df.columns     # List of column names
df.shape       # Number of rows and columns
```

These are the first things you should reach for when loading a new dataset. They tell you how many rows and columns you have, what the curves are called, and whether anything is missing.
2.3.3 Selecting columns
To select a single column, use square brackets with the column name. This returns a pandas Series:
```python
gr = df['GR']
```

To select multiple columns, pass a list of names. This returns a new DataFrame:

```python
subset = df[['GR', 'RHOB']]
```

2.3.4 Filtering rows
You can filter rows using a condition. The condition creates a boolean mask (True/False for each row), which is then used to select only the rows where the condition is True:
```python
# Rows where gamma ray exceeds 100 API
high_gr = df[df['GR'] > 100]

# Rows within a specific depth range
zone = df[(df['DEPTH'] >= 3500) & (df['DEPTH'] <= 3600)]
```

For more precise selection, pandas provides .loc[] (select by label) and .iloc[] (select by position):

```python
df.loc[0:4, 'GR']  # Rows 0-4, GR column (by label)
df.iloc[0:5, 1]    # First 5 rows, second column (by position)
```

2.3.5 Handling missing data
Well log data almost always contains gaps. In pandas, missing values are represented as NaN (Not a Number). Several methods help you identify and handle them:
```python
df.isna().sum()        # Count of missing values per column
df.dropna()            # Remove rows with any missing values
df['GR'].fillna(-999)  # Replace NaN with a specific value
```

Understanding where data is missing is a key part of any petrophysical workflow, and we will explore this in detail in later chapters.
2.3.6 Grouping and aggregation
When working with multi-well datasets, you often need to perform operations on a per-well basis. The .groupby() method splits the data by a column and lets you apply functions to each group:
```python
# Average gamma ray per well
df.groupby('WELL')['GR'].mean()
```

The .apply() method lets you run a custom function on each row or group, which is used in later chapters when merging formation data with well logs.
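As a small sketch of .apply() alongside .groupby(), with well names and a 75 API cut-off invented purely for illustration:

```python
import pandas as pd

df = pd.DataFrame({
    "WELL": ["A", "A", "B", "B"],
    "GR":   [45.0, 95.0, 60.0, 120.0],
})

# .apply() runs a custom function on each value of the GR column
df["FACIES"] = df["GR"].apply(lambda gr: "shale" if gr > 75 else "sand")

# Combine with groupby: mean gamma ray per well
print(df.groupby("WELL")["GR"].mean())
```

The lambda here could equally be a named function, which becomes the better choice as the rule grows beyond a single expression.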
2.4 Working with NumPy
NumPy is the numerical computing library that sits underneath pandas. While you will mostly interact with data through pandas DataFrames, several examples in this book use NumPy directly for calculations and array operations.
2.4.1 NumPy arrays
A NumPy array is a grid of values, all of the same type. You can create one from a list:
```python
import numpy as np

depths = np.array([3120.0, 3120.5, 3121.0, 3121.5, 3122.0])
```

Pandas columns are built on top of NumPy arrays, so you can use NumPy functions directly on DataFrame columns.
2.4.2 Common operations
These are the NumPy functions you will encounter most often in this book:
```python
np.mean(df['GR'])                 # Mean value
np.percentile(df['GR'], 95)       # 95th percentile
np.arange(0, 200, 10)             # Evenly spaced values: 0, 10, 20, ..., 190
np.count_nonzero(df['GR'] > 100)  # Count values meeting a condition
```

NumPy also handles the mathematical rescaling used in some of the visualisation examples, such as converting neutron porosity to density units for crossover fills.
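That rescaling can be sketched as a simple linear mapping between track limits. The limits used here (0.45 to -0.15 v/v for neutron, 1.95 to 2.95 g/cc for density) are typical display choices rather than values taken from this book's examples:

```python
import numpy as np

nphi = np.array([0.45, 0.30, 0.15])  # neutron porosity samples (v/v)

nphi_min, nphi_max = 0.45, -0.15     # neutron track limits (reversed scale)
rhob_min, rhob_max = 1.95, 2.95      # density track limits (g/cc)

# Linearly map NPHI onto the RHOB axis so both curves share one track
scale = (rhob_max - rhob_min) / (nphi_max - nphi_min)
nphi_on_rhob = rhob_min + (nphi - nphi_min) * scale

print(nphi_on_rhob)
```

Because the neutron scale runs in the opposite direction to density, the scale factor is negative, which is what makes the two curves cross over in the expected way.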
You do not need to be a NumPy expert to follow this book. Knowing that it provides fast numerical operations and recognising the np. prefix in code examples is sufficient.
2.5 Learning Python
This book is not aimed at teaching you the fundamentals of Python from scratch. However, if you are new to the language or want to strengthen your foundations, there are several excellent resources available. These cover everything from the basics of the language through to more advanced concepts such as Object Oriented Programming (OOP) and beyond.
2.5.1 Books
- Learning Python by Mark Lutz is a comprehensive reference for the Python language. It is detailed and thorough, making it a good book to have on your shelf.
- Automate the Boring Stuff with Python by Al Sweigart is freely available online and takes a practical, project-based approach to learning Python. It is an excellent starting point for complete beginners.
- Python Crash Course by Eric Matthes covers the fundamentals quickly and includes several hands-on projects.
2.5.2 Online courses and tutorials
There are numerous online platforms offering Python courses. Rather than listing specific courses that may change or go behind paywalls, here are platforms I have found consistently useful:
- Real Python provides well-written tutorials on a wide range of Python topics, from the basics through to advanced data science.
- The official Python tutorial is thorough and always up to date.
- Udemy, Coursera, and EdX all host beginner and intermediate Python courses, many of which are free or low cost.
One piece of advice that I have learned over the years: take one good beginner course and then start working on your own projects. It is easy to fall into “tutorial hell” where you are constantly watching videos and running practice scripts but never building anything real. The best way to learn is by applying what you have learned to problems you actually care about.
2.5.3 Geoscience-specific resources
- Software Underground is a community of geoscientists who code. Their annual Transform conferences include free tutorials on Python for geoscience, many of which are available on YouTube.
- My own YouTube channel covers applications of Python and machine learning in the geoscience domain, and many of the topics in this book have accompanying videos.
2.5.4 Using AI to help you learn
Tools like ChatGPT and Claude can be useful learning companions. You can ask them to explain a piece of code, generate a practice exercise, or help you debug an error message. This can speed up the learning process significantly, especially when you are stuck on a specific concept.
However, be aware that AI-generated code is not always correct. It is important to understand what the code is doing rather than simply copying and pasting it. Use these tools as a tutor, not as a shortcut.
2.6 Writing Better Python Code
As you start writing more Python, a few simple habits will make your code easier to understand, maintain, and share with others. You do not need to follow these rigidly from day one, but being aware of them early will save you time later.
2.6.1 Use meaningful variable names
When working quickly, it is tempting to use short variable names like x, d, or v1. However, when you come back to your code weeks or months later, these names will not tell you anything.
```python
# Unclear
x = 95
v = (x - 25) / (150 - 25)

# Clear
gr_value = 95
vshale = (gr_value - gr_clean) / (gr_shale - gr_clean)
```

In well log work, using the curve mnemonic or a descriptive name makes a significant difference to readability.
2.6.2 Add comments that explain the why
Comments should explain why something is done, not what the code does. The code itself already tells you the what.
```python
# Bad: multiply by 0.6
result = gri * 0.6

# Good: apply Larionov correction for Tertiary rocks
clay_shale_ratio = 0.6
result = gri * clay_shale_ratio
```

2.6.3 Avoid magic numbers
A magic number is a value that appears in your code without explanation. If the same value appears in multiple places and you later need to change it, you have to find and update every instance. Assigning it to a named variable solves both problems:
```python
# Instead of:
if gr > 150:
    flag = "high"

# Use:
GR_SHALE_CUTOFF = 150
if gr > GR_SHALE_CUTOFF:
    flag = "high"
```

This makes the intent of the code immediately clear, even to someone who has never seen it before.
2.7 Python Output for Geoscience Data
As geoscientists, one thing we either love or hate, depending on your point of view, is tables! We see them everywhere, from geological reports to lab test results. Often, this data arrives as a report or an Excel spreadsheet. But what do we use if we want to work with tables in Python?

You may be familiar with using pandas to create, manipulate and display tabular data, which is great, but sometimes you want to add a bit of style to a table in the console. If so, there are a few simple Python libraries that can display this kind of data directly in the console without needing to touch pandas.

This is great if you are working with lists or dictionaries and don't want to manage several pandas DataFrames.

The output from these libraries is also really handy when you are creating a command line interface (CLI) only app for your colleagues: it keeps the output readable and adds some visual interest.
In this section we will look at three simple libraries that can improve your tabular output to the console:
- pprint
- tabulate
- rich
2.7.1 pprint
pprint is a module that is part of the Python standard library. It formats Python objects, such as dictionaries, lists and tuples, into a layout that you can quickly scan.

In this example, we will take some well and curve metadata from a LAS header and store it in a dictionary.
```python
from pprint import pprint

las_header = {
    "Well": {
        "WELL": "Random-02",
        "FLD": "North Sea",
        "SRVC": "Example Energy Ltd.",
        "STRT": 3120.0,
        "STOP": 3245.0,
        "STEP": 0.5,
    },
    "Curves": [
        {"mnemonic": "GR", "unit": "API", "null": -999.25, "count_nulls": 0},
        {"mnemonic": "RHOB", "unit": "g/cc", "null": -999.25, "count_nulls": 12},
        {"mnemonic": "NPHI", "unit": "v/v", "null": -999.25, "count_nulls": 4},
    ],
    "Params": {"NULL": -999.25, "COMP": "ExampleCo"},
}

print("Raw print():")
print(las_header)
```

If we just use the conventional print function from Python, we end up with output that can be difficult to read and search for the right keys, especially when dealing with nested dictionaries.

However, when we use pprint with the following setup, we can control the width of the output and the indentation, and preserve the order of the dictionary keys by setting sort_dicts=False:

```python
print("\nPretty print():")
pprint(las_header, width=100, indent=2, sort_dicts=False)
```

We get back a much nicer and more readable output.
Using pprint to format the dictionary console output.
All of our dictionary keys and values are nicely formatted, making it easier to read and find values that we are looking for.
To make it even better, we can use cpprint from the third-party prettyprinter library (installed with pip install prettyprinter) to bring some colour to our output.

```python
from prettyprinter import cpprint  # coloured pretty print

well = {
    "well": {"name": "Random-02", "field": "North Sea", "null": -999.25},
    "curves": [
        {"mnemonic": "GR", "unit": "API", "nulls": 0, "depth_range": [3120.0, 3245.0]},
        {"mnemonic": "RHOB", "unit": "g/cc", "nulls": 12, "depth_range": [3120.0, 3245.0]},
        {"mnemonic": "NPHI", "unit": "v/v", "nulls": 4, "depth_range": [3120.0, 3245.0]},
    ],
    "qc": {"despike": True, "gap_fill": "linear"},
}

cpprint(well)  # nice defaults, colour if your terminal supports ANSI
```

When we run the above code, we get back the following output, which improves readability by highlighting different data types in different colours.
pprint dictionary output provides a more readable structure compared to the default Python output.
2.7.2 tabulate
tabulate is another great little library I have used a number of times to create tabular output.
It takes lists or dictionaries and converts them into clean and easy-to-read tables.
For example, if we have some metadata about our well log data, we can put it into a format that is better than the standard print method.
```python
from tabulate import tabulate

curves = [
    {"Mnemonic": "GR", "Unit": "API", "Samples": 251, "Nulls": 0, "p99": 118},
    {"Mnemonic": "RHOB", "Unit": "g/cc", "Samples": 251, "Nulls": 12, "p99": 2.86},
    {"Mnemonic": "NPHI", "Unit": "v/v", "Samples": 251, "Nulls": 4, "p99": 0.39},
]

print(tabulate(curves, headers="keys", tablefmt="github",
               floatfmt=".2f", colalign=("left", "left", "right", "right", "right")))
```

When we run the above code, we get the following output containing a summary of our curves.
Simple example of a table generated using tabulate.
Whilst the data above is entered manually, we could automate the process of summarising LAS files by creating a custom function that we can pass our LAS file to.
```python
import lasio
import numpy as np
from tabulate import tabulate

def summarise_curves(lasfile, tablefmt="github"):
    """
    Summarise curves from a LAS file into a tabular report.

    Parameters
    ----------
    lasfile : str or lasio.LASFile
        Path to a LAS file, or an already-loaded lasio.LASFile object.
    tablefmt : str
        Table style format passed to tabulate (default: 'github').
    """
    las = lasio.read(lasfile) if isinstance(lasfile, str) else lasfile
    null = las.well.NULL.value if "NULL" in las.well else -999.25

    curves = []
    for curve in las.curves:
        data = las[curve.mnemonic]
        samples = len(data)
        nulls = np.count_nonzero(data == null)
        valid = data[data != null]
        p99 = np.percentile(valid, 99) if len(valid) > 0 else np.nan
        curves.append({
            "Mnemonic": curve.mnemonic,
            "Unit": curve.unit,
            "Samples": samples,
            "Nulls": nulls,
            "p99": p99,
        })

    return tabulate(curves,
                    headers="keys",
                    tablefmt=tablefmt,
                    floatfmt=".2f",
                    colalign=("left", "left", "right", "right", "right"))

# Example usage
print(summarise_curves("Random-02.las"))
```

tabulate also comes with a range of built-in table styles, which can make your output table more visually appealing and readable.
With just a small change, you can output the same data as a plain text table, a grid, or even Markdown-ready for dropping straight into a report.
```python
from tabulate import tabulate

formations = [
    {"Formation": "Random Sandstone Fm", "Period": "Early Jurassic", "Lithology": "Sandstone", "Avg Porosity": 0.21},
    {"Formation": "Shaleshire Fm", "Period": "Late Triassic", "Lithology": "Shale", "Avg Porosity": 0.07},
    {"Formation": "Carbonate Ridge Fm", "Period": "Early Cret.", "Lithology": "Limestone", "Avg Porosity": 0.16},
]

for style in ["github", "grid", "fancy_grid"]:
    print(f"\nTable format: {style}\n")
    print(tabulate(formations, headers="keys", tablefmt=style, floatfmt=".2f"))
```

When the above code is run, we get back the following output showing the different styles.
Example of different tabulate table styles.
2.7.3 rich
rich is a great library which allows you to fully customise the terminal output by changing text colour, displaying colour coded syntax, and creating very nice tables.
I previously wrote about this in the following article:
Bring Your Python Terminal to Life With Colour and Clarity
To install the library, run:

```
pip install rich
```

To get started using rich for tables, we can use the following code.
```python
from rich.table import Table
from rich.console import Console
from rich import box

console = Console()

table = Table(title="Curve Inventory - Random-02", box=box.SIMPLE_HEAVY)
table.add_column("Mnemonic", style="cyan", no_wrap=True)
table.add_column("Unit", justify="center")
table.add_column("Samples", justify="right")
table.add_column("Nulls", justify="right")
table.add_column("p99", justify="right")

rows = [
    ("GR", "API", "251", "0", "118"),
    ("RHOB", "g/cc", "251", "12", "2.86"),
    ("NPHI", "v/v", "251", "4", "0.39"),
]

for r in rows:
    table.add_row(*r)

console.print(table)
```

We can see that we first need to create a Console() object, which allows us to have complete control over the formatting of text within the terminal.
Next, we create a Table() object and add columns to it by calling table.add_column().

Once the columns of our table are set up, we can build a list of our data and add the rows iteratively by calling table.add_row().

When we run the code, we get the following output.
Not only does rich allow us to display tables, it can also be handy for displaying tree and hierarchy structures.
For example, we may have a series of well log curves in several wells, and we have built up a dictionary containing that information. We can create a Tree() instance and loop over the contents of the dictionary to add each well and its curves to the tree.
```python
from rich.console import Console
from rich.tree import Tree

console = Console()

project = {
    "Random-01": ["GR", "RHOB", "NPHI", "DT"],
    "Random-02": ["GR", "RHOB", "NPHI", "DT", "PEF"],
    "Random-03": ["GR", "RHOB", "DT"],
}

root = Tree("[bold]Project: West Random Basin[/]")
for well, curves in project.items():
    well_node = root.add(f"[cyan]{well}[/] ([green]{len(curves)} curves[/])")
    for c in curves:
        well_node.add(f"[white]{c}[/]")

console.print(root)
```

We can also get more complex and combine tables and trees to give a nice output for a stratigraphic overview.
```python
from rich.console import Console
from rich.tree import Tree
from rich.table import Table
from rich import box

console = Console()

stratigraphy = {
    "West Random Basin": {
        "Random North Field": {
            "Random Sandstone Fm": {
                "meta": {"period": "Early Jurassic", "age_Ma": "201–174"},
                "members": [
                    {"Member": "Upper SS", "Lithology": "[gold1]Sandstone[/]", "Env": "Shoreface", "Age (Ma)": 176, "N:G": 0.72},
                    {"Member": "Middle SL", "Lithology": "[khaki1]Siltstone[/]", "Env": "Lower shore", "Age (Ma)": 179, "N:G": 0.41},
                    {"Member": "Lower SS", "Lithology": "[gold1]Sandstone[/]", "Env": "Delta front", "Age (Ma)": 182, "N:G": 0.66},
                ],
            },
            "Shaleshire Fm": {
                "meta": {"period": "Late Triassic", "age_Ma": "227–208"},
                "members": [
                    {"Member": "Upper Sh", "Lithology": "[grey62]Shale[/]", "Env": "Offshore", "Age (Ma)": 211, "N:G": 0.05},
                    {"Member": "Lower Sh", "Lithology": "[grey62]Shale[/]", "Env": "Basinal", "Age (Ma)": 219, "N:G": 0.03},
                ],
            },
        },
        "Random South Field": {
            "Carbonate Ridge Fm": {
                "meta": {"period": "Early Cretaceous", "age_Ma": "145–125"},
                "members": [
                    {"Member": "Upper Ls", "Lithology": "[bright_white]Limestone[/]", "Env": "Platform", "Age (Ma)": 131, "N:G": 0.58},
                    {"Member": "Lower Dl", "Lithology": "[aquamarine1]Dolomite[/]", "Env": "Shoal", "Age (Ma)": 138, "N:G": 0.64},
                ],
            }
        }
    }
}

root = Tree("[bold]Stratigraphic Overview[/]")
for basin, fields in stratigraphy.items():
    basin_node = root.add(f"[cyan]{basin}[/]")
    for field, formations in fields.items():
        field_node = basin_node.add(f"[green]{field}[/]")
        for fm_name, fm_data in formations.items():
            period = fm_data["meta"]["period"]
            age_band = fm_data["meta"]["age_Ma"]
            fm_node = field_node.add(f"[bold white]{fm_name}[/] • [magenta]{period}[/] ([italic]{age_band} Ma[/])")

            # Mini table per formation
            tbl = Table(box=box.SIMPLE_HEAVY, show_header=True, header_style="bold", expand=False, padding=(0, 1))
            tbl.add_column("Member")
            tbl.add_column("Lithology")
            tbl.add_column("Env")
            tbl.add_column("Age (Ma)", justify="right")
            tbl.add_column("N:G", justify="right")
            for m in fm_data["members"]:
                tbl.add_row(
                    m["Member"],
                    m["Lithology"],
                    m["Env"],
                    f"{m['Age (Ma)']}",
                    f"{m['N:G']:.2f}",
                )
            fm_node.add(tbl)

console.print(root)
```

When we run the above code, we get back the following output, which is very readable and would be a great addition to a console-based reporting app.