How to work with fundamental data
Fundamental data pertains to the economic drivers that determine the value of securities. The nature of the data depends on the asset class:
- For equities and corporate credit, it includes corporate financials, as well as industry and economy-wide data.
- For government bonds, it includes international macro data and foreign exchange.
- For commodities, it includes asset-specific supply-and-demand determinants, such as weather data for crops.
We will focus on equity fundamentals for the U.S., where data is easier to access. There are more than 13,000 public companies worldwide that generate 2 million pages of annual reports and more than 30,000 hours of earnings calls. In algorithmic trading, fundamental data and features engineered from this data may be used to derive trading signals directly, for example, as value indicators, and are an essential input for predictive models, including ML models.
Financial statement data
The Securities and Exchange Commission (SEC) requires U.S. issuers—that is, listed companies and securities, including mutual funds—to file three quarterly financial statements (Form 10-Q) and one annual report (Form 10-K), in addition to various other regulatory filing requirements.
Since the early 1990s, the SEC has made these filings available through its Electronic Data Gathering, Analysis, and Retrieval (EDGAR) system. They constitute the primary data source for the fundamental analysis of equity and other securities, such as corporate credit, whose value depends on the business prospects and financial health of the issuer.
Automated processing – XBRL
Automated analysis of regulatory filings has become much easier since the SEC introduced XBRL, which is a free, open, and global standard for the electronic representation and exchange of business reports. XBRL is based on XML; it relies on taxonomies that define the meaning of the elements of a report and map to tags that highlight the corresponding information in the electronic version of the report. One such taxonomy represents the U.S. Generally Accepted Accounting Principles (GAAP).
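To make the tagging concept concrete, here is a minimal, purely illustrative sketch of a single XBRL-tagged fact; the fragment, context names, and value are invented for demonstration and do not come from an actual filing:
import xml.etree.ElementTree as ET

# Illustrative fragment (not from a real filing): one us-gaap fact whose tag
# is defined by the GAAP taxonomy; contextRef/unitRef tie the value to a
# reporting period and unit of measure
xbrl_fragment = '''<xbrl xmlns:us-gaap="http://fasb.org/us-gaap/2018-01-31">
  <us-gaap:EarningsPerShareDiluted contextRef="Q2_2018"
      unitRef="usdPerShare" decimals="2">2.73</us-gaap:EarningsPerShareDiluted>
</xbrl>'''

root = ET.fromstring(xbrl_fragment)
for fact in root:
    # The namespace URI identifies the taxonomy that defines the tag
    print(fact.tag, fact.attrib, fact.text)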
The SEC introduced voluntary XBRL filings in 2005 in response to accounting scandals before requiring this format for all filers as of 2009, and it continues to expand the mandatory coverage to other regulatory filings. The SEC maintains a website that lists the current taxonomies that shape the content of different filings and can be used to extract specific items.
The following datasets provide information extracted from EX-101 attachments submitted to the commission in a flattened data format to assist users in consuming data for analysis. The data reflects selected information from the XBRL-tagged financial statements. It currently includes numeric data from the quarterly and annual financial statements, as well as certain additional fields, for example, Standard Industrial Classification (SIC).
There are several avenues to track and access fundamental data reported to the SEC:
- As part of the EDGAR Public Dissemination Service (PDS), electronic feeds of accepted filings are available for a fee.
- The SEC updates the RSS feeds, which list the structured disclosure submissions, every 10 minutes (see the polling sketch after this list).
- There are public index files for the retrieval of all filings through FTP for automated processing.
- The financial statement (and notes) datasets contain parsed XBRL data from all financial statements and the accompanying notes.
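As an illustration of the RSS option, the following sketch polls a feed of recent XBRL submissions and prints the latest entries. The feed URL is an assumption based on the current EDGAR site layout and may change; the SEC also asks automated clients to identify themselves in the User-Agent header:
import requests
import xml.etree.ElementTree as ET

# Assumed location of the EDGAR feed listing recent XBRL submissions
FEED_URL = 'https://www.sec.gov/Archives/edgar/xbrlrss.all.xml'

def latest_submissions(n=5):
    # The SEC requests that automated clients identify themselves
    headers = {'User-Agent': 'your-name your@email.com'}
    response = requests.get(FEED_URL, headers=headers)
    root = ET.fromstring(response.content)
    # Standard RSS 2.0 layout: channel/item elements with title and pubDate
    for item in list(root.iter('item'))[:n]:
        print(item.findtext('pubDate'), item.findtext('title'))

latest_submissions()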
The SEC also publishes log files containing the internet search traffic for EDGAR filings through SEC.gov, albeit with a six-month delay.
Building a fundamental data time series
The scope of the data in the financial statement and notes datasets consists of numeric data extracted from the primary financial statements (balance sheet, income statement, cash flows, changes in equity, and comprehensive income) and footnotes on those statements. The available data is from as early as 2009.
Extracting the financial statements and notes dataset
The following code downloads and extracts all historical filings contained in the financial statement and notes (FSN) datasets for the given range of quarters (refer to edgar_xbrl.ipynb for additional details):
from io import BytesIO
from pathlib import Path
from zipfile import ZipFile
import pandas as pd
import requests

SEC_URL = 'https://www.sec.gov/files/dera/data/financial-statement-and-notes-data-sets/'
data_path = Path('data')  # adjust to your local data directory

first_year, this_year, this_quarter = 2014, 2018, 3
past_years = range(first_year, this_year)
filing_periods = [(y, q) for y in past_years for q in range(1, 5)]
filing_periods.extend([(this_year, q) for q in range(1, this_quarter + 1)])

for yr, qtr in filing_periods:
    filing = f'{yr}q{qtr}_notes.zip'
    path = data_path / f'{yr}_{qtr}' / 'source'
    path.mkdir(parents=True, exist_ok=True)
    response = requests.get(SEC_URL + filing).content
    with ZipFile(BytesIO(response)) as zip_file:
        for file in zip_file.namelist():
            local_file = path / file
            local_file.write_bytes(zip_file.read(file))
The data is fairly large, and to enable faster access than the original text files permit, it is better to convert the text files into a binary, Parquet columnar format (refer to the Efficient data storage with pandas section later in this chapter for a performance comparison of various data-storage options that are compatible with pandas DataFrames):
for f in data_path.glob('**/*.tsv'):
    file_name = f.stem + '.parquet'
    path = Path(f.parents[1]) / 'parquet'
    path.mkdir(exist_ok=True)
    df = pd.read_csv(f, sep='\t', encoding='latin1', low_memory=False)
    df.to_parquet(path / file_name)
For each quarter, the FSN data is organized into eight file sets that contain information about submissions, numbers, taxonomy tags, presentation, and more. Each dataset consists of rows and fields and is provided as a tab-delimited text file:
| File | Dataset | Description |
| --- | --- | --- |
| SUB | Submission | Identifies each XBRL submission by company, form, date, and so on |
| TAG | Tag | Defines and explains each taxonomy tag |
| DIM | Dimension | Adds detail to numeric and plain text data |
| NUM | Numeric | One row for each distinct data point in the filing |
| TXT | Plain text | Contains all non-numeric XBRL fields |
| REN | Rendering | Information for rendering on the SEC website |
| PRE | Presentation | Details of tag and number presentation in the primary statements |
| CAL | Calculation | Shows the arithmetic relationships among tags |
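As a quick sanity check, you can load one quarter's submission file from the Parquet files created earlier and inspect its identifying columns; the path below assumes the directory layout used in the download code:
# Load a single quarter's submissions (the 10-Q covering calendar Q1 2018 was
# filed in May 2018 and therefore sits in the 2018Q2 dataset)
sub = pd.read_parquet(data_path / '2018_2' / 'parquet' / 'sub.parquet')
print(sub.shape)
print(sub[['adsh', 'cik', 'name', 'form', 'period']].head())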
Retrieving all quarterly Apple filings
The submission dataset contains the unique identifiers required to retrieve the filings: the Central Index Key (CIK) and the Accession Number (adsh). The following shows some of the information about Apple's 2018Q1 10-Q filing:
# sub holds one quarter's submission data, loaded as shown above
apple = sub[sub.name == 'APPLE INC'].T.dropna().squeeze()
key_cols = ['name', 'adsh', 'cik', 'sic', 'countryba', 'stprba',
            'cityba', 'zipba', 'bas1', 'form', 'period',
            'fy', 'fp', 'filed']
apple.loc[key_cols]
name                    APPLE INC
adsh         0000320193-18-000070
cik                        320193
sic                          3571
countryba                      US
stprba                         CA
cityba                  CUPERTINO
zipba                       95014
bas1           ONE APPLE PARK WAY
form                         10-Q
period                   20180331
fy                           2018
fp                             Q2
filed                    20180502
Using the CIK, we can identify all of the historical quarterly filings available for Apple in our sample and combine this information to obtain 15 10-Q forms and 4 annual 10-K forms:
aapl_subs = pd.DataFrame()
for sub_file in data_path.glob('**/sub.parquet'):
    sub = pd.read_parquet(sub_file)
    aapl_sub = sub[(sub.cik.astype(int) == int(apple.cik)) &
                   (sub.form.isin(['10-Q', '10-K']))]
    aapl_subs = pd.concat([aapl_subs, aapl_sub])
aapl_subs.form.value_counts()
10-Q    15
10-K     4
With the accession number for each filing, we can now rely on the taxonomies to select the appropriate XBRL tags (listed in the TAG file) from the NUM and TXT files to obtain the numerical or textual/footnote data points of interest.
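For instance, here is a minimal lookup sketch, assuming the tag file keeps the column names from the SEC's FSN documentation (tag, tlabel, doc); verify them against your download. It retrieves the taxonomy definition of the diluted EPS tag we will use below:
# Look up the taxonomy definition of the diluted EPS tag
tag = pd.read_parquet(data_path / '2018_2' / 'parquet' / 'tag.parquet')
eps_tag = tag[tag.tag == 'EarningsPerShareDiluted']
print(eps_tag[['tag', 'tlabel', 'doc']].drop_duplicates())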
First, let's extract all of the numerical data that is available from the 19 Apple filings:
aapl_nums = pd.DataFrame()
for num_file in data_path.glob('**/num.parquet'):
    num = pd.read_parquet(num_file).drop('dimh', axis=1)
    aapl_num = num[num.adsh.isin(aapl_subs.adsh)]
    aapl_nums = pd.concat([aapl_nums, aapl_num])
aapl_nums.ddate = pd.to_datetime(aapl_nums.ddate, format='%Y%m%d')
aapl_nums.shape
(28281, 16)
Building a price/earnings time series
In total, the filings in our sample provide us with over 28,000 numerical values. We can select a useful field, such as diluted earnings per share (EPS), and combine it with market data to calculate the popular price-to-earnings (P/E) valuation ratio.
We do need to take into account, however, that Apple split its stock 7:1 on June 4, 2014. To make pre-split earnings comparable to the price data, which is split-adjusted, we need to scale down the earlier earnings per share values. The following code block shows you how to adjust the earnings data:
field = 'EarningsPerShareDiluted'
stock_split = 7
split_date = pd.to_datetime('20140604')
# Filter by tag; keep only values measuring 1 quarter
eps = aapl_nums[(aapl_nums.tag == field)
                & (aapl_nums.qtrs == 1)].drop('tag', axis=1)
# Keep only the most recent data point from each filing
eps = eps.groupby('adsh').apply(lambda x: x.nlargest(n=1, columns=['ddate']))
# Adjust earnings prior to the stock split downward
eps.loc[eps.ddate < split_date, 'value'] = eps.loc[eps.ddate < split_date,
                                                   'value'].div(stock_split)
eps = eps[['ddate', 'value']].set_index('ddate').squeeze().sort_index()
# Create trailing 12-month EPS from the quarterly data
eps = eps.rolling(4, min_periods=4).sum().dropna()
We can use Quandl to obtain Apple stock price data covering the period of our EPS series:
import pandas_datareader.data as web

symbol = 'AAPL.US'
# A Quandl API key may be required; see the pandas-datareader documentation
aapl_stock = web.DataReader(symbol, 'quandl', start=eps.index.min())
aapl_stock = aapl_stock.resample('D').last()  # ensure dates align with EPS data
Now we have the data to compute the trailing 12-month P/E ratio for the entire period:
pe = aapl_stock.AdjClose.to_frame('price').join(eps.to_frame('eps'))
pe = pe.ffill().dropna()  # carry each quarterly EPS value forward to daily dates
pe['P/E Ratio'] = pe.price.div(pe.eps)
axes = pe.plot(subplots=True, figsize=(16, 8), legend=False, lw=2)
We get the following plot from the preceding code:
Figure 2.11: Trailing P/E ratio from EDGAR filings
Other fundamental data sources
There are numerous other sources for fundamental data. Many are accessible using the pandas_datareader
module that was introduced earlier. Additional data is available from certain organizations directly, such as the IMF, the World Bank, or major national statistical agencies around the world (refer to the references section on GitHub).
pandas-datareader – macro and industry data
The pandas-datareader library facilitates access according to the conventions introduced at the end of the preceding section on market data. It covers APIs for numerous global fundamental macro- and industry-data sources, including the following:
- Kenneth French's data library: Market data on portfolios capturing returns on key risk factors like size, value, and momentum, disaggregated by industry (refer to Chapter 4, Financial Feature Engineering – How to Research Alpha Factors)
- St. Louis FED (FRED): Federal Reserve data on the U.S. economy and financial markets (see the sketch after this list)
- World Bank: Global database on long-term, lower-frequency economic and social development and demographics
- OECD: Similar to the World Bank data for OECD countries
- Enigma: Various datasets, including alternative sources
- Eurostat: EU-focused economic, social, and demographic data
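For instance, FRED access follows the same DataReader pattern used earlier for Quandl; 'GDP' (quarterly U.S. nominal GDP) is just one of thousands of available series IDs:
import pandas_datareader.data as web

# Quarterly U.S. nominal GDP from FRED; any FRED series ID works the same way
gdp = web.DataReader('GDP', 'fred', start='2010')
print(gdp.tail())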