The new API docs can be found here: https://apidocs.mediamath.com/docs/reporting-api-v2
<!-- theme: warning -->
Notice about adama_cookie
Support for adama_cookie will be removed later this year. We ask all of our clients who have not yet migrated to OAuth2 authorization to do so as soon as possible.
The Reports API on MediaMath Platform allows advertisers to access, query and aggregate reporting data. It is also the API that powers all reporting seen on our MediaMath Platform flagship UI.
The Reports API offers a number of reports. Each report offers its own pre-aggregated, time-based metrics on entities that users can query. A query can filter, aggregate, sort, paginate, and format this data.
It is a read-only system, meaning no request against it can alter the data in any way.
It is also a metadata-driven system. The supported reports are described in a human- and machine-readable format. This ensures easy one-off and repeatable programmatic data querying. It also provides a way for clients to adapt to changes in reports programmatically. This gives client-developers the ability to create UIs that provide useful operational feedback in a navigable and understandable way.
The $API_BASE for the latest version of the Reports API is https://api.mediamath.com/reporting/v1/std.
Report data is stored and output in a tabular format. This means that the output consists of rows and columns. Every row has a value for every column.
The structure object can be used to determine the columns a report can output. Its fields are divided among three mappings - time_field, dimensions, and metrics. Each mapping maps a field name to its field definition.
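For example, the structure object for a given report can be retrieved from that report's /meta endpoint. The sketch below uses the brain_feature_summary report documented later on this page; authorization headers are omitted.

```
### retrieve the metadata, including the structure object, for a report
curl -i -X GET \
  https://api.mediamath.com/reporting/v1/std/brain_feature_summary/meta
```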
The time field represents the time component of the report's metrics. There is (currently) only one time field for each report. It can be one of two data types: datetime or interval. Its data type determines the type of report (a datetime-based or interval-based report).
Note: The type of report is not to be confused with the Type attribute. The Type attribute is purely informational.
The time field is distinguished from the dimension fields to make its use in grouping and filtering rows easier to understand. It gets its own set of parameters and language.
datetime-typed Time Fields
Datetime-typed time fields contain a combination of year, month, and day, possibly down to hour, minute, and second. The time_aggregation property indicates the field's finest grain (by_hour, by_day, etc.). The time_rollups property indicates the available grouping options, which can go beyond the time_aggregation of the report.
```
time_rollup=by_week   ### group rows by the week - week starts on Monday
time_rollup=by_month  ### group rows by the month
time_rollup=all       ### produces at most one row of output per combination of dimension fields
                      ### listed in the dimensions parameter
```
The time field for these reports is typically represented in the output by a start_date and an end_date column. The format of these columns will depend on the time window and time_rollup.
YYYY-MM-DD
YYYY-MM-DD hh:mi:ss
The only exception to this rule is detailed in the Special Time Windows section.
The timezone of the start_date and end_date columns will match the timezone property.
interval-typed Time Fields
interval-typed time fields contain a predefined, non-calendar-based aggregation (1 day, 7 days, 30 days, and so on). In this case, the only accepted way to specify a time interval is the time_window parameter. The only value supported for the time_rollup parameter for interval-based reports is all.
The time field for these reports is represented in the output by an interval column. The value of the column depends on the time_window chosen. The following list gives example column values for each time_window; a sample query follows the list.
yesterday - 1
last_7_days - 7
last_30_days - 30
campaign_to_date - CTD
flight_to_date - FTD
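For example, a query against an interval-based report selects one of these predefined windows rather than a date range (a minimal sketch; as noted above, all is the only accepted time_rollup for such reports):

```
### interval-based report: pick a predefined window; time_rollup may only be all
time_window=last_30_days&time_rollup=all
```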
Filtering with the Time Field
The API requires the specification of a time window in order to narrow the data set operated on. A time window must be specified in one of the two possible ways.
start_date and end_date (end_date is optional and defaults to "yesterday").
time_window may be set to one of the formats defined by the time_windows attribute.
```
### operate on data timestamped between Jan 01, 2013 and Feb 01, 2013, inclusive
start_date=2013-01-01&end_date=2013-02-01

### operate on data timestamped between May 01, 2013 and yesterday, inclusive
start_date=2013-05-01

### other usage examples
time_window=last_30_days
time_window=month_to_date
time_window=yesterday
```
start_date and end_date
start_date and end_date may only be used when the report's time field is of the datetime type. They may not be used when the report's time field is of the interval data type. This is because the time_windows are pre-defined intervals that cannot be split.
The start_date and end_date parameters define inclusive boundaries for the data. In order to ease the burden of calculating an inclusive end, the inputs may be specified at various granularities.
month - YYYY-MM
day - YYYY-MM-DD
hour - YYYY-MM-DDThh
minute - YYYY-MM-DDThh:mi
second - YYYY-MM-DDThh:mi:ss
Each granularity matches a substring of the ISO 8601 format.
If the report is at a coarser granularity (see time_aggregation) than the input, the input will be taken to mean the entirety of the time unit.
```
start_date=2016-04-12T01%3A30%3A00&end_date=2016-04-12T02%3A30%3A00
# ie. start_date=2016-04-12T01:30:00&end_date=2016-04-12T02:30:00

# For a report with a time_aggregation of by_hour:
#   2016-04-12T01:00:00 to 2016-04-12T02:59:59
# For a report with a time_aggregation of by_day:
#   2016-04-12T00:00:00 to 2016-04-12T23:59:59
```
time_window
All values listed in the time_windows array are accepted verbatim by the time_window parameter, with the exception of any time window that starts with last_X_. Those windows are interpreted as follows.
The last_X_days time window ends yesterday (inclusive) and starts X days before that.
The last_X_hours time window ends at the previous hour (inclusive) and starts X hours before that.
Future windows of this type may be defined following this nomenclature, but for different units. Rules for the time window may vary slightly.
Special Time Windows
The following time_windows are considered to be special time windows.
campaign_to_date
flight_to_date
For reports with a datetime-typed time_field, the start_date and end_date columns that would normally be present are replaced by the interval column. Additionally, the following validation rules apply when a special time_window is chosen (an example request follows the list).
The results will have an interval column instead of start_date and end_date columns.
The time_rollup parameter must be set to all.
Any mention of the time_field for the report in the order parameter will be rejected.
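Putting these rules together, a request that uses a special time window might carry the following time parameters (a sketch; the dimensions, metrics, and filter parameters are unchanged and omitted here):

```
### special time window: time_rollup must be all, and the report's time field
### must not appear in the order parameter
time_window=campaign_to_date&time_rollup=all
```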
Dimension Fields
Dimension fields describe an entity. The example reports provide dimension fields for campaign and strategy entities. Dimension fields are used to group rows during aggregation, in conjunction with the time field.
Metric Fields
Once the rows have been grouped, the metric fields are calculated based on the values of the group's underlying rows. These calculations are generally sums or averages. These fields are usually of a numeric data type.
Please note that fees (e.g. managed_service_fee, optimization_fee, platform_access_fee, and mm_total_fee), cost (e.g. adserving_cost, adverification_cost, media_cost, and total_ad_cost), and margin data are only available to users who have "edit margin" access.
Data Types
The id type allows any character except whitespace.
The datetime type may be filtered by dates, or datetimes in either of the following ISO 8601 based formats.
date - YYYY-MM-DD
datetime - YYYY-MM-DDThh:mi:ss
Year, month, and day are all 1-based. Hour, minute, and second are all 0-based. Valid hours are 0-23.
The dimension fields of the datetime type will always be output in the aforementioned datetime format. The output columns for the time field will be output in the same format, but without the separating 'T'.
Field Data Type Groupings
This documentation may refer to multiple data types via a group name. The following table details the group names.
The new API docs can be found here: https://apidocs.mediamath.com/docs/reporting-api-v2
The Brain Feature Summary report
Sample usage: GET https://api.mediamath.com/reporting/v1/std/brain_feature_summary/meta?v1
Background
The Brain Feature Summary report provides transparency into how the Brain optimizes. It centralizes the impression features, such as Exchange, Site, Creative Size, or Day of Week, that have the largest predictive impact. A feature is a column of the dataset used as an input to the model, for example device_model, day_of_week, or browser. Descriptions of Current Brain Features are available here. Feature values are not listed in the report. This report gives you transparency into the importance of each Brain Feature at the aggregated level.
The information in this report is provided when a new Brain model is generated, but some days in the reporting period may be missing (which is normal). Depending on the training data, we may not always generate a new model so you may see gaps (not all days will generate the report). The report time rollups are updated daily, typically before 18:00 UTC. The data contained for the date in the report will be for the latest model picked up on that day and loaded into reporting. Retention is on a rolling 30 days, with a date range therefore of up to 30 days. T1 continues to train Brain models just in case the campaign is set live again, maximizing your opportunity to take advantage of the Brain Optimization on spend. Once there is no more training data (after 23 days of no activity) T1 will stop generating new models until spend recommences.
The information included in the Brain Feature Summary report is not relevant in the following circumstances:
Campaigns powered by a Custom Brain
CPA/ROI strategies within a campaign using 3rd-party attribution
CPA strategies within a campaign with post-view attribution discounted below 100%
How to run a Brain Feature Summary report via T1 instead of by API:
1. Navigate to the Reports module
2. Click on the Data Export tab
3. Type the name of the report in the File Name field
4. Select the Brain Feature Summary report from the Report Type drop-down list
5. Select the date range you want your report to cover
6. Select Agency, Advertiser and Campaign from the relevant drop-down lists
7. Select your preferred Dimensions
The Brain Feature Summary report itself
This report exports multiple campaigns and Goal Types at once.
| Column | Example | Description |
| --- | --- | --- |
| Start Date | | Date when these values were being used in bidding. |
| End Date | 8/31/2019 | Date when these values were being used in bidding. |
| Organization ID | 100001 | |
| Agency ID | 10002 | |
| Advertiser ID | 10003 | |
| Campaign ID | 111111 | MediaMath unique ID for campaign. |
| Model Goal | CPA, ROI, CPC, etc. | The Goal Type that the model was trained for. Each Goal Type will have a different Brain model. |
| Feature | isp_id, day_of_week, size, week_part, category_id | The Dimension used as input into the Brain Machine Learning model. For a full list, see here. |
| Position | 0, 1, 2, 3, 4, ..., 99 | For bottom_features, a Position of 0 means it had the lowest "Mean". For top_features, a Position of 0 means it had the highest "Mean". |
| Index | 2.1925095 | The index column gives a comparative indication of the relative importance of features, based on normalized attribution, at an aggregated level. While the values might not make much sense for a global understanding of the model, the index indicates what the model sees as important features when it is trained. The higher the Index number, the higher the importance. |
Note: Only 1 value will appear, not a comma-delimited list (although feature_value may be a pipe-delimited list).
```
curl -i -X GET \
  https://api.mediamath.com/reporting/v1/std/brain_feature_summary/meta
```
Responses
Body (*/*): object
Brain Feature Value report
Request
<!-- theme: danger -->
This API is deprecated and will be removed soon
The new API docs can be found here: https://apidocs.mediamath.com/docs/reporting-api-v2
Brain Feature Value Report
Sample usage: GET https://api.mediamath.com/reporting/v1/std/brain_feature_value/meta?v1
Background
The Brain Feature Value report gives you transparency into the top 100 and bottom 100 feature values (e.g. site A, creative 123, exchange 1) that impact predictions for each optimization Goal Type (for example, ROI, CPA, VCR). This shows how each Brain Feature Value directly influences and informs impression prediction and pricing. Descriptions of Current Brain Features themselves are available here. When T1 generates a Brain, there can be 200K+ predictions for Feature Values. To make these predictions more accessible, T1 selects, per optimization goal, the top 100 and bottom 100 Feature Values (which can vary daily) generated for that day.
The information in this report is provided when a new Brain model is generated, but some days in the reporting period may be missing (which is normal). Depending on the training data, we may not always generate a new model so you may see gaps (not all days will generate the report). The report time rollups are updated daily, typically before 18:00 UTC. The data contained for the date in the report will be for the latest model picked up on that day and loaded into reporting. Retention is on a rolling 30 days, with a date range therefore of up to 30 days. T1 continues to train Brain models just in case the campaign is set live again, maximizing your opportunity to take advantage of the Brain Optimization on spend. Once there is no more training data (after 23 days of no activity) T1 will stop generating new models until spend recommences.
The information included in the Brain Feature Value report is not relevant in the following circumstances:
Campaigns powered by a Custom Brain (for more on Custom Brain see here)
CPA/ROI strategies within a campaign using 3rd-party attribution
CPA strategies within a campaign with post-view attribution discounted below 100%
How to run a Brain Feature Value report via T1 instead of by API:
1. Navigate to the Reports module
2. Click on the Data Export tab
3. Type the name of the report in the File Name field
4. Select the Brain Feature Value report from the Report Type drop-down list
5. Select the date range you want your report to cover
6. Select Agency, Advertiser and Campaign from the relevant drop-down lists
7. Select your preferred Dimensions
The Brain Feature Value report itself
This report exports multiple campaigns and Goal Types at once.
| Column | Example | Description |
| --- | --- | --- |
| Start Date | | Date when these values were being used in bidding. |
| End Date | 8/31/2019 | Date when these values were being used in bidding. |
| Organization ID | 100000 | |
| Agency ID | 10001 | User-specified (Step 6, above). |
| Advertiser ID | 20000 | User-specified (Step 6, above). |
| Campaign ID | 30000 | MediaMath unique ID for campaign. User-specified (Step 6, above). |
| Model Goal | CPA, ROI, CPC, etc. | The Goal Type that the model was trained for. Each Goal Type will have a different Brain model. |
| Feature Report Type | bottom_features or top_features | bottom_features: a feature value that was in the bottom 100 (lowest "Mean"). top_features: a feature value that was in the top 100 (highest "Mean"). |
| Position | 0, 1, 2, 3, 4, ..., 9 | For bottom_features, a Position of 0 means it had the lowest "Mean". For top_features, a Position of 0 means it had the highest "Mean". |
| Feature | isp_id, day_of_week, size | The Feature that the next column's Feature Value belongs to, used as input into the Brain Machine Learning model. For a full Feature list, see here. |
| Feature Value | Verizon, Tuesday, 728x90 | Readable name of the Feature Value. Some Feature Values (including Site ID, Category ID, and Region ID) display in the report as hashed or otherwise manipulated in a way that does not accurately map back to a human-readable name. In Q1 2020 or later, we hope to change the hashing to return these to human-readable names, but until then this work remains in the backlog. Please provide feedback if you would find this an important feature to support sooner. The pipe character '\|', indicating that hashing has been applied, may occur when there is a large number of Feature Values in a given Feature. For example: site_id: "539409862\|130821751\|231186" or isp_id: "Windstream\|Telefonica". |
| Is Numeric | Y, N | A few of the Features (data types used in the prediction) are numerical in nature and will have a different impact on the prediction depending on the actual number (Y = yes, it is numeric; N = no, it is not). One example is "id_vintage", which represents how long we've had an ID for the particular user we're trying to predict the Response Rate for. A longer time means better quality (as they don't clear cookies/cache, etc.) and improves the odds of conversions. The "id_vintage" field simplifies the large time spectrum into 4 specific ranges. |
| Index | 123.855316, -60.73726 | This is based on the Mean. The index column gives a comparative indication of the relative importance of features based on normalized attribution. While the mean values might not make much sense for a global understanding of the model, the index indicates what the model sees as important features when it is trained. The higher the Index number, the higher the importance. |
| Mean | 0.00000648, -0.00000308 | When we train a model, we run a sample of the available data through our proprietary machine learning algorithm. The mean shown in this report is the Mean Attribution of the feature across samples of attribution. While technical, it may interest Data Science users. For the layman, it describes the "average" contribution of that Feature Value to the Predicted Response Rate. |
| Bid Impact | 1.27748963, 0.93687577 | The marginal multiplicative contribution (> 1.0 increases, < 1.0 decreases) of a feature value (e.g. a particular exchange, site, or creative) to a Predicted Response Rate (norm_rr). The bid_impact is helpful for understanding pricing impact on that feature value, and for calculating CPM. An oversimplification (the baseline score is also impacted by a calibration coefficient): norm_rr = [campaign baseline score] x [bid impact of feature value #1] x [bid impact of feature value #2] x ...; CPM = norm_rr x [strategy's goal value] x 1,000. A worked example follows this table. T1 logs the predicted Action Rate of specific impressions in the Log Level Data Service under a field called "norm_rr". Please speak to your MediaMath Account representative for more information about accessing this data. |

Note: Only 1 value will appear, not a comma-delimited list (although feature_value may be a pipe-delimited list).
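To make the Bid Impact arithmetic concrete, the following worked example uses the sample Bid Impact values from the table above; the campaign baseline score and goal value are hypothetical placeholders, not figures from a real model.

```
### hypothetical worked example - baseline score and goal value are illustrative
### norm_rr = [campaign baseline score] x [bid impact #1] x [bid impact #2] x ...
norm_rr = 0.0005 x 1.27748963 x 0.93687577 ≈ 0.000598

### CPM = norm_rr x [strategy's goal value] x 1,000
CPM = 0.000598 x 10.00 x 1,000 ≈ $5.98
```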
```
curl -i -X GET \
  https://api.mediamath.com/reporting/v1/std/brain_feature_value/meta
```
Responses
Body (*/*): object
By Hour
Request
<!-- theme: danger -->
This API is deprecated and will be removed soon
The new API docs can be found here: https://apidocs.mediamath.com/docs/reporting-api-v2
Standard performance metrics broken out by standard dimensions, available in precise time windows - down to the hour - with the option to aggregate by hour or day.
Query
dimensions (string, required)
Selects the dimension fields to use in grouping rows.
This should be a comma-separated list of dimension field names.
The position of the fields in the output is not guaranteed to match the parameter value. Always use the header line from the response body to identify the position of each column in the output.
metrics (string)
Controls the selection of metric fields.
This should be a comma-separated list of metric field names.
The position of the fields in the output is not guaranteed to match the parameter value. Always use the header line from the response body to identify the position of each column in the output.
filter (string, required)
Controls filtering against dimension fields. The filtering is performed before row grouping. See the Predicate Parameters section of https://apidocs.mediamath.com/docs/api/aa580375b0b65-report-data for more information about interpretation and accepted formats.
having (string)
Controls filtering against metric and dimension fields. The filtering is performed after row grouping and metric calculation. See the Predicate Parameters section of https://apidocs.mediamath.com/docs/api/aa580375b0b65-report-data for more information about interpretation and accepted formats.
order (string)
Controls the sorting of output rows.
This should be a comma-separated list of field names. Each field name may optionally be prefixed with a minus sign to indicate descending order. Otherwise, ascending order is assumed.
The first field has the most significance in sorting.
page_limit (integer, >= 1)
Controls the maximum number of rows of data.
This parameter is required if page_offset is one or greater.
Default: 100
page_offset (integer, >= 0)
Controls how many pages to skip.
precision (integer, 0..8)
Controls how many digits to display to the right of the decimal point for float types.
Default: 8
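As a sketch of how the query parameters above combine, the following shows one possible query string for this report. The dimension, metric, and filter field names are illustrative placeholders; consult the report's /meta structure and the Predicate Parameters documentation for the exact field names and filter syntax.

```
### illustrative query string - field names are placeholders
### (shown on multiple lines for readability; send as a single query string)
dimensions=campaign_id,campaign_name
&metrics=impressions,total_spend
&filter=campaign_id%3D123456
&order=-impressions
&start_date=2016-04-12&end_date=2016-04-13
&time_rollup=by_hour
&page_limit=100&page_offset=0
&precision=2
```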
Headers
Cookie (string, required)
Default: adama_session=<<sessionid>>
Accept-Encoding (string)
If the client supports the compression methods also supported by the server (currently gzip and deflate), the response data set (text/csv) will be returned in a compressed format. The metadata and error responses will not be compressed.
In order to receive the data in compressed format, the client must use the HTTP request header Accept-Encoding, and enumerate the preferred compression methods (see RFC 2616). If the header Accept-Encoding is not present in the request, or if it contains only compression methods not supported by the system, the data will be returned in an uncompressed format.
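For example, a client that can handle gzip would request compressed output as in the sketch below; the report path and remaining query parameters are placeholders, and authentication headers are omitted.

```
### request the data set compressed; the server returns uncompressed data
### if it supports none of the listed methods
curl -i -X GET \
  -H 'Accept-Encoding: gzip, deflate' \
  "$API_BASE/<report>?..."
```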
If-Modified-Since (string)
If the data has not changed, the server responds with a 304 Not Modified status code and an empty response body.
If the same request was issued and the data had been updated since the specified time, the server responds with the data requested and a new Last-Modified header.
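This is the standard HTTP conditional GET. Assuming the If-Modified-Since header described above, such a request might look like the following sketch (placeholders as before):

```
### return the data only if it has changed since the given time; otherwise expect 304
curl -i -X GET \
  -H 'If-Modified-Since: Tue, 12 Apr 2016 18:00:00 GMT' \
  "$API_BASE/<report>?..."
```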