Storing Multiple Metrics in a Single Time Series
john.sanda Nov 15, 2014 4:58 PM

There has been discussion recently about storing multiple metrics in a single time series. I was talking about this yesterday with tsegismont while we were looking at some sample output from cAdvisor that eventually gets stored in InfluxDB. Here is a simpler example from the InfluxDB docs that illustrates storing multiple metrics in a time series,
[
  {
    "name": "response_times",
    "columns": ["code", "value", "controller_action"],
    "points": [
      [200, 234, "users#show"]
    ]
  }
]
If we wanted to transpose this into a format supported by RHQ Metrics, we would have three separate metrics like response_times.code, response_times.value, and response_times.controller_action. This is fine for writes, but it does incur some overhead for reads. Three separate queries are needed since each metric is stored in its own partition. If we are usually querying this data together, we want to optimize for that read path, ideally reading from a single partition.
Here is a simplified version of the data table in the schema-changes branch,
CREATE TABLE data (
    metric text,
    time timeuuid,
    attributes map<text, text> static,
    value double,
    PRIMARY KEY (metric, time)
);
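To make the read overhead concrete: with this table, fetching the transposed response_times data back requires three separate queries, one per partition, for example,

SELECT time, value FROM data WHERE metric = 'response_times.code';
SELECT time, value FROM data WHERE metric = 'response_times.value';
SELECT time, value FROM data WHERE metric = 'response_times.controller_action';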
Yesterday's discussion got me thinking about something I had previously considered,
CREATE TABLE grouped_data (
    group text,
    metric text,
    time timeuuid,
    attributes map<text, text> static,
    value double,
    PRIMARY KEY (group, time, metric)
);
With this schema, response_times would be the group, and code, value, and controller_action would be the values of the metric column (we will ignore for the moment that controller_action is text and not numeric).
INSERT INTO grouped_data (group, time, metric, value) VALUES ('response_times', now(), 'code', 200);
INSERT INTO grouped_data (group, time, metric, value) VALUES ('response_times', now(), 'value', 234);
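Since both rows target the same partition, the two inserts can be combined into a single batch. Here is a sketch; note that each call to now() generates a distinct timeuuid, so in practice the client would generate one timeuuid and bind it (shown as ?) to both statements so the rows share the same time,

BEGIN BATCH
    INSERT INTO grouped_data (group, time, metric, value) VALUES ('response_times', ?, 'code', 200);
    INSERT INTO grouped_data (group, time, metric, value) VALUES ('response_times', ?, 'value', 234);
APPLY BATCH;

And the read path is a single-partition query,

SELECT time, metric, value FROM grouped_data WHERE group = 'response_times';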
Now we can store the data points with a single write using a batch update, and we can fetch the data by reading from only a single partition. After giving this some thought, though, I wound up with some concerns that make me think a different approach (which I believe tsegismont was suggesting) would be better. Here is the simplified schema,
CREATE TABLE data (
    metric text,
    time timeuuid,
    attributes map<text, text> static,
    value double,
    values map<text, double>,
    PRIMARY KEY (metric, time)
);
The only difference here from the first data table definition is the addition of the values map. If we want to store multiple metrics within a single time series, then we write to the values column instead of the value column. Let's consider cpu usage metrics as an example. I am collecting cpu usage for my 4 cores. Since I will likely collect, write, and read this data together as a group, it is a perfect candidate for the values map. Here is an example of inserting data,
INSERT INTO data (metric, time, values) VALUES ('myserver.cpu', now(), { 'cpu0': 100, 'cpu1': 100, 'cpu2': 100, 'cpu3': 100 });
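Reading the grouped values back is then a single query against one partition, e.g. the latest ten samples,

SELECT time, values FROM data WHERE metric = 'myserver.cpu' ORDER BY time DESC LIMIT 10;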
I am still working through some of the details on how to expose this in the APIs. There are two scenarios I am focusing on right now. First, I know from the outset that I want to group metric data in a single time series. There needs to be something in the APIs (both REST and Java) to indicate that the metrics should be grouped. In the second scenario, I have some existing metrics that are not grouped, and I decide that I want to group them. We will need to expose a "grouping" function which creates a new time series. In this case, I think we would keep the original metric time series and create the new grouped one, which would be implicitly updated any time the server stores data for one of the original metrics. The nice thing here is that you still retain the ability to query the individual metrics.
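To make the second scenario concrete, here is a rough sketch of what the implicit write-through could look like at the storage layer. This is purely illustrative and not a settled design; the metric names and the shared timeuuid bind marker (?) are assumptions on my part,

-- hypothetical: storing a data point for an original metric (myserver.cpu0)
-- also updates the values map of the derived group (myserver.cpu);
-- ? is one client-generated timeuuid bound to both statements
BEGIN BATCH
    INSERT INTO data (metric, time, value) VALUES ('myserver.cpu0', ?, 100);
    UPDATE data SET values['cpu0'] = 100 WHERE metric = 'myserver.cpu' AND time = ?;
APPLY BATCH;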