Created: 3 years, 3 months ago by Lei Lei
Modified: 3 years, 3 months ago
Reviewers: benjhayden
CC: catapult-reviews_chromium.org, tracing-review_chromium.org
Target Ref: refs/heads/master
Project: catapult
Visibility: Public
Description
Log the detailed results for each metric, for diagnosis and debugging.
There are several outliers in our results
(https://chromeperf.appspot.com/report?sid=21cf7b214eb1004ed8b109f9edb40a1667e1d5eb308b7f2ea9040d25f07913a0),
so we would like to log the detailed results for debugging.
BUG=764035
Patch Set 1 #
Messages
Total messages: 7 (2 generated)
Description was changed: the BUG line was updated from "BUG=chromium:#764035" to "BUG=764035".
leilei@chromium.org changed reviewers: + benjhayden@chromium.org
On 2017/09/15 at 01:17:29, leilei wrote:
>

You can already access the individual samples via the Buildbot Status Page (xr.webvr.static on Android, json.output link):
https://uberchromegw.corp.google.com/i/internal.chrome.fyi/builders/VR%20Perf...
Search for test-slow-render?canvasClickPresents=1&renderScale=1.5 and webvr_frame_time_javascript_avg. Close to the bottom, you'll find the value from the chart:

"test-slow-render?canvasClickPresents=1&renderScale=1.5": {
  "important": false,
  "improvement_direction": "down",
  "name": "webvr_frame_time_javascript_avg",
  "page_id": 1,
  "std": 0.0,
  "type": "list_of_scalar_values",
  "units": "ms",
  "values": [
    1301.8776978417275
  ]
}

I think what you really want is the trace, but I'm not sure why it wasn't uploaded.

Maybe try running the benchmark locally several times until you find an unusually high value? run_benchmark serializes the traces to local disk either when you don't specify any --output-format or when you include --output-format=html.
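The chart JSON entry above can also be read programmatically. A minimal sketch, assuming the nesting inferred from the snippet (chart name, then story, then the entry shown above; the `samples_for` helper is hypothetical, not part of Telemetry):

```python
import json

# An excerpt shaped like the json.output chart JSON discussed above.
chart_json = json.loads("""
{
  "charts": {
    "webvr_frame_time_javascript_avg": {
      "test-slow-render?canvasClickPresents=1&renderScale=1.5": {
        "important": false,
        "improvement_direction": "down",
        "name": "webvr_frame_time_javascript_avg",
        "page_id": 1,
        "std": 0.0,
        "type": "list_of_scalar_values",
        "units": "ms",
        "values": [1301.8776978417275]
      }
    }
  }
}
""")

def samples_for(chart_name, story):
    """Return the raw sample list and units for one metric/story pair."""
    entry = chart_json["charts"][chart_name][story]
    return entry["values"], entry["units"]

values, units = samples_for(
    "webvr_frame_time_javascript_avg",
    "test-slow-render?canvasClickPresents=1&renderScale=1.5")
print(values, units)  # [1301.8776978417275] ms
```

Note that "values" here holds the already-recorded samples for one run; whether a run records one sample or many depends on the metric.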
On 2017/09/16 20:32:04, benjhayden wrote:
> [snip]

Yes, the trace is even better for diagnosing the outliers in our results.

The value in the chart JSON is not helpful, since it is already the aggregated result; we would like to see each data point, not the average. The trace is exactly what we are looking for: it will tell us when those outliers happened, e.g. whether they occur at the beginning of the test. Running the benchmark locally is time-consuming, since the outliers don't happen every time; I tried to reproduce it locally with 4-5 runs but could not. It would be nice to see the trace from continuous runs.

I compared our command for running benchmarks with the one on the chromium.perf waterfall; what we are missing is the --upload-results flag. However, I cannot find the code that uses that flag on cs.chromium.org. Does the --upload-results flag upload the trace to the perf dashboard?
On 2017/09/19 at 00:49:20, leilei wrote:
> [snip]

Traces are uploaded to one of a few cloud storage buckets when --upload-results is specified:
https://cs.chromium.org/chromium/src/third_party/catapult/telemetry/telemetry...
https://cs.chromium.org/chromium/src/third_party/catapult/common/py_utils/py_...

When you navigate to a timeseries in chromeperf for a specific story (as opposed to a summarized metric for an entire benchmark) and click a point in the chart, the tooltip contains a link to "View trace from this run".
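Putting the flags from this thread together, a waterfall-style invocation might look like the sketch below. Only --upload-results and --output-format are confirmed by the discussion above; the script path, benchmark name, and --browser value are illustrative assumptions.

```python
# Sketch: assemble the run_benchmark command line discussed in this thread.
# The script path and benchmark name are illustrative, not verified.
cmd = [
    "tools/perf/run_benchmark",     # assumed location of the runner script
    "xr.webvr.static",              # benchmark named earlier in the thread
    "--browser=android-chromium",   # assumption: Android browser selector
    "--output-format=html",         # also serializes traces to local disk
    "--upload-results",             # uploads traces to a cloud storage bucket
]
print(" ".join(cmd))
```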
On 2017/09/19 05:01:00, benjhayden wrote:
> [snip]

Thanks for the information! I will delete this patch and create another one to enable the --upload-results flag for our benchmarks.