Idea for local performance monitoring tool?

Hey all, I started prototyping an idea for a new performance tool and would love your thoughts + advice.

Problem

When I’m working locally, it’s really hard to tell whether my code changes actually made any difference.

Goal

As a developer, I want to see the performance impact of my changes in realtime as I’m actively working on a website.

Current Solutions

  • Run multiple Lighthouse tests and compare.
  • Manually check DevTools’ Performance tab and compare.
  • Deploy to a dev/staging/QA environment and run PSI or WPT multiple times and compare.

:cold_sweat: Oh, and don’t forget your baseline! Remember to repeat your tests before and after your code changes so you can actually measure the difference… I forget this all the time.

Automated tests on Pull Requests are helpful, but still, the feedback loop is so delayed. I want more realtime feedback.

:question: What other tools already do this? Or at least come close to the idea?

Idea: Monitoring for local development

Here’s a prototype I’ve started hacking on to get some ideas flowing:

  • Left side: An example web page with a basic heading, picture, and paragraph. Every time I reload the page, metrics are sent over to the monitoring app on the right.
  • Right side: A prototype monitor collects metrics from the page I’m developing and shows them in a line chart where I can easily see the impact of my changes over time.

Source code

If you want to try out what I have so far, here’s the GitHub repo for “perfwatch”: GitHub - tannerhodges/perfwatch: Monitor performance changes during development.

Features I want

  • Connection: Easily connect to any website I’m working on. Automatic if possible, minimal setup otherwise.
  • File detection: Detect file changes on my local filesystem, so I can see which code changes had what impact. Ideally, I’d like to be able to select a folder and say, “These are the files for my website. Let me see when these files changed in the timeline.” And then in the monitor, I could see the files and code changes annotated somewhere on or below the timeline.
  • Notifications: Notify me of any large differences in metrics. For example, if I have the monitor running in the background, I want a push notification that a metric suddenly jumped 3 standard deviations.
  • Compare ranges: I want to be able to easily compare two ranges of metrics and report what the difference was. For example, if I start working on a change, I want to mark my first 10 page loads as the baseline, then make my code change, then do another 10 page loads and automatically calculate what % improvement I’ve made. Ultimately, I just want to know was my change significantly better or worse.
  • Check current device/connection settings: To help qualify any statements people make using this tool, I’d also like to detect and list somewhere the current environment: What device/OS/browser/connection speed do you currently have? ← Might be interesting to treat this like file changes too and notify when, for example, a drop in connection affects metrics for a particular page load.
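The "Notifications" and "Compare ranges" features above mostly reduce to simple statistics over two sets of samples. A minimal sketch in plain JavaScript (all names here are mine, illustrative only, not from the perfwatch repo):

```javascript
// Illustrative helpers for the "Notifications" and "Compare ranges" ideas.

function mean(xs) {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

function stddev(xs) {
  const m = mean(xs);
  return Math.sqrt(xs.reduce((acc, x) => acc + (x - m) ** 2, 0) / xs.length);
}

// "Notifications": flag a new sample that lands more than `k` standard
// deviations from the baseline mean (k = 3 per the example above).
function isOutlier(baseline, sample, k = 3) {
  return Math.abs(sample - mean(baseline)) > k * stddev(baseline);
}

// "Compare ranges": percent change between two ranges of page loads.
// For timing metrics, negative means an improvement.
function percentChange(baseline, candidate) {
  return ((mean(candidate) - mean(baseline)) / mean(baseline)) * 100;
}
```

With ~10 page loads per range, the standard deviation estimate is noisy, so this is directional feedback rather than a rigorous significance test, which matches the goal.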

:sweat_smile: I realize this is a lot. (I haven’t even mentioned Storage, Custom Metrics, and Reports as features…)

Challenges

I’m not 100% sure how to make some of these features work. In particular, I feel like the connection and file detection features are my biggest challenges:

  • How can this monitor connect to websites? (Injected script? Manual snippet?)
  • How can it access and detect local file changes? (File System API? Electron app?)
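For the connection question, the "manual snippet" route could look something like this: a few lines pasted into the page under development that observe a metric and beacon it to the local monitor. This is a rough sketch under assumptions of mine (the endpoint, port, and payload shape are made up, not from the repo):

```javascript
// Sketch of a "manual snippet" connection: paste into the page being
// developed. Endpoint and payload shape are hypothetical.
const ENDPOINT = 'http://localhost:7777/beacon'; // made-up local collector port

// Build the JSON body for one metric sample.
function metricPayload(name, value, url) {
  return JSON.stringify({ name, value, url, ts: Date.now() });
}

// Report a sample to the local collector. sendBeacon is fire-and-forget
// and survives page unloads, which matters for load-time metrics.
function sendMetric(name, value) {
  navigator.sendBeacon(ENDPOINT, metricPayload(name, value, location.href));
}

// Observe Largest Contentful Paint; the last entry seen is the final
// LCP candidate for this page load.
if (typeof PerformanceObserver !== 'undefined') {
  new PerformanceObserver((list) => {
    const entries = list.getEntries();
    sendMetric('LCP', entries[entries.length - 1].startTime);
  }).observe({ type: 'largest-contentful-paint', buffered: true });
}
```

In practice the web-vitals library already handles the tricky observation details (backgrounding, final values), so the snippet could just wrap it and point the beacons at localhost.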

Anyhow, that’s the idea: local performance monitoring. Wadda ya think?

:pray: Appreciate any thoughts, advice, or feedback y’all might have!

I personally don’t mind re-running tests with Lighthouse or the Network/Performance tabs all the time. I have a feeling it would take a lot of effort to create a tool that I would prefer over the tools mentioned.

I wonder how running a tool like that locally would affect the performance itself. (Wouldn’t this monitoring be heavy to run with all the features mentioned? Then again, I suppose the overhead of the browser itself might be comparable.)

What are the pros of using it over a browser extension that shows the metrics in real time?

I think with a lot of time and a big budget it would be a great tool. I’m not sure how it would do as an indie project, though, because the challenges seem quite complex.


I worry about a couple of factors that I see confounding teams in development when it comes to performance:

  • dev server setups have very different latency and protocol behaviour to live sites
  • many dev servers send extra hot-reloading gunk that confounds analysis
  • perf impacts may be obscured by dev builds of various dependencies (think framework dev bundles)

…which is why I’m interested in helping teams get their apps into a state where you can run WPT against each PR in a live-like environment. It’s not perfect, but it both makes the analysis shareable and still provides a fast-enough inner loop for a lot of changes. WDYT?


I wonder how running a tool like that locally would affect the performance itself (wouldn’t this monitoring be very heavy…?)

I don’t think so—at least I hope not. I figure it would be like “local RUM”: add a small snippet of JS to your site that sends data to a separate app (all local). In theory, it’d be the same overhead as other RUM scripts. The rest of the processing (chart, math, etc.) would all be handled separately. (Unless that’s what you mean, that the additional CPU from the app might affect measurements of the page being monitored?)

What are the pros of using it over a browser extension that shows the metrics in real time?

I’d actually love this to be a browser extension. That way it’d be light, easy to install, etc.

The biggest pro/difference is showing the metrics in a timeline (where each dot in the line is a measurement from previous page loads) instead of only showing metrics for the current page load.

(^This could really—and maybe should?—just be a PR into GitHub - GoogleChrome/web-vitals-extension: A Chrome extension to measure essential metrics for a healthy site…)

But yeah, the timeline makes it easy for me to see the impact of my changes. (Does the line go up or down?) Ultimately, that’s what I’m looking for: directional feedback. (Did my change make things better or worse?)

The only reason I’d jump from browser extension to app is file system access. It’s probably unnecessary, but I feel like if a tool like this could tell me, “Hey, this code change you just did triggered a spike in LCP,” that’d be amazing. Giving me directional feedback + a lead on the cause = way faster debugging.

I think with a lot of time and a big budget it would be a great tool, not sure how it would do as an indie project

:crossed_fingers: Hopefully I can simplify enough to make this indie-feasible.

I’m interested in helping teams get their apps into a state where you can run WPT against each PR in a live-like environment.

:100: My kingdom for this on every project.

[It] still provides a fast-enough inner loop for a lot of changes.

:grimacing: Selfishly, I think I want an even faster loop…

Really, I just want directional feedback. Something that tells me ASAP whether I’m making things faster or slower before I do a full WPT analysis.

This is probably a poor analogy, but I feel like what I want is almost a jshint for performance (immediate feedback) where WPT is more like Playwright end-to-end tests.

jshint : Playwright :: this tool : WebPageTest

End-to-end tests are more accurate than linters, but they’re also more demanding. The linter is cheap, fast, and passive. I barely have to think about it. Just code and catch issues as I notice them. I think that’s partly what I’m looking for here: something I can pop open and let drift in and out of focus while I work. (The important thing is I still use both tools!)

:thinking: But confounding is a good callout…

My hope is that, even if the local environment differs greatly from prod, the impact of code changes will still be directionally consistent, so that feedback from local dev is still relevant (just like running local Lighthouse reports to validate changes).

I’ll have to try this prototype out on different frameworks and see what happens. (It would be awesome to compare the results against more rigorous testing tools, à la Lighthouse’s Lantern accuracy analysis.)

Got a prototype working:

  • Electron app.
  • Connects by spinning up a local server + adding a <script> to your website.
  • Script sends beacons to the local server, saves the data in localStorage, and renders it in a chart.
  • Detects file changes in your project folder via Node (ignores hidden files and node_modules).

You can try it out by downloading the macOS app perfwatch-darwin-x64-0.2.1.zip from: