Consider using desaturated (pastel) colors by default:
(see Tufte’s “Visual Explanations” p 76 or “Envisioning Information” p 91: http://openlearn.open.ac.uk/mod/resource/view.php?id=179550 )
Professional maps use muted pastel colors for areas of large coverage, and so should we.
There are three fundamental dimensions of color, and they are not Redness/Blueness/Greenness. Humans find meaning in transitions along Hue (red-green-blue), saturation (bright-pastel-gray) and value (white — earthy — black), and it’s best to constrain each data dimension thoughtfully to a color dimension.
This is a beautiful tool:
The three bars on the top far right control Hue, Saturation and Value respectively — mess around with them and see how the colors change. I made a sample palette:
* “Idle time” is empty — it should be transparent, or the background color.
* “nice” time is “background, innocuous, calm” — it should be a very desaturated sky blue.
* “User time” is what you care about. It’s assumedly spent doing what you want it to do, so go with green. It should be prominent but not pushy, so make it semi-saturated.
* “System time” is usually a problem — you bought your machine to run programs, not the kernel. So it’s red, and it’s a bit louder.
* We want to see the competition between user time and system time, so start with two well-separated colors.
So that’s hue and saturation: two axes, conveying two variables (hue=segment, sat=urgency). As for value: if this thing sits in front of you it should be audible but not deafening. Let’s give each color the same value (brightness), and let’s set it to 70%, which is the foreground color of a safari window (90%, the background color, seemed intense).
This won’t be your actual palette, but if you start thinking of the colors in perceptual rather than RGB terms your program will be prettier and more meaningful. Someone with a better eye than mine could certainly fix the balance of hues…
So all that blather and we still have red/green/blue: my point is that you should choose three tasteful shades that bear meaning and chuck the color controls. A future version could intensify each segment’s color saturation as its system burden climbs…
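If it helps, here’s a rough sketch of the palette in perceptual (HSV) terms, in Python. The exact hue and saturation numbers are my guesses to be tuned by eye, not a real palette; only the “same value for every color, around 70%” rule comes from the scheme above:

```python
import colorsys

def pastel(hue_deg, saturation, value=0.70):
    """Convert an HSV triple to 0-255 RGB; value is pinned at 70% by default."""
    r, g, b = colorsys.hsv_to_rgb(hue_deg / 360.0, saturation, value)
    return tuple(round(c * 255) for c in (r, g, b))

# Hypothetical starting palette: hues/saturations are illustrative guesses.
palette = {
    "nice":   pastel(210, 0.15),  # very desaturated sky blue: background, calm
    "user":   pastel(120, 0.45),  # semi-saturated green: prominent, not pushy
    "system": pastel(  0, 0.60),  # red, a bit louder
}
```

Bumping the `saturation` argument as a segment’s burden climbs gives you the “urgency” axis for free, while hue keeps identifying the segment.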
- I don’t find “Size in dock” or “Bar Width” that interesting. A principle of information design is to “Maximize your data ink”. The graph can fill its full dock slot, so it should; and the bar gaps carry no information (the bottom of each waterfall shows its axis) so they should be omitted altogether. Now, a control to occupy two or three dock slots would be interesting, but I bet there’s better uses of your time than crawling through the underbelly of the API.
For resizing in general, however, the best interfaces are direct interfaces:
You can toss out that whole dialog box if you put a dashboard-style (i)nformation-please button in the corner to expose adjustment handles.
The dock actually does both: the preference panel gives you a slider which resizes the dock in-flight, while the little ribbed section between file and app gives you an expert feature — it resizes the dock with immediate feedback.
For direct manipulation of the time: the faster you sample the more space one second occupies on the screen: at 30 chunks and 0.5 S/s the graph is 60 seconds wide, and at 30 chunks and 2 S/s the graph is 15 seconds wide. So, show a ‘scale bar’ that is 10s long, with handles to resize it, and report on the sampling rate:
0.5 S/s |—-|
2 S/s |—————-|
Something like that… The intuitive variables are, I think, “How much time does this graph show” and “How often does it move over a slot”: so, not “Bar Width” but “Graph Duration”.
This adjustment should be discrete: snap to exact rates so I can map the graph to a number. 0.1, 0.5, 1s, 2s, 5s, 10s…, with graph duration ranging from “one minute” through “ten minutes” to “1 hour” or something.
If you’re showing a refresh rate (my vote) then report a rate: “2 S/s” or “2 Samples per second” — if you’re showing “sample duration” then report a period: “every 0.5 seconds” (right now you say rate but show period).
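The arithmetic behind the scale bar is just chunks divided by rate. A minimal sketch, assuming a fixed 30 on-screen chunks and my guessed list of snap points (names are illustrative, not from the actual app):

```python
CHUNKS = 30  # assumed number of visible slots in the graph

def graph_duration(samples_per_second, chunks=CHUNKS):
    """Seconds of history on screen: each chunk holds one sample."""
    return chunks / samples_per_second

def snap_rate(requested_rate, allowed=(0.1, 0.2, 0.5, 1, 2, 5, 10)):
    """Snap a requested rate to the nearest discrete step, so the
    scale-bar handles always land on an exact, reportable number."""
    return min(allowed, key=lambda r: abs(r - requested_rate))
```

So dragging the handle to roughly 0.7 S/s would snap to 0.5 S/s and report a 60-second graph.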
- You should redraw the graph when its scale (refresh rate) changes.
An experiment: draw system time down from the top, user time up from the bottom, and nice time atop that. (“Enforce Comparisons” — Tufte.) As the graph is currently set up you can’t directly compare user time, since only the bottom segment starts from a fixed baseline. Dunno if this will be more or less clear.
I have a quad-core machine, and I hear there are folks out there with octo-cores — I don’t think the stacked graphs stay meaningful at that point.
What do we really want to know? I’d like to know 1) the overall system load; 2) user time vs system time within overall load; 3) the variance across CPUs (is a 25% load one 100% process and three idle cores, or four 25% processes?); 4) how much of the overall load is coming from the piggiest process (judging pigginess based on “fraction of the total integrated non-idle time shown on the graph” — basically, how much of the ink is each little piggie’s fault?)
I don’t really know how to do #4, but we do have a couple ways to do 1–3 and maximize data ink. Draw ONE waterfall graph, and segment each User, System, Nice bar into (#cores) segments (so, four segments for my quadcore). Then experiment: you can draw each core with alternating shades (remember, we still have the ‘value’ dimension on the shelf), or you could put a 1px bar separating each segment. You could even just alternate (user-system-nice#1)/(user-system-nice#2)/(user-system-nice#3)/(user-system-nice#4) — the different colors mark the cores — but I bet that looks soupy in the end.
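To make the single-waterfall idea concrete, here’s a sketch of how I’d flatten per-core readings into one bar’s segments. The data layout and names are mine, not the app’s — it just shows that grouping by category first, then core, keeps both the category comparison and the per-core variance visible:

```python
def stacked_segments(per_core):
    """per_core: list of (user, system, nice) fractions, one tuple per core.

    Returns [(category, core_index, height)] for one stacked bar, with each
    core's contribution scaled by 1/n so the full bar height equals the
    mean load across cores (never more than 100%)."""
    n = len(per_core)
    segments = []
    for cat_idx, category in enumerate(("user", "system", "nice")):
        for core, loads in enumerate(per_core):
            segments.append((category, core, loads[cat_idx] / n))
    return segments
```

For the quad-core “one 100% process, three idle cores” case, this yields a bar a quarter full, all of it in the user/core-0 segment — which is exactly the variance signal #3 asks for. Alternating shades (the spare ‘value’ dimension) or 1px rules would then separate the core segments.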
I hope you don’t mind this ridiculously nitpicky post — I only went into this detail because I think what you’re doing is neat.