We’ve recently launched our personal productivity software, which already has over 500 users. Although it’s still a beta version, Kiply lets you track your activity so you can see how you really spend your time on your computer.

With Kiply you automatically record your activity in real time and keep it completely private. You can view your activity either on the web or in the desktop app, as well as create new projects and track your progress against your goals.

You can download the Kiply beta from our website, http://www.kiply.com, at no cost. Please share your thoughts about it! :)

Surprising! Best to email support@beeminder.com with a link to the Beeminder goal in question.

Could someone help me figure out how to get my actual TagTime values to plot in Beeminder? I have TagTime linked to a Beeminder goal, and it posts the tags themselves to the goal; however, the value is still zero. Is this an easy fix you can point me to? Thanks.

It catches back up when the computer is turned on and automatically tags all the missing pings as “off” and “afk”.

Aha! It’s not that there’s a missing parameter, it’s that I’m making a hidden assumption. I think it will be clearer with an extreme example:

Suppose my TagTime gap is 45 minutes, per usual, and I work for 45 minutes. Despite the googol-to-one odds, I manage to get pinged 100 times during that time. Half the pings are for activity X and half are for activity Y. So *given* the highly unlikely ping sequence, the best estimate of my time is 22.5 minutes on activity X and 22.5 minutes on activity Y.

That’s much more accurate than my formula above, which would conclude I spent thousands of minutes on each activity (50 pings’ worth, i.e., 50 × 45 = 2250 minutes, each).

BUT, if pings actually happen every 45 minutes — and in the long run, on average, they do — then both methods agree, and you don’t have to worry about the denominator.

So you’re quite right, but over the course of days and weeks it doesn’t matter, and TagTime is only accurate over days or weeks anyway, where the average gap between pings will be very close to 45 minutes.

So you might as well just treat each ping as representing 45 minutes for the tagged activity, regardless of how many pings didn’t match or how long you were tagging your time.
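To make the comparison concrete, here’s a tiny sketch in Python using the made-up numbers from the extreme example above (45-minute gap, 45 minutes of work, 100 pings split 50/50 between activities X and Y):

```python
GAP_MIN = 45.0       # average minutes between pings
elapsed_min = 45.0   # actual time worked in the example
pings_x, pings_y = 50, 50
total_pings = pings_x + pings_y

# Conditional estimate: split the *known* elapsed time by ping fraction.
est_x_conditional = elapsed_min * pings_x / total_pings   # 22.5 minutes

# Ping-count estimate: each ping stands for one average gap.
est_x_naive = pings_x * GAP_MIN                           # 2250 minutes

print(est_x_conditional, est_x_naive)
```

Given the freak ping sequence, the conditional estimate is right; in the long run, where pings really do average one per gap, the two agree.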

PS: You’re quite right (in your other comment below) that you can model this as many consecutive Bernoulli trials: every second there’s a tiny, independent probability of a ping happening. The equations we’re using actually compute this in the limit where the Bernoulli trials happen continuously with infinitesimal probabilities, but still average one ping every 45 minutes.
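A quick simulation makes that limit plausible. This is just a sketch, assuming a 45-minute mean gap and one Bernoulli trial per simulated second; over a long run the empirical mean gap comes out close to 45 minutes:

```python
import random

random.seed(0)
GAP_SEC = 45 * 60            # mean gap: 45 minutes, in seconds
P_PER_SEC = 1.0 / GAP_SEC    # tiny per-second ping probability

# One Bernoulli trial per simulated second; record the gaps between pings.
gaps = []
last_ping = 0
for t in range(1, 2_000_000):        # about 23 simulated days
    if random.random() < P_PER_SEC:
        gaps.append(t - last_ping)
        last_ping = t

mean_gap_min = sum(gaps) / len(gaps) / 60
print(mean_gap_min)   # close to 45
```

The gaps themselves come out approximately exponentially distributed, which is exactly the memoryless Poisson-process behavior TagTime relies on.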

Forgot to mention: the z=1.96 in the Wolfram Alpha link is the hard-coded normal-distribution quantile for a 95% confidence interval.

Actually, I think you should model pings as independent Bernoulli trials. You have an activity that is happening with some probability p, and TagTime occasionally takes a sample, which is either a yes or a no. Given the total number of samples you took, and how many “yes” samples you saw, you want a confidence interval for p.

This is exactly the confidence interval for a binomial distribution, and there are many formulas to estimate this: http://en.wikipedia.org/wiki/Binomial_proportion_confidence_interval

I used the Wilson score interval with the example earlier in this thread: 13 out of 58 samples were “yes” samples, for a sample p of 13/58 = 0.224. Plugging that into Wolfram Alpha:

I get a confidence interval for p of {.1359, .3466}. Multiply by (gap=0.75) * (n=58) for a confidence interval for time spent of {5.912, 15.077}.
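For reproducibility, here’s the same Wilson-interval computation as a Python sketch (the 13-of-58 counts and the 0.75-hour gap are the numbers from this thread):

```python
from math import sqrt

def wilson_interval(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (z=1.96 for 95%)."""
    phat = successes / n
    denom = 1 + z * z / n
    center = (phat + z * z / (2 * n)) / denom
    half = (z / denom) * sqrt(phat * (1 - phat) / n + z * z / (4 * n * n))
    return center - half, center + half

lo, hi = wilson_interval(13, 58)
print(lo, hi)                                    # ≈ 0.1359, 0.3466
gap_hours, n = 0.75, 58
print(lo * gap_hours * n, hi * gap_hours * n)    # ≈ 5.91, 15.08 hours
```

This reproduces the {.1359, .3466} interval for p and the roughly {5.91, 15.08}-hour interval for time spent.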

Those other formulas for the Poisson confidence interval seem to relate to sampling a Poisson variable to estimate its parameter lambda. But with TagTime, the only Poisson process is the one that decides “do I ping in this second or not?”, and its parameter lambda is already known (it’s the “gap” setting).

In contrast, given that a ping already means TagTime has decided to ping (how’s that for tautological), the Poisson aspects are irrelevant. Your response to the ping is a Bernoulli sample of whether your desired tag is present or not. The unknown parameter is the true probability p of your tag being present at any given point in time. The Poisson properties only come into play when scaling p into real time (supposing the true probability really is such-and-such, and that I was sampling on average every 45 minutes, how much time was I really spending on that activity?).

It seems that there is a missing parameter. Shouldn’t you need both the number of pings matching the condition and the number of pings not matching it? I only see one of those parameters here (n, which I think refers to the former only).

Consider situation A: I have 1 week of pings, 10 of which are for activity X. Situation B: I have 10 weeks of pings, 100 of which are for activity X. Both sample means are the same, but the confidence interval for B should be much narrower.
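That intuition checks out numerically. A sketch, assuming a 45-minute gap (roughly 224 pings per week) and the simple normal-approximation interval rather than Wilson, just to show the scaling:

```python
from math import sqrt

def halfwidth(successes, n, z=1.96):
    # Normal-approximation (Wald) half-width for a binomial proportion.
    p = successes / n
    return z * sqrt(p * (1 - p) / n)

w_a = halfwidth(10, 224)     # situation A: 1 week, 10 pings on X
w_b = halfwidth(100, 2240)   # situation B: 10 weeks, 100 pings on X
print(w_a / w_b)             # sqrt(10) ≈ 3.16: B's interval is ~3x narrower
```

With the same sample proportion, ten times the pings shrinks the interval by a factor of sqrt(10).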

I am not strong in statistics, so I can’t really help beyond that, though.

Ooh, yes, TagTime tracks exactly how long you spend using TagTime itself. It doesn’t even need any special accounting to do so. I just use the tag “ap” (for “answering pings”), and if a new ping pings while I’m answering the last one, the new ping is tagged “ap”. I have averaged just under 5 minutes per day answering pings over the last couple of years.

Bethany spends quite a bit less than that because she puts less thought into her tag ontology. (For example, she can’t answer this question very well because she hasn’t been consistent in using a tag for “answering previous pings” :))
