Stuck with Jira — and Stuckons

Jason Sachs • January 1, 2026

I’m venting today, because I am very frustrated with Jira’s quirks and limitations as issue-tracking software. I spend too much time each day sifting through notification messages, and managing issues in Jira.

I’m also trying to get my head around some tough aspects of project management, and I’ll share some of that thinking with my usual signal-processing-plus-zany-perspective approach.

But first: Jira.

(Disclaimer: the opinions expressed herein are my own, and do not represent those of my employer or of EmbeddedRelated.com.)

Issue-tracking Software Requirements

Here is the basic set of requirements for issue-tracking software. Suppose I’m working on Project Zonk, and I have issues that need addressing.

  • Each issue is an entry in a database, with the following properties:
    • unique key (ZONK-10101) that can be used to reference the issue
    • summary (“Zonk won’t load properly after pressing Ctrl-F”)
    • description (“I pressed Ctrl-F, and the next action pauses indefinitely, with the page unable to load....”)
    • type — Bug, Task, etc. (“Epic” and “Story” are part of JIRA’s support for “Agile” methodologies)
    • additional user-defined properties, as required by the project
    • creation timestamp — timestamp of when the issue was created
    • reporter — username of the person who created the issue
    • assignee — username of the person who is responsible for addressing the issue, or a special value meaning “unassigned”
    • last updated timestamp
    • status — one of several possible status types, in the project’s workflow (see below)
    • resolution — one of several possible resolutions, in the project’s workflow, or “unresolved”
    • resolved timestamp — timestamp of when the issue was resolved (if it was resolved)
    • relational links to one or more other issues in the same or other projects, each forming a semantic triple subject-predicate-object: ZONK-10101 is related to ZONK-7730; ZONK-10101 is related to BEEBEE-909; BEEBEE-944 is a subtask of BEEBEE-909; BEEBEE-909 depends on ZONK-3877; ZONK-4055 is a duplicate of ZONK-2198; etc.
  • Issues may have files attached to them
  • Issues may have comments, each with a timestamp
  • Text in the issue description and comments must allow rich text format:
    • the widely-used Markdown is used in most systems
    • rich text editor for WYSIWYG editing must be available, for people who don’t want to mess around with markup language formatting
    • issue keys in the text (ZONK-7730 or BEEBEE-909) should automatically display as hyperlinks to the corresponding issue
  • Projects (ZONK in this case) have particular groups of settings, behaviors, and data associated with them:

    • Projects contain issues; each issue is associated with one project
    • Workflow — each issue status represents a state, typically something like “Open”, “In progress”, “Resolved”, “Closed”, and there are specific transitions allowed between states. The “Resolved” state is special, representing that an issue has been resolved, and is associated with a reason (“Complete”, “Rejected”, “Unable to Reproduce”, “User Error”, “Duplicate”, etc.)
    • Components — the project may contain various named components (“UI”, “Database”, “Authentication”, “Parser logic”, etc.); each issue is associated with zero or more components
    • Versions — if the issue tracker is configured for software development, there will typically be specific numbered versions that represent product releases (Zonk 1.0, Zonk 1.1, Zonk 2.0, Zonk 2.0.1, Zonk 2.1, etc.)
    • Applicability — issues are associated with zero or more versions for which the issue applies
    • Fix version — the issue is addressed in a particular version of the project. (For example, ZONK-10101 may occur in Zonk 1.0 and Zonk 1.1, but has a fix version of Zonk 2.0)
  • Users — the database server requires authenticated users to access it.

  • Permissions — it may be appropriate to restrict certain permissions to individual users; for example, one user may be able to edit an issue while others cannot; another project may be “secret”, with issues accessible only by a limited set of users.
  • Auditability — any update to an issue must be recorded permanently as a transaction, so that in the event of an audit, the auditor can see all changes to the issue over time. Editing the issue summary / description / comments / metadata counts as an update, and the older information before the edit must be retained for auditability reasons.
  • Notification — Users may be “watching” an issue for certain updates, and those updates will trigger a notification to the user, typically by e-mail.
  • Searching and querying the database
    • Users must be able to search interactively for issues that match certain criteria (example: all the issues in the ZONK project created from Jan 1 2015 to July 1 2019, which contain the text “pancake” and have not been resolved)
    • Programmatic queries of the database must be possible through some sort of API
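
The last requirement deserves a concrete sketch. Jira does expose a REST search endpoint that accepts JQL; here's a minimal stdlib-only helper that builds such a request URL. The host name is a placeholder, and authentication is omitted; consult Atlassian's REST API documentation for the full request details.

```python
from urllib.parse import urlencode

def build_search_url(base_url, jql, start_at=0, max_results=50):
    """Build a URL for Jira's JQL search endpoint (REST API v2)."""
    query = urlencode({"jql": jql, "startAt": start_at, "maxResults": max_results})
    return f"{base_url}/rest/api/2/search?{query}"

url = build_search_url(
    "https://jira.example.com",   # placeholder host
    'project = ZONK AND text ~ "pancake" AND resolution = Unresolved',
)
```

Fetching the URL (with suitable credentials) returns a JSON payload with a `total` count and a page of matching issues, so the same helper supports both interactive exploration and scripted reporting.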

These are the basics of any good issue tracking system, and Jira has all of these — except for Markdown; Jira uses its own bizarro markup language instead.

Common optional features include things like:

  • Batch edits — if 373 issues matching a certain criteria need to be relocated from one component to another or one project to another, or need to be resolved with “Rejected”, it should be possible to do this as one batch operation rather than making the same change by hand, over and over again, one issue at a time
  • Time-tracking and estimation — must be able to enter an estimated time to complete the issue, and log work (example: 1 hour 35 minutes “designed new user interface”, 25 minutes “implemented UI changes”, etc.)
  • Hierarchical structure — it should be possible to organize issues in a tree hierarchy, so that a large group project (“Release ZONK 2.1”) can be broken down visually into moderate pieces (“Support MQTT”, “Export as HDF5”, “Add Chinese translation”, etc.), each of which are broken down further into individual tasks or subtasks, etc.
  • Dashboards — display common analytics for task completion, etc.

Jira supports all of these; the hierarchical structure feature is available through a third-party plugin developed by ALM Works, now part of Tempo. I’m not particularly impressed by Jira’s dashboards, but that’s just me.

At any rate — yes, Jira meets all the basic requirements. There are irritations, however.

Irritations

At a high level, these are my areas of complaint:

  • Lack of commitment to fixing bugs
  • Markup is nonstandard and full of problems
  • Notification is unwieldy and has low signal-to-noise ratio
  • Categorization features are unreasonably specialized around Jira’s “Agile” feature
  • Lack of support for “meta-tasks” which represent the difference between tracked issues for the project, and the responsibilities of its users to manage those issues
  • Analytic visualization features are limited

I’ll address each of them in some detail, but first, a historical note.

In October 2009, Atlassian introduced their Starter Pack licenses for Jira, Confluence, and a few other tools. At the time, the company I worked for had some kind of primitive issue database… maybe some kind of custom Microsoft Access database; I don’t remember. Or it might have been a set of paper files. All I can remember is that for us engineers it was this foreign inaccessible thing that the Quality Assurance staff managed.

I was part of a small group working on a power electronics project, and we were trying to get going. One of my colleagues worked at a company that used FogBugz, and I’d heard of Bugzilla, so I thought maybe I’d try to experiment with one of these as an issue tracking server. I had been messing around with some simple server management on an old Windows 2000 PC, and I got Apache and MySQL and PHP and MediaWiki running successfully. FogBugz did have a trial program — but I think it was through an instance on their own server, and it wasn’t something you could just download and run easily to see if it worked for you. You had to buy their software and install it on a Microsoft Internet Information Services (IIS) webserver, and I wasn’t in a position to do either.

Jira was another choice, and I was investigating it in mid-2009; unlike FogBugz, I could download and install it myself on a free Apache Tomcat server. Then the Atlassian Starter Packs came out: \$10 for each software program for up to 10 users. I could afford to buy these starter pack licenses out of my own pocket, and set them up myself — so I did! It was great! All of a sudden, our own issue tracking system!

Atlassian even ate their own dog food: they used Jira to track bugs in their own software, and allowed the general public to enter new issues directly. I filed a number of bugs and feature requests in 2009 and 2010. But on the whole, it was exciting to get something up and running quickly for our group that we could afford.

Other customers must have gotten excited about Jira as well; Atlassian’s revenue grew from about \$35 million in FY 2008 to \$59 million in FY 2010, \$110 million in FY 2012, and \$215 million in FY 2014, with the company going public in 2015.

I changed jobs in 2012 and my new employer was also using Jira. Perfect! I knew how to use the software already. I was rather busy at the time, but occasionally filed more issues with Atlassian.

Lack of commitment to fixing bugs

A few years went by. At some point, it dawned on me that most if not all of the issues I filed with Atlassian were not addressed.

Here are some of the issues I filed or commented on back then, from 2009 – 2012; since then, my Atlassian account has unfortunately been marked as “Deleted Account (Inactive)”:

After my job change, I created a new account, and have filed or commented on more:

As I read these, I realized that I still run into the same basic edge cases over and over again. But sure, Atlassian has thousands of issues filed; mine are probably obscure. Here are some more popular long-standing issues:

Too bad, no luck. Maybe that’s not fair… what about all the issues that Atlassian has fixed or implemented? I’m not sure how to judge the situation. I do think little issues that are barriers should be given high importance for fixing, and it just seems like that’s not happening the way it should. You can decide for yourself by looking at the most popular issues in the JRASERVER / JRACLOUD projects that were created before January 1, 2016; this gives a sense of how the company has done on 10+ year old issues:

project in (JRACLOUD, JRASERVER) and votes >= 250 and created < "2016-01-01" order by votes desc

It’s especially frustrating to me to see that while Atlassian’s revenues are increasing so much — Atlassian’s revenue was \$4.4B in FY 2024, 20x that of FY 2014 — as a basic user, I really don’t see that much difference between Jira now and Jira ten years ago. The bugs and barriers are still there.

Markup is nonstandard and full of problems

Markdown is the de facto standard for online markup (see Reddit, GitHub, Discourse, and Stack Overflow, for example). It’s fairly easy to learn and reasonably robust against edge cases.

Jira unfortunately predates Markdown by a few years, so the Atlassian folks created their own markup syntax. And it has some weird edge cases involving escaping/backslashes (60818, 23558, 27359), whitespace, bulleted lists, strikethrough, super/subscripts, and so on.

But it seems like there should be a way to give us a choice: keep the “classic” Atlassian markup as the default, but allow Markdown instead, enabled by a tag at the beginning like {markdown} or something.
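
For anyone who hasn't had to context-switch between the two, here's a rough side-by-side comparison (from memory, so double-check against Atlassian's text-formatting notation reference):

```
Markdown                            Jira wiki markup
**bold**                            *bold*
## Heading                          h2. Heading
[link text](http://example.com)     [link text|http://example.com]
`inline code`                       {{inline code}}
~~strikethrough~~                   -strikethrough-
```

Note how several Markdown constructs (asterisks, hyphens, brackets) mean something different in Jira's markup, which is exactly what makes the mental context switch so error-prone.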

Instead, we’re stuck with what we have had in Jira for the last 15+ years. I use Markdown in many contexts in my job, and when I switch to Jira, I have to shift mentally back to Atlassian’s bizarro-markup. I don’t have detailed notes of each issue I run into with Jira — but I will say that I rarely run into formatting problems with Markdown, whereas I often run into them in Jira.

Notification is unwieldy and has low signal-to-noise ratio

If you “watch” a Jira issue or are the reporter/assignee, you get email notifications for every change of the issue. Every change.

Someone edits a comment? You get a Jira notification message.

Someone updates a label or component or fix version? You get a Jira notification message.

Granted, sometime in recent years, Atlassian changed the Jira server to batch up groups of several changes to the same issue made in close succession, so you will get fewer email messages — at the minor cost of a little bit of latency.

But there’s still a lot of email messages. I typically get 50 or more Jira notification email messages each day, and it increases to 100-200 a day if things get really hectic on a project. So I have to go through them one by one, checking whether I can ignore the change notification or not. It doesn’t take too long — maybe 15-30 seconds in most cases — but that adds up and wastes time I don’t have to spare.

Yes, this is a hard problem to solve. But there’s got to be a way to do it. Back in the day, newsgroup (Usenet) readers and some forum websites used to have per-user state that kept track of posts that were read or not, in a local file for newsgroup readers and something on the server for forum websites. I am sure I have used at least one forum website (based on vBulletin?) that only sent one notification email per thread, until you logged in. So if you went on vacation for two weeks, and there were 1000 updates in 11 forum threads, you would get 11 email messages, one for each thread with a change, not 1000 messages. What’s important to me is that I know quickly that new content exists, like a blinking light on an answering machine. I don’t need a notification email message for each and every change — but when I do login, I need to know which new content I haven’t read yet.
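
As a sketch of what I mean (this is my own toy logic, not anything Jira offers), per-user, per-issue "unread" state collapses a flood of updates into one message per issue:

```python
class ThreadNotifier:
    """Answering-machine notifications: one email per issue until the user looks."""

    def __init__(self):
        self.unread = set()   # issues with a pending, not-yet-viewed notification

    def on_update(self, issue_key):
        """Called for every change; returns True only if an email should be sent."""
        if issue_key in self.unread:
            return False      # the user already knows this issue has news
        self.unread.add(issue_key)
        return True

    def on_view(self, issue_key):
        """The user opened the issue; the next change notifies again."""
        self.unread.discard(issue_key)

# Two weeks of vacation: 1000 updates spread over 11 issues...
notifier = ThreadNotifier()
emails = sum(notifier.on_update(f"ZONK-{i % 11}") for i in range(1000))
# ...yields 11 emails, not 1000.
```

The state is tiny (one set per user), and viewing an issue re-arms its notification, which gives exactly the blinking-answering-machine-light behavior: you know new content exists, without a message per change.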

Please solve this!

Categorization features are unreasonably specialized around Jira’s “Agile” feature

Because I spend a lot of my time organizing Jira issues, there’s one feature that I need to make this easier: a way to quickly associate Jira issues with what I call a “bucket”. Imagine a list of issues in a table that you can select, drag, and drop into some labeled area representing a container, and then those issues will be associated with that container.

Potential containers include fields such as:

  • A fix version
  • An epic
  • A component
  • A label
  • A sprint

When I have dozens of issues to go through, I need to be able to review them one by one and do this quickly. The “default” way of doing this in Jira is to open each issue in sequence, find the field in question, and edit it. This can take 30-60 seconds just for one issue, and is repetitive.

Jira does have a “Backlog” view as part of its Agile (Scrum) board where you can drag and drop issues onto either Epic or Fix Version panels, and it assigns those issues to the corresponding Epic or Fix Version. Or you can drag and drop issues from one sprint to another. But it’s very specifically designed around the Scrum methodology, and you have to find the issues first, which are located in the Backlog or in one of various sprints, in the order someone has placed them.

So using the Backlog view for issue organization becomes a bit cumbersome, and it only works for assigning sprints / Epic / Fix Version, not labels or components.

What I’d really like is to have a screen where I can run a JQL query to see a list of issues, review them one by one, with a side panel that shows the issue description, and whenever I feel like it, I can just drag that issue to one side of the screen where I can create a drop target that represents a “bucket”, and placing issues in the bucket assigns a specified component or label (or fix version or Epic or whatever) to those issues.

The labels are the most flexible, because one issue can have many labels, and they’re arbitrary and don’t need significant coordination. You can create labels on the spur of the moment by typing them; you don’t have to go through any separate process to create a label before you can use it. For example, suppose it’s December 30, 2025 and I need to have a meeting later in the day to discuss some issues. I can decide to use the label for-discussion-20251230 and assign it to the issues I’m interested in. The rest of my team can do the same, and then we can run a query on this label during our meeting to review the issues one by one.

In my group we usually assign A/B/C priorities to near-term issues — A is “must-have”, B is “strongly preferred”, and C is “nice to have”. We use labels for this — for example, the label zonk-14-a for the A priority issues of Zonk version 14, and zonk-14-b for the B priority issues. (These A/B/C priorities are subtly different from the issue priority field, which is set by the reporter or stakeholder, for reasons that aren’t relevant to this article.) If I go through the normal issue edit screen, it might take me 30 seconds to update the label field. I’m looking for a way to take 5 seconds or less. This matters when you have dozens of issues to categorize quickly, especially in a meeting where time is short and should be spent on discussion to reach consensus, rather than on holding up the meeting while one of us edits the attributes of an issue in a database.
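
Once the labels are in place, pulling up the relevant issues is a one-line JQL query, in the same style as the query earlier in this article:

```
labels = for-discussion-20251230 ORDER BY priority DESC
labels = zonk-14-a AND resolution = Unresolved
```

The first query drives the meeting review; the second shows which must-have items are still open for the release.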

Lack of support for “meta-tasks”

I run into the lack of support for “meta-tasks” frequently. There is a subtle difference between the tasks that the project needs to track, and “meta-tasks” which are what the project participants need to track when working on tasks.

Suppose we have an issue, ZONK-10101: Zonk won’t load properly after pressing Ctrl-F. My coworker Bobby is investigating it, and my coworker Nick and I are also involved in this issue from time to time.

What are some typical day-to-day tasks related to ZONK-10101?

  • Bobby needs to update the description of ZONK-10101 to clarify which version of Zonklib was included in the program
  • Nick needs to confirm the bug on the Macintosh
  • Bobby makes a comment in ZONK-10101 which I skim at a glance, but I want to make sure I read it more carefully later.
  • After I read his comment, I want Bobby to clarify his comment
  • I want Bobby to check to see if ZONK-10101 is related to any other known keyboard bugs

These are all project participant tasks, and we need a way to manage them to make sure they get done. Of course, we could open a new Jira task for each of these… but that gets unwieldy — imagine opening ZONK-12904 just so I remember to read one of Bobby’s comments in ZONK-10101 — and these are all temporary tasks that would just add noise to the issue tracking system, drowning out the real tasks like ZONK-10101.

What I really need is a more lightweight mechanism to open a meta-task in a particular location within a task (without having to enter a whole bunch of information explaining the context, since the context is implied), assign it to someone, and have the meta-task exist until someone closes it out.

For example, I could add a comment @TODO:Bobby please check if there are any other known keyboard bugs. The issue tracker system would see this, recognize it as a meta-task, and could auto-assign it a number (let’s say it’s 34) for the meta-tasks, and give Bobby and me a way to see what meta-tasks we still have left to do. Bobby later replies @DONE:34 I looked and I found two, ZONK-3370 and ZONK-4000; they don’t seem to be related, though.

Or I could add a comment @TODO:me read Bobby’s 2025-12-30 comment more carefully, and because it’s a comment to myself, it’s temporary and not part of the public comment stream, and will disappear when I remove it.

Anyway, Jira doesn’t have this. You can complicate your workflow, or you can open more Jira issues, but you don’t have any way in Jira to keep track of meta-task work. As a result, we end up keeping our own to-do lists of meta-tasks outside of Jira, either in our heads or on pieces of paper or text files. Yuck.
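
To show how little machinery such a feature would need, here's a toy parser for the hypothetical @TODO / @DONE markers sketched above. The marker syntax and numbering scheme are my invention; nothing like this exists in Jira.

```python
import re

# Hypothetical meta-task markers: "@TODO:<assignee> <text>" opens a meta-task,
# "@DONE:<id>" closes it.
TODO_RE = re.compile(r"@TODO:(\w+)\s+(.*)")
DONE_RE = re.compile(r"@DONE:(\d+)")

def scan_comments(comments):
    """Scan issue comments in order; return open meta-tasks as {id: (assignee, text)}."""
    open_tasks, next_id = {}, 1
    for text in comments:
        m = TODO_RE.search(text)
        if m:
            open_tasks[next_id] = (m.group(1), m.group(2))
            next_id += 1
        m = DONE_RE.search(text)
        if m:
            open_tasks.pop(int(m.group(1)), None)   # close it if still open
    return open_tasks
```

A tracker doing this server-side could then show each user their outstanding meta-tasks across all issues, with the surrounding comment supplying the context for free.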

Analytic visualization features are limited

Lastly, I’ve found it very difficult to get any useful visualizations from Jira. I end up having to create my own graphs and tables by accessing data from the Jira REST API, processing it in Python, and graphing in Matplotlib, or making tables with Pandas. The built-in charts in Jira are these predefined things, like sprint velocity charts, which either aren’t applicable to my team at all, or they’re almost applicable but Jira isn’t flexible enough to make them useful for us.

I suppose the practical workaround is what I’m doing now — processing the data via the Jira REST API, and doing my own thing — but it seems like there should be features built into Jira that are more customizable and extensible, and cover 95% of use cases. Most of what I do in my own Python program is just aggregation and categorization of the issues in a Jira query, to show how many issues are still open in various parts of the Jira structure of our current release.
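
Most of my Python post-processing really is just this kind of tally. Here's a sketch, assuming issue dicts in the shape returned by Jira's REST search endpoint, where fields.resolution is null for unresolved issues and fields.components is a list of named objects:

```python
from collections import Counter

def open_issue_counts(issues, field="components"):
    """Tally unresolved issues by component (or any list-of-named-objects field)."""
    counts = Counter()
    for issue in issues:
        f = issue["fields"]
        if f.get("resolution") is not None:
            continue                               # already resolved; skip
        for group in f.get(field) or [{"name": "(none)"}]:
            counts[group["name"]] += 1             # one tally per component
    return counts

sample = [
    {"fields": {"resolution": None, "components": [{"name": "UI"}]}},
    {"fields": {"resolution": None,
                "components": [{"name": "UI"}, {"name": "Parser logic"}]}},
    {"fields": {"resolution": {"name": "Fixed"},
                "components": [{"name": "Database"}]}},
]
```

Feeding the resulting Counter into Matplotlib or Pandas gives the "open issues per area" view that Jira's built-in charts don't quite deliver.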

Admittedly, some of the frustration I have isn’t Jira’s fault; we have some difficult project planning challenges, and a lot of the time I feel like I’m stumbling around in the dark trying to look at the right kind of quantitative data to get some useful insight.

So here I have to take a tangent.

Keptons: A Particle Model for Insights into Engineering Project Planning

I’m going to describe a simple model of project planning that applies to certain kinds of engineering projects, and to the way we manage tasks in issue trackers like Jira. This isn’t meant to be close to reality, but rather to point out some useful real-world phenomena that can be analyzed and explored with simple simulations. The model I describe is applicable to the type of projects I work on, which involve a lot of uncertainty in planning, and have very little repetition of previous efforts.

This model is based on contrived particles which I’m going to call keptons. Each kepton corresponds to an amount of work that is appropriate to enter as an issue in a database. There are three kinds of keptons:

  • planons which are higher-level groups of work that describe planning effort
  • workons which are low-level tasks that describe intended effort
  • stuckons which are low-level tasks that describe unexpected effort

It’s the stuckons that make this model interesting, as we’ll see.

Each kepton has two key attributes: size and progress, both measured in time. In this article both are measured in hours, and I’ll note them visually as follows, with two-part or one-part circles:

If a kepton is shown as a circle with two parts, the upper number \( p \) represents progress, and the lower number \( n \) represents size. The difference between size and progress is the remaining effort \( m = n - p \). If a kepton is shown with only one number, it represents both size and progress as a completed effort, and the remaining effort is zero. Planons are shown here in very light gray, workons are light green, and stuckons are dark red. In the example above, the upper row shows a 56-hour planon with 1 hour of progress, and 55 hours remaining; a 20-hour workon with 1 hour of progress (19 hours remaining); and a 30-hour stuckon with 1 hour of progress (29 hours remaining). The lower row shows completed keptons: a 56-hour planon, a 20-hour workon, and a 30-hour stuckon.

Keptons change when people try to make progress toward completion, with at most one person handling a kepton in any given unit of time (= 1 hour). If someone works on an incomplete kepton (\( p < n \)) for one unit of time, here’s what happens:

  • the progress \( p \) increases by 1
  • the size \( n \) may change
  • another kepton may be created as a result

Keptons are stochastic, meaning their behavior is described by random variables with certain simple characteristics.

Planons

When someone works on a planon for one unit of time, there is a 1/3 chance that the remaining effort is unchanged (shown as (A) in the diagram below), and a 2/3 chance that the planon will split (shown as (B) in the diagram below) and convert some amount of remaining effort \( q \le n-p \) to a new planon or workon of size \( q \).

In either case, the total size increases by 1, and the total remaining effort across all keptons is unchanged, still equal to \( n-p \).

The real-world analogy here is that planning work is distinct from task effort: planning progress doesn’t actually decrease the remaining task effort — but it does allow that task effort to be divided into smaller pieces for making progress, becoming less abstract and more concrete in the process.

See the Addenda for the specific behavior in this model, but \( q \) averages to 1/4 of the remaining effort. The new kepton is more likely to be a planon for larger values of \( q \), and more likely to be a workon for smaller values of \( q \), with even odds at \( q=20 \). It’s rare to have workons created larger than \( n=40 \).

Workons

When someone works on a workon for one unit of time, there is an 80% chance that nothing else happens (shown as (A) in the diagram below), a 10% chance that the workon will split (shown as (B) in the diagram below) and convert some amount of remaining effort \( q \le n-p \) to a new planon or workon of size \( q \), and a 10% chance that a new stuckon will be created with size 1 (shown as (C) in the diagram below).

There is a 90% chance in each hour that the effort was successful, and the size of the workon is unchanged (adjustment \( a=0 \)), so that the increase in progress reduces the remaining effort by one hour. There is a 10% chance that the effort was unsuccessful, and the size of the workon is increased by one (\( a=1 \)).

The real-world analogy here is that most of the time, workons represent remaining task effort, with a little bit of uncertainty. Someone works on a workon, and work usually gets done, causing a steady reduction in remaining effort.

See the Addenda for the specific behavior in this model, but \( q \) averages to 1/4 of the remaining effort, with the same behavior as in planons. Because the normal size of workons is relatively small, the new kepton created by a split is almost always another workon; it is possible but very unlikely that a planon is created by a split.

Stuckons

When someone works on a stuckon for one unit of time, progress increases, but the size of the stuckon may increase by an adjustment \( a \):

  • 25% chance that \( a=0 \), so that the size \( n \) remains the same, and the remaining effort \( n-p \) decreases by one. (Progress has been made!)
  • 50% chance that \( a=1 \), so that the size \( n \) increases by one, and the remaining effort \( n-p \) remains the same.
  • 25% chance that \( a=2 \), so that the size \( n \) increases by two, and the remaining effort \( n-p \) increases by one. (An unfortunate case of one step forward, two steps back.)

In addition, in some cases, some amount \( q \) of the remaining effort is split off, and a workon or planon is created. See the Addenda for specifics, but the probability of this happening in any given hour is very low for small stuckons and increases for larger stuckons, reaching a maximum of 20%.

Stuckons always start at a size of \( n=1 \), but they can get much larger. They represent a progress barrier; they can persist for a long time, and are unpredictable.

Interaction

That’s it! Those are our three keptons. They can be handled with different strategies depending on available workers, but it’s a very simple model.

The progress on any given kepton is known (we know how long our workers spent trying to make progress), but assume that the remaining effort of a given kepton is an unknown quantity and can’t be measured, leading to uncertainty. In addition, beyond the early-created workons, distinguishing workons and stuckons may not be possible! All our workers know is that while they are working to complete a task, they have identified a new one that has to be completed. (Assume that planons are clearly identifiable and used for planning purposes to define smaller tasks.)
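
To make the rules concrete, here's a minimal Python sketch of the kepton update step. The split-size distribution and the planon-vs-workon odds are my own rough guesses (the exact behavior lives in the Addenda), and stuckon splits are omitted for brevity; the other probabilities follow the rules above.

```python
import random

class Kepton:
    """One unit of trackable work: kind is 'planon', 'workon', or 'stuckon'."""
    def __init__(self, kind, size, progress=0):
        self.kind, self.n, self.p = kind, size, progress

    @property
    def remaining(self):
        return self.n - self.p

def split_off(k, rng, keptons):
    """Move a chunk q of k's remaining effort into a new kepton of size q.

    Guess: q averages roughly 1/4 of the remaining effort; new keptons are
    more likely planons for large q, workons for small q (even odds near q=20).
    """
    q = min(k.remaining, max(1, round(rng.random() * k.remaining / 2)))
    kind = "planon" if rng.random() < q / (q + 20) else "workon"
    k.n -= q
    keptons.append(Kepton(kind, q))

def work_one_hour(k, rng, keptons):
    """Apply one hour of effort to an incomplete kepton k."""
    assert k.remaining > 0
    k.p += 1
    if k.kind == "planon":
        k.n += 1                          # planning never reduces remaining effort...
        if rng.random() < 2 / 3:
            split_off(k, rng, keptons)    # ...but it carves off concrete pieces
    elif k.kind == "workon":
        if rng.random() < 0.1:
            k.n += 1                      # unsuccessful hour: no net progress
        r = rng.random()
        if r < 0.1 and k.remaining > 0:
            split_off(k, rng, keptons)
        elif r < 0.2:
            keptons.append(Kepton("stuckon", 1))   # an unpleasant surprise
    else:                                 # stuckon
        k.n += rng.choice([0, 1, 1, 2])   # a = 0/1/2 with probability 25%/50%/25%
```

Repeatedly picking an incomplete kepton and calling work_one_hour reproduces the qualitative behavior described below: an hour of planon work conserves total remaining effort while growing total size, and stuckons perform a random walk that can balloon.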

An Example

Suppose we start a project with one planon of size \( n=100 \), and we have a team of five people, one project manager and four engineers. For each hour of work, the project manager locates and works on the planon with the largest remaining amount of work. The engineers each work on workons or stuckons with the largest remaining amount of work, but once they have located a suitable kepton, they continue working on it until it is complete.

Since the project manager only does planning, and the four engineers do all the task progress, we might expect that the \( n=100 \) hours of work will take 25 hours to complete.

Here’s a sample run. We start with the single planon at time \( t=0 \).

To track individual keptons, I’ve shown their ID in the upper left of the kepton, so this planon is kepton #0.

Five hours later, part of the remaining effort has been successfully split off into two additional planons, two workons, and a stuckon:

Engineers started attacking the workon (kepton #2) as soon as it was available, and managed to complete the stuckon quickly.

Another five hours later, at \( t=10 \), the splitting continues. No new planons have been created, but there are a number of new workons and stuckons.

Two tasks (#3 and #5) have been completed.

Another five hours later, at \( t=15 \), more workons and stuckons have been created. Stuckons are multiplying like a disease.

A few workons are complete, but most are not, and even the planning is not completed.

It turns out in this case that it takes 126 hours to complete the project, more than five times longer than expected. Some of the stuckons keep ballooning in size and take 20-50 hours to finish.

If we look at a few statistics over the time of the project, we can note a few things.

The top subgraph counts the number of workons and stuckons and how many of them are still incomplete. The total number of keptons starts at one and increases in this simulation to 74, with a total of three planons. The number of incomplete keptons increases, and then gradually and unpredictably decreases, going up as new workons and stuckons are discovered, and down as they are completed.

If we were keeping track of the keptons in an issue tracker like Jira, with one issue per kepton, the number of open issues would increase during the project like the blue curve, and the number of unresolved issues would bounce up and down like the orange curve, eventually reaching zero.

The other three subgraphs measure quantities that are not observable by the project team — they can only estimate the remaining work and cannot know the actual time remaining — but since this is a simulation, we can graph them.

In the beginning, most of the remaining work is contained within planons, but as the planning effort is made, this is split off and converted to workons, and eventually to stuckons and late-stage workons. The stuckons persist a long time.

The bottom two graphs show that the total size of the keptons at any given time can be partitioned in two different ways.

First, we can categorize total size by kepton type (planon/stuckon/workon), in which planons get partially converted to workons, and then stuckons enter the scene.

The second and probably more insightful partitioning is a four-way split:

  • the base time (\( n=100 \) when we started), which remains unchanged
  • planon progress — remember, working on planons can never decrease the remaining effort, so all planon progress is cumulative overhead in addition to task progress
  • time added to workons due to unsuccessful progress — the 10% chance in any given hour that remaining effort is not reduced
  • stuckon “cost”, representing unanticipated effort — this includes both the size of the stuckons themselves, and any new kepton split off from the stuckons

In this case, stuckon cost makes up more than half of the total effort!

Kepton System Observations

The question I am asked at work, repeatedly, is how long it will take to complete the next phase of the project. And we can take the same approach with the kepton simulation: how long will it take?

If there weren’t stuckons, this would be very easy to answer; projects of a given size have a certain overhead, about 10% to cover unsuccessful progress on workons, and some empirical fraction to cover planning that depends on project size, here about 50% of the base time. That’s it! There’s a small amount of uncertainty, but it averages out over the long term.
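As a sanity check on the stuckon-free case, here is a quick Monte Carlo sketch (the function name is mine, not from any actual simulation code). With a 10% chance each hour of unsuccessful progress, the expected overhead is 1/0.9 - 1, or roughly 11%, consistent with the "about 10%" figure:

```python
import random

def workon_hours(n, p_fail=0.10, rng=None):
    """Hours needed to finish a single workon of size n, when each
    hour of work succeeds (reduces remaining effort by 1) with
    probability 1 - p_fail and accomplishes nothing otherwise."""
    rng = rng or random.Random()
    hours = 0
    remaining = n
    while remaining > 0:
        hours += 1
        if rng.random() >= p_fail:
            remaining -= 1
    return hours

# Monte Carlo estimate of expected duration: about n / (1 - p_fail),
# so a 100-hour base takes about 111 hours on average.
rng = random.Random(1)
trials = [workon_hours(100, rng=rng) for _ in range(2000)]
print(sum(trials) / len(trials))
```

The per-trial variation is small (a standard deviation of a few hours on a 100-hour base), which is why a stuckon-free project averages out and is easy to forecast.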

Stuckons throw a monkey wrench into the situation, however, and the uncertainty is so high at any given point in time that looking at the number of remaining issues doesn’t lend a lot of insight. You can try to extrapolate progress in resolving issues, but the rate at which new keptons are created is highly variable. Imagine you are a manager and you are looking at the top subgraph to gauge the team’s progress, and twice a week (every 20 work hours) you check in to see how they’re doing. Here’s that graph again:

  • The first 20 hours (\( t \le 20 \)) are a discovery period; the team is dividing up work, with the number of remaining issues steadily increasing. No big deal; this is just how things work at the beginning.

  • The next 20 hours (\( t=20 \) to \( t=40 \)) the number of remaining issues comes down a very small amount, with lots more new issues created. Okay, well maybe things are still early. But this project was supposed to take about 25 hours, right?

  • The next 20 hours (\( t=40 \) to \( t=60 \)) the number of remaining issues stays essentially constant, while new issues are still being created. How long is this going to take? No one knows.

  • The next 20 hours (\( t=60 \) to \( t=80 \)) the number of remaining issues drops steadily from 20 to 10. Great! The team has evidently gotten over a barrier, and is getting some momentum.

  • The next 20 hours (\( t=80 \) to \( t=100 \)) the number of remaining issues goes down a little bit, but then climbs up again. Ugh. The team is stuck again.

  • During the rest of the project (\( t \ge 100 \)) the number of remaining issues drops steadily toward zero. Finally, the team has entered the home stretch to push toward completion.

We can make those kinds of evaluations — “the team is stuck” or “the team has some momentum” — by looking at project metrics like the number of open issues, but there’s a high degree of randomness here, and the ideas of “stuck” or “momentum” really aren’t valid. Stuckons have an aspect of random walks associated with them, and there’s nothing to do but keep trying to make progress until they are complete.

Back to the Real World

In the real world, we don’t have this easy-to-model behavior, so creating a realistic simulation of a true engineering project, with intentional and unexpected efforts, is probably not possible. We can estimate the size of a given task — with smaller tasks generally easier to estimate than larger ones — but in almost every project I’ve worked on, unexpected issues pop up. These are the real-world equivalents of stuckons, and they’re the ones that blow your schedule out of the water. All of a sudden, there’s some design difficulty you need to overcome, and you’re not sure how to do it, so you try a few things, and hopefully get past it. Or the thing you try doesn’t work, and you try something else, and then another thing, and who knows how long it will take until you can conquer the problem, but eventually you do. Or your attempts to solve the problem drag on long enough that your project manager changes the rules and respecifies the project enough to make an end run around the problem instead, finding a lesser evil to throw your way in its place. (“Your motor control software still can’t reach 5000 RPM? Okay, well what if we re-wound the motor? Or we use an 8S battery instead of a 6S? Or we change the gear ratio. Or ....”) Or the project is canceled because of the schedule overrun.

I guess what I’m trying to say is that stuckons bear some similarity to real-world problems, and their cumulative behavior in causing major project delays is realistic. The kepton model won’t give you quantitative data to model your real-world project, but it might open your eyes to how hard it is to manage unexpected effort.

The real challenge, as I see it, is not to keep track of “how many issues are remaining” or “what is our team’s sprint velocity and how many story points do we have left”, but rather, keeping track of risks and unknowns. If we could somehow put bounds on the unexpected effort left on the project, then everything else would be like workons in the kepton model without stuckons, and we’d be able to have some confidence about how many hours remain, whether it’s through project estimation or story points or whatever. But as long as you have significant amounts of unexpected effort, it’s just too hard to model.

There are strategies to pursue here: for example, we could try to keep track of areas in the project with higher chances of unexpected effort — assuming that our team’s estimate of these areas actually has some correlation with reality — and focus efforts to tackle those areas with higher priority. I think this is possible to some extent.

There are activities which are low-risk: for example, creating software to write specified content to a file is pretty straightforward, as is going to the post office to mail a package. There’s always a small chance you could run into something weird — file permissions aren’t set up properly, and an easy programming exercise turns into a system configuration problem that has to be negotiated with the DevOps team, or you get into a car accident on the way to the post office. But those risks are remote, and there are so many of them that trying to manage them all individually is intractable.

Whereas other areas are higher risk, and these are the ones you should keep an eye on. If your software developer hasn’t worked with Software Library XYZ before and is unfamiliar with its limitations, or your circuit designer has to make a sharp-cutoff band-pass filter in analog electronics and has to go dust off the books to remember what to do, these are areas of risk. Tackle them early. Or the product has to undergo EMI/EMC testing, and the testing company is a two-hour drive away, and you have no idea how well things are going to go. So set up an in-house pre-compliance test bench — it won’t be perfect, but you’ll at least have some idea whether your system is okay, and whether certain changes will improve your emissions or susceptibility, before you take the time and money to go through an official test.

The big question for issue-tracking software, as it relates to this line of thinking, is: how are you going to use it to help manage project risk? And that’s a challenge. Atlassian has a webpage called What is project risk management? that mentions a lot of ways to handle risk, including Jira, but I really don’t see anything there to help with the day-to-day effort of identifying unexpected work — at least, not the way I have encountered it.

This topic has been on my mind for several months now, and if I learn anything new that might shed some light on successful project management in regard to unexpected effort, I’ll post another article.

Wrapup

I talked about the issue tracker Jira, and a bunch of aspects that are either frustrating to me, or aren’t helpful in alleviating the difficulties of project management.

I gave a simple model of three types of “keptons” — planons, workons, and stuckons — and showed how stuckons are an example of uncertainty getting in the way of forecasting schedule because of their high variability, analogous to project risks that are discovered throughout the project and result in unexpected work. This is important food for thought, and aside from general advice to keep an eye on project risks and unknowns, I don’t have specific strategies to share. But I do wish that my issue tracking software helped facilitate managing those risks and unknowns.

I do hope that I find improvements in tracking project issues over the next few years, whether it’s through improvements in Jira, or learning how to use it more efficiently, or finding an alternative.

In the meantime, have a happy new year, and I hope that 2026 brings good fortune to everyone!

Addenda

Details of the Kepton Model

There are three types of particles (keptons), known as planons, workons, and stuckons. Each kepton has a progress \( p \) and a size \( n \), and a remaining effort \( m=n-p \ge 0 \). The kepton is complete if \( m=0 \) and incomplete if \( m > 0 \).

Working on a kepton for one unit of time causes three things to happen:

  • progress \( p \) increases by one
  • size \( n \) is increased by an amount \( a-q \), where \( a \) represents the adjustment of size, and \( q \) represents the size of a newly-created kepton if one is split off from its parent (with \( q=0 \) if there is no split):
    • planons: \( a=1 \) (total remaining effort stays unchanged)
    • workons: \( a=0 \) with probability 90% (progress is made) and \( a=1 \) with probability 10% (unsuccessful progress)
    • stuckons: \( a=0 \) with probability 25% (progress is made), \( a=1 \) with probability 50% (no progress made), and \( a=2 \) with probability 25% (backwards progress: more required effort is discovered)
  • another kepton may be created as a result:
    • planons: a split occurs with probability 2/3; no split with probability 1/3
    • workons: a split occurs with probability 10%; no split with probability 80%; a new stuckon with \( n=1 \) and \( p=0 \) is created with 10% chance (this is not a split, so the workon size is not reduced)
    • stuckons: a split occurs with probability \( f_2(n) \) as described below
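These update rules can be sketched in Python. The Kepton class, field names, and the handling of splits via a separate q argument are my own framing, not taken from any actual simulation code:

```python
import random
from dataclasses import dataclass

@dataclass
class Kepton:
    kind: str   # "planon", "workon", or "stuckon"
    n: int      # size
    p: int = 0  # progress

    @property
    def remaining(self):
        return self.n - self.p

def size_adjustment(kind, rng):
    """Draw the size adjustment a for one hour of work,
    per the probabilities above."""
    u = rng.random()
    if kind == "planon":
        return 1                      # remaining effort never decreases
    if kind == "workon":
        return 0 if u < 0.90 else 1   # 90% progress, 10% unsuccessful
    if kind == "stuckon":
        if u < 0.25:
            return 0                  # 25% progress
        return 1 if u < 0.75 else 2   # 50% stall, 25% backwards progress
    raise ValueError(kind)

def work_one_hour(k, rng, q=0):
    """One unit of time on kepton k; q is the size split off (0 if none)."""
    k.p += 1
    k.n += size_adjustment(k.kind, rng) - q
```

Note that working on a planon (without a split) leaves its remaining effort exactly unchanged: \( p \) and \( n \) both increase by one.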

Kepton splits

Splits involve determining a value \( q \), which reduces the size of the parent kepton and creates a new workon or planon with \( p=0 \) and \( n=q \). The value of \( q \) follows a binomial distribution with \( m \) trials and a probability that depends on the type of parent kepton:

  • planons and workons have probability 1/4
  • stuckons have probability 1/3

Think of each of the remaining \( m \) units of work as independently rolling a 12-sided die and escaping from the parent kepton if the die roll is 1, 2, or 3 (probability 1/4) for planons and workons, or 1, 2, 3, or 4 (probability 1/3) for stuckons. The escaped units of work form a new kepton.

Example: you have a planon with 5 units of work remaining. Roll a 12-sided die five times. Set \( q = \) the number of times the die roll is 1, 2, or 3. (Or just use a random number generator to simulate \( q \) as the result of a binomial distribution with \( n=5 \) and \( p=1/4 \).)
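The die-rolling description translates directly to a sum of Bernoulli trials; split_size is a hypothetical helper name of my own:

```python
import random

def split_size(m, kind, rng):
    """Size q of the kepton split off a parent with m units of effort
    remaining: each unit escapes independently with probability 1/4
    (planon or workon parent) or 1/3 (stuckon parent)."""
    p_escape = 1/3 if kind == "stuckon" else 1/4
    return sum(rng.random() < p_escape for _ in range(m))

# Planon with 5 units remaining: q is between 0 and 5, averaging 5/4.
rng = random.Random(3)
q = split_size(5, "planon", rng)
```

This is exactly a draw from a binomial distribution, so numpy.random.Generator.binomial would do the same job in one call.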

The new kepton is either a planon or a workon, depending on a uniform random variable and the value of \( q \). The probability that the new kepton is a planon is a sigmoidal function of \( q \):

$$Pr(\text{planon}) = f_1(q) = \frac{1}{1 + \left(\frac{20}{q}\right) ^ {K_1}}$$

where \( K_1 = \log_2 99 \).

Here are some example values of the planon probability:

| \(q\) | \(f_1(q)\) |
| ---: | --- |
| 1 | \(2.37\times 10^{-9}\) |
| 10 | 0.01 |
| 20 | 0.50 |
| 40 | 0.99 |
| 100 | 0.999977 |

So higher values of \( q \) are much more likely to be planons, and lower values of \( q \) are much more likely to be workons, with the breakeven probability at \( q=20 \).
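A direct transcription of \( f_1 \) (treating \( q \le 0 \) as probability zero, which is my own convention for the edge case) reproduces the table above:

```python
import math

K1 = math.log2(99)

def f1(q):
    """Probability that a kepton split off with size q is a planon."""
    if q <= 0:
        return 0.0
    return 1.0 / (1.0 + (20.0 / q) ** K1)

# Matches the table to floating-point precision:
# f1(10) is 0.01, f1(20) is 0.50, f1(40) is 0.99.
```

The choice \( K_1 = \log_2 99 \) makes the values at \( q=10 \) and \( q=40 \) come out to exactly 0.01 and 0.99, since \( 2^{\log_2 99} = 99 \).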

Probability a stuckon will split

The probability a stuckon of size \( n \) will split is

$$f_2(n) = \frac{1/5}{1 + \left(\frac{20}{n}\right) ^ {K_2}}$$

where \( K_2 = \log_2 24 \).

Here are some example values of the probability of a stuckon split:

| \(n\) | \(f_2(n)\) |
| ---: | --- |
| 1 | \(2.17\times 10^{-7}\) |
| 10 | 0.008 |
| 15 | 0.042 |
| 20 | 0.1 |
| 30 | 0.173 |
| 40 | 0.192 |
| 100 | 0.1999 |
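Similarly, a direct transcription of \( f_2 \) (again treating \( n \le 0 \) as probability zero, my own edge-case convention) reproduces this table:

```python
import math

K2 = math.log2(24)

def f2(n):
    """Probability per hour that a stuckon of size n splits."""
    if n <= 0:
        return 0.0
    return 0.2 / (1.0 + (20.0 / n) ** K2)

# Matches the table: f2(10) is 0.008, f2(20) is 0.1,
# and f2 approaches the cap of 0.2 as n grows large.
```

So small stuckons almost never shed work, while large stuckons split off new keptons up to 20% of the time per hour, which is what keeps them ballooning in the simulation.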

© 2026 Jason M. Sachs, all rights reserved.


Comment by MatthewEshleman, January 12, 2026

Loved FogBugz back in the day, before it was sold and re-acquired. Haven't used it in over a decade though.

Everyone has a love/hate relationship with Jira. Not sure why it took over, but it did.

Been using clickup with one client. Mostly happy with it.

But still miss FogBugz.
