Most project management methodologies are good at answering the question “What should we do?” They are far less precise when it comes to a more critical one: “What state is the project in right now?”

We measure deadlines, budgets, completion percentages, and the number of finished tasks — and yet we often miss the moment when a project stops being manageable, even though it still formally appears to be on track.
The issue lies in how we usually think about management. It is commonly viewed as a sequence of actions: define the task, execute it, review the result, make adjustments. This perspective is convenient — but incomplete. A project does not evolve step by step. It evolves through states.
Between “everything is under control” and “the project has failed” lies a wide range of intermediate conditions.
It is within these transitional states that risk accumulates.
This article proposes looking at task management from a different angle — not as supervision of execution, but as work with the space of possible states in which a task may exist. In this approach, uncertainty is not a side effect of the process; it is one of its central characteristics. Management, in turn, becomes a way of redistributing uncertainty so that the goal remains achievable.
Below, a model will be introduced that allows us to describe uncertainty, its accumulation and its reduction, and to explain why project segmentation and management activities are effective — even when their impact is difficult to express using traditional metrics.
The model does not aim to replace existing methodologies. Instead, it provides a language and a set of tools for articulating what practitioners often sense intuitively.
This article can be read independently. However, for a deeper understanding of the overall framework, it may be helpful to review the earlier pieces in the segmentation series — including the introductory classification article and the discussions of principles and causes.
Managing a Task as Work with a State Space
Let us consider a task not as a sequence of actions, but as a process that, at any given moment, occupies one of several possible states. The total set of these states forms the task’s state space.
This space is not uniform. It can be divided into several fundamentally different regions, each influencing the outcome in its own way.
- Target States ($P_f$)
These are states in which the task’s conditions have been satisfied and the required result has been achieved. Reaching any of them means successful completion.
- Controllable States ($P_c$)
This group includes states from which the goal can be achieved under normal conditions — without extraordinary measures or significant cost escalation.
During execution, the task should remain within this region. Controllable states form the task’s working field.
- Recoverable States ($P_r$)
These are states in which the goal remains achievable, but only with additional effort — whether in time, resources, or organizational coordination.
They typically arise from mistakes, external disturbances, or accumulated uncertainty. Such states do not signify failure, but they do indicate a disruption of normal progress.
- Marginal States ($M$)
This set includes states from which achieving the required result is no longer possible.
If a task enters this region, it has effectively failed — regardless of any further effort.
Uncertainty and Task Entropy
In practice, we never know exactly which state a task occupies. Information is incomplete, estimates lag, and external conditions change.
For this reason, the actual condition of a task is better described not as a single point, but as a distribution of uncertainty across its state space.
We will refer to the measure of this uncertainty as the task’s entropy. The greater the number of possible states, and the more evenly uncertainty is distributed among them, the higher the entropy.
It is essential, however, to consider not only the magnitude of entropy, but also its quality — meaning its impact on the outcome. The same amount of entropy may be acceptable or destructive depending on where it is concentrated within the state space. The more uncertainty is confined to a narrow and favorable region, the higher the “quality” of that entropy.
Useful and Harmful Entropy
Uncertainty associated with target, controllable, and recoverable states forms useful entropy. It creates room for adaptation and maneuver without pushing the task beyond the boundary of achievable results.
The situation is different with marginal states. Uncertainty associated with them does not expand possibilities — it only increases the likelihood of irreversible failure. This is harmful entropy.
Management, therefore, is not about eliminating uncertainty. It is about redistributing it.
In short: a task remains manageable as long as uncertainty is concentrated within $P_c$ and $P_r$. Management is not control in the narrow sense — it is the maintenance of this configuration.
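The split between useful and harmful entropy can be made concrete with a small numeric sketch. Here a task's uncertainty is treated as a probability distribution over the four regions; the region probabilities below are invented for illustration and are not prescribed by the model.

```python
import math

def entropy(p):
    """Shannon entropy (in bits) of a probability distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

# Illustrative distributions of uncertainty over the state-space regions.
healthy  = {"target": 0.2, "controllable": 0.7, "recoverable": 0.1, "marginal": 0.0}
drifting = {"target": 0.1, "controllable": 0.4, "recoverable": 0.3, "marginal": 0.2}

# The drifting task has both more entropy overall and a harmful share
# tied to marginal states, where uncertainty no longer helps.
for name, dist in [("healthy", healthy), ("drifting", drifting)]:
    print(name, round(entropy(dist.values()), 2), "bits; marginal share:", dist["marginal"])
```

The healthy task is not entropy-free; its uncertainty is simply concentrated in regions from which the goal remains reachable.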
Task Execution Dynamics
In an ideal process, a task moves sequentially through controllable states. As progress is made, the range of possible future states narrows, and uncertainty shifts toward target states. Recoverable states either return to the controllable region or are removed from consideration.
Progress is reflected not only in completed actions, but also in the gradual compression of the uncertainty space around the target result.
The Function of Management
From this perspective, managing a task means shaping conditions and decisions such that:
- uncertainty remains concentrated within controllable and recoverable states;
- the probability of entering marginal states is minimized;
- transitions from controllable states to target states occur with minimal cost.
Management does not eliminate uncertainty entirely, nor does it require a rigid script. Its role is to structure the state space so that useful entropy supports the achievement of the result, while harmful entropy is consistently suppressed.
From Model to Instrument
This perspective allows us to view management not as control over every step, but as work with acceptable and unacceptable zones of task development. It explains why flexibility, recovery reserves, and early risk detection increase controllability, while excessive rigidity and the neglect of uncertainty, on the contrary, increase the probability of failure.
Once the model is formalized, the nature of the difficulties associated with the quantitative evaluation of management becomes clearer. Entropy is a complex and inconvenient quantity for practical use. Even in deterministic physical models, it is often impossible to determine it precisely.
For this reason, in practice we do not measure entropy directly. Instead, we measure its change indirectly — for example, through temperature or other macroscopic parameters that are sensitive to the redistribution of states within a system.
It is important to emphasize that entropy, heat, temperature, and heat capacity are used here as illustrative management metaphors, not as physical quantities.
Managerial Heat
Let us introduce the concept of managerial heat of a task — $Q_m$.
By heat we mean an indicator reflecting the redistribution of uncertainty toward unfavorable regions of the state space. As the share of entropy associated with recoverable ($P_r$) and marginal ($M$) states increases, managerial heat grows.
As noted earlier, uncertainty tends to accumulate during execution. Within the model, this appears as an increase in managerial heat. For simplicity, we assume linear growth. In real projects, the increase may be nonlinear, especially when approaching critical states.
The physical analogy is appropriate here: a project uses energy to perform useful work, but part of this energy inevitably dissipates as “heat,” warming the project environment and reducing its hypothetical efficiency. The growth of entropy reflects energy moving into states that are difficult to use productively.
Management activities — actions aimed at reducing uncertainty — decrease managerial heat within the model, effectively “cooling” the project. They redistribute entropy back into its useful portion.
The indicator $Q_m$ is additive in meaning and can be aggregated across tasks. This makes it possible to identify areas where uncertainty is not merely present but accumulates over time, forming zones of increased managerial tension — or, in other words, “overheating.”
Such areas require priority attention.
By constructing a time profile of the project’s total managerial heat, it becomes possible to identify peak values and place control activities before them. These interventions intentionally slow the pace of execution and “cut off” overheating at moments when it is managerially justified.

Within the model, it is also possible to define a threshold level of total heat beyond which controllability is no longer preserved.
As a result, sprints cease to be mechanical slices of time and begin to reflect the internal logic of the work. Artificial rhythmic segmentation moves closer to natural segmentation, forming a meaningful hybrid.
The method does not eliminate uncertainty. It manages its distribution in time and across the state space. Management, in this sense, is the maintenance of an acceptable thermal regime of the project — one in which the goal remains achievable even in the presence of errors and deviations.
“Thermal” Parameters of a Production Task
A key part of the method is defining parameters that determine the relative managerial heat of tasks. Since absolute uncertainty is difficult to evaluate, what interests us is not its exact value, but a comparable measure that allows tasks to be contrasted and their dynamics observed.
For simplicity, let us assume that uncertainty is distributed relatively evenly and has a comparable impact on task parameters. In this case, it is sufficient to evaluate it along a single dimension — the most convenient of which is execution time.
We use the three-point PERT estimate:
- $O$ — optimistic estimate
- $E$ — expected time
- $P$ — pessimistic estimate
The standard deviation is defined as:
$$\sigma = \frac{P - O}{6}$$

To obtain a dimensionless value, we normalize $\sigma$ by dividing it by the duration of a working day $T_d$:
$$Q_m = \frac{\sigma}{T_d}$$

All else being equal, the dispersion of time estimates ($\sigma$) tends to increase as the execution horizon grows. This indicator therefore reflects the degree of uncertainty and, as a rule, increases with task duration.
Absolute values of heat are not essential. What matters are relative relationships and their dynamics over time.
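As a quick sketch, the two formulas above combine into a few lines of Python. The eight-hour working day and the sample estimates are assumptions for illustration only.

```python
def managerial_heat(optimistic: float, pessimistic: float, t_day: float = 8.0) -> float:
    """Q_m = sigma / T_d, with sigma = (P - O) / 6 from the three-point PERT estimate.
    Estimates are in hours; t_day is an assumed 8-hour working day."""
    sigma = (pessimistic - optimistic) / 6.0
    return sigma / t_day

# A task estimated at 4-16 hours carries more heat than one estimated at 6-8 hours,
# even if their expected durations are similar.
print(managerial_heat(4, 16))  # 0.25
print(managerial_heat(6, 8))   # ~0.0417
```

Since only relative relationships matter, the choice of $T_d$ merely fixes a convenient scale.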
Parameters of a Management Activity
Let us consider the effect of management interventions.
The basic control mechanism is the introduction of independent perspectives, which reduces the average deviation of estimates.
Assume that by the time of review the task has already formed a stable direction, and that the management activity adds $N$ additional expert assessments from participants not directly involved in execution.
The corrected heat is defined as:
$$Q_c = Q_m \left( 1 - \frac{1}{\sqrt{N + 1}} \right)$$

where $N$ is the number of additional expert opinions.
It is assumed that independent estimates reduce standard deviation proportionally to the square root of the number of observations — similar to the reduction of variance when aggregating independent measurements.
If a management intervention is applied simultaneously to multiple tasks, the effectiveness of the correction decreases proportionally to their number.
Management activities typically have an optimal duration at which their effectiveness is maximal. Deviations from this duration — whether too short or excessively long — reduce their impact.
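A minimal sketch of the correction formula. We read $Q_c$ as the amount of heat a review removes: it is zero with no additional opinions and saturates at $Q_m$ as $N$ grows, which leaves a residual heat of $Q_m / \sqrt{N+1}$, consistent with the square-root justification above. This reading, and the sample values, are our assumptions.

```python
import math

def heat_removed(q_m: float, n_experts: int) -> float:
    """Q_c = Q_m * (1 - 1/sqrt(N + 1)): heat taken out of a task by a review
    with N extra independent estimates (residual heat: Q_m / sqrt(N + 1))."""
    return q_m * (1.0 - 1.0 / math.sqrt(n_experts + 1))

# Diminishing returns: the first independent opinions help the most.
print(heat_removed(0.25, 1))  # ~0.073
print(heat_removed(0.25, 3))  # 0.125
print(heat_removed(0.25, 8))  # ~0.167
```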
The Project as a Thermal System
At this stage, the project is represented as a set of production tasks that, alongside achieving their objectives, introduce uncertainty into the project environment. We have defined this uncertainty as the managerial heat of a task.
The project environment consists primarily of people and the established processes of their interaction. Different teams perceive emerging challenges and uncertainties differently. For this reason, we can introduce the concept of heat capacity for a project team — a parameter that determines how many degrees the project’s temperature rises when a given amount of heat is absorbed.
We define the temperature of the project environment as:
$$T_p = k \sum_i \frac{Q_{m,i}}{C_t}$$

where:
- $Q_{m,i}$ — the heat generated by completed tasks;
- $C_t$ — the heat capacity of the project team;
- $k$ — a normalization coefficient defining a convenient temperature scale.
At this stage, heat capacity is treated as a constant, although in real conditions it may change.
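Under the constant-heat-capacity assumption, $T_p$ reduces to a one-liner. The task heats, $C_t$, and $k$ below are illustrative numbers, not calibrated values.

```python
def project_temperature(task_heats, heat_capacity, k=1.0):
    """T_p = k * sum_i(Q_m,i) / C_t, with the team's heat capacity C_t held constant."""
    return k * sum(task_heats) / heat_capacity

# Three completed tasks on an arbitrary scale (k chosen only for readability):
print(project_temperature([0.25, 0.5, 0.25], heat_capacity=2.0, k=100.0))  # 50.0
```

Because $k$ and $C_t$ are scale factors, only comparisons between readings on the same scale are meaningful.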
If a project is left without management intervention, critical uncertainty will eventually accumulate. Participants will gradually lose a clear understanding of the current state of work. Discussions will become spontaneous, meetings less effective, and decisions poorly documented or inconsistently applied.
This condition resembles boiling: accumulated heat begins to escape as “steam” — in the form of communication noise and chaotic attempts at stabilization.
Let us call the corresponding temperature value $T_k$ the project’s boiling point and choose the normalization coefficient $k$ such that this point corresponds to 100°M (degrees of managerial temperature).
Naturally, “boiling” is an abnormal state. In practice, it is more reasonable to maintain temperature at or below 80°M. Such a regime does not require excessive regulation, while still preserving the necessary margin for managerial maneuver.
If no management activities are introduced, the project temperature curve will be monotonically increasing over time. This curve allows us to anticipate the optimal moment for introducing a management intervention.
It is important to note that a project never starts at zero temperature. It initially possesses non-zero entropy in its state distribution — and this is precisely what makes it a project. When the distribution stabilizes and variability becomes governed statistically rather than through structural decisions, the activity effectively transitions into an operational process. In this sense, the transition from project to process may be viewed as a kind of phase change occurring as entropy decreases.
By scheduling a management activity, we adjust the project’s temperature profile and thereby gain the ability to determine the optimal timing for the next intervention.
Planning management in this way allows the project to remain controllable without overloading it with excessive non-productive activities.
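The planning loop sketched above can be simulated in a few lines. Everything here (linear daily heat gain, the 80°M working limit, and a review that halves accumulated heat) is an illustrative assumption, not part of the model's definition.

```python
def intervention_days(heat_per_day, c_t, k=100.0, limit=80.0, horizon=30, cooling=0.5):
    """Return the days on which a management activity fires.
    `cooling` is the assumed fraction of accumulated heat a review removes."""
    total_heat, days = 0.0, []
    for day in range(1, horizon + 1):
        total_heat += heat_per_day          # heat accumulates as work proceeds
        if k * total_heat / c_t >= limit:   # temperature reaches the working limit
            days.append(day)
            total_heat *= (1.0 - cooling)   # the review redistributes entropy back
    return days

# With an even daily load, the reviews land at nearly regular intervals,
# echoing the sprint cadence noted below.
print(intervention_days(heat_per_day=0.25, c_t=2.0))  # [7, 10, 14, 17, 21, 24, 28]
```

With uneven load the same loop spaces interventions irregularly, which is exactly the adaptability the model claims over a fixed cadence.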
It is easy to observe that with a stable team and evenly planned workload, control activities will tend to occur at nearly regular intervals — similar to what happens in agile methodologies.
However, the proposed model preserves adaptability. The pace of management adjusts to the current characteristics of the work and to possible unevenness in workload. Moreover, it becomes possible to plan local management interventions precisely in those areas where risks and uncertainty are growing most rapidly.
The compatibility of the proposed model with Scrum is both practical and useful. The metrics of a typical Scrum project are well studied, which makes it possible to calibrate the model against accumulated empirical data rather than treating it as a purely theoretical construct.
The purpose of this article has been to outline a thermodynamic view of the project within the broader theory of segmentation and to demonstrate its basic mechanisms. At this stage, the model serves as a conceptual framework — a way to rethink how uncertainty accumulates, how management interventions operate, and why segmentation is not merely procedural but structural.
However, a conceptual model alone is not sufficient.
If temperature can be estimated, it can be monitored.
If it can be monitored, it can be regulated.
And if it can be regulated, segmentation ceases to be a ritual and becomes a measurable management instrument.
In the next article, the model will move from abstraction to application. We will formalize its parameters, define practical thresholds, and show how this thermodynamic perspective can be implemented as a consistent management method rather than a metaphor.