A Critical Adaptive Distributed Embedded System
(CADES) is a group of interconnected nodes that must carry
out a set of tasks to achieve a common goal, while fulfilling
several requirements associated with their critical (e.g., hard real-time
requirements) and adaptive nature. In these systems, a key
challenge is to solve, in a timely manner, the combinatorial
optimization problem involved in finding the best way to allocate
the tasks to the available nodes (i.e., the task allocation), taking
into account aspects such as the computational costs of the tasks
and the computational capacity of the nodes. This problem is
not trivial, and there is no known polynomial-time algorithm to
find the optimal solution. Several studies have proposed Deep
Reinforcement Learning (DRL) approaches to solve combinatorial
optimization problems and, in this work, we explore the
application of such approaches to the task allocation problem
in CADESs. We first discuss the potential advantages of using a
DRL-based approach over several heuristic-based approaches to
allocate tasks in CADESs, and we then demonstrate that a DRL-based
approach can achieve results similar to those of the best-performing
heuristic in terms of the optimality of the allocation, while requiring
less time to generate such an allocation.