
Activity recognition results on UCF Sports and Hollywood2

The table above shows the results obtained on the UCF Sports dataset (http://crcv.ucf.edu/data/UCF_Sports_Action.php). We report the recognition rate with respect to the number...



Computational efficiency and parallel implementation

The developed algorithms are computationally efficient, and the compositional processing pipeline is well suited for implementation on massively parallel architectures. Many...



Motion hierarchy structure

Our model comprises three processing stages, as shown in the figure. The task of the lowest stage (layers...



Server crash

After experiencing a total server failure, we are back online. We apologize for the inconvenience - we are still in...



L1: motion features

Layer L1 provides the input to the compositional hierarchy. Motion, estimated in L0, is encoded using a small dictionary.
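The L1 encoding described above can be sketched as a nearest-neighbour quantization of motion vectors against a small dictionary. This is a minimal illustrative example, not the project's actual implementation; the function names, dictionary size, and entries are assumptions.

```python
import numpy as np

def encode_motion(motion_vectors, dictionary):
    """Quantize (N, 2) motion vectors against a (K, 2) dictionary,
    returning the index of the nearest dictionary entry for each vector."""
    # Pairwise squared Euclidean distances between vectors and entries.
    diffs = motion_vectors[:, None, :] - dictionary[None, :, :]
    dists = np.sum(diffs ** 2, axis=2)
    # Each motion vector is represented by its closest dictionary code.
    return np.argmin(dists, axis=1)

# Hypothetical 4-entry dictionary of unit motions: right, left, up, down.
dictionary = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
motion = np.array([[0.9, 0.1], [-0.2, 0.8]])
codes = encode_motion(motion, dictionary)  # first vector -> "right", second -> "up"
```

In a real pipeline the dictionary would be learned from data (e.g. by clustering L0 motion estimates), but the assignment step would have this nearest-neighbour form.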



Task 3.2: Parallelization

Identified bottlenecks will be examined in detail, and it will be determined whether parallelizing a particular bottleneck would contribute to the overall algorithm speedup. In the case of a positive assessment, we will decide on the most appropriate parallelization strategy. We expect that low-level computations will be best suited for fine-grained, massively parallel execution on graphics cards (GPUs), and that higher-level computations will benefit more from coarse-grained functional decomposition strategies, which would allow them to run in parallel on multiple CPU cores of the computing nodes. Finally, the algorithm in question will be ported or redesigned for parallel processing.

It is our opinion that the hierarchical compositional structure offers excellent opportunities for efficient parallelization. In addition to implementing low-level algorithms on massively parallel architectures, there is also an opportunity to exploit the inherently parallel structure of the upper layers of the compositional model. We also plan to introduce speculative computing on potentially idling resources. Reasoning through the layers of the model proceeds by evaluating hypotheses and selecting the most probable ones. As the algorithm traverses the hierarchy, its direction is determined by hypothesis evaluation; if it arrives at a dead end, part of its path through the hierarchy has to be re-evaluated. This offers the exciting possibility of speculative parallelization, where additional computing resources, such as idle processor cores, evaluate the less likely hypotheses in the hope that they will be useful in future processing steps, thus minimizing the potential impact of a suboptimal decision at any stage.
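The speculative scheme sketched above can be illustrated with a thread pool: the main path commits to the most probable hypothesis while spare workers pre-evaluate the alternatives, so their results are already cached if backtracking is needed. Everything here is an illustrative assumption (the `evaluate` scoring, the hypothesis names, the pool size), not the project's actual algorithm.

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate(hypothesis):
    # Stand-in for an expensive hypothesis evaluation; here it simply
    # scores a hypothesis string by its length.
    return len(hypothesis)

def speculative_step(hypotheses):
    """Commit to the highest-scoring hypothesis, and speculatively
    evaluate the remaining ones in parallel on spare workers."""
    ranked = sorted(hypotheses, key=evaluate, reverse=True)
    best, alternatives = ranked[0], ranked[1:]
    with ThreadPoolExecutor() as pool:
        # Speculative work: evaluated concurrently and cached so a
        # dead end on the main path can reuse it without recomputation.
        speculative = {h: pool.submit(evaluate, h) for h in alternatives}
        chosen_score = evaluate(best)          # main path
        fallback = {h: f.result() for h, f in speculative.items()}
    return best, chosen_score, fallback

best, score, fallback = speculative_step(["walk", "run", "sit-down"])
```

The design point is that speculative results cost only otherwise-idle cores; if the main hypothesis survives, the fallback cache is simply discarded.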
