Learning, analysis, and detection of motion in the framework of a hierarchical compositional visual architecture
The table above shows the results obtained on the UCF Sports dataset (http://crcv.ucf.edu/data/UCF_Sports_Action.php). We report the recognition rate with respect to the number...
The developed algorithms are computationally efficient, and the compositional processing pipeline is well suited for implementation on massively parallel architectures. Many...
Our model comprises three processing stages, as shown in the figure. The task of the lowest stage (layers...
Layer L1 provides the input to the compositional hierarchy. Motion obtained in L0 is encoded using a small dictionary.
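As a rough illustration of such dictionary encoding, the sketch below quantizes L0 motion (per-pixel optical-flow vectors) against a small codebook of motion primitives. The dictionary size, the use of plain k-means, and all function names are assumptions made for the example; they are not prescribed by the model described here.

```python
import numpy as np

def learn_motion_dictionary(flow_vectors, n_words=16, n_iters=20, seed=0):
    """Learn a small dictionary of motion primitives with plain k-means.

    flow_vectors : (N, 2) array of optical-flow vectors (dx, dy) pooled
                   from the L0 output over many frames.
    Returns an (n_words, 2) codebook. Dictionary size and clustering
    method are illustrative assumptions, not the method of the paper.
    """
    rng = np.random.default_rng(seed)
    codebook = flow_vectors[rng.choice(len(flow_vectors), n_words,
                                       replace=False)].astype(np.float64)
    for _ in range(n_iters):
        # assign each flow vector to its nearest codeword
        d = np.linalg.norm(flow_vectors[:, None, :] - codebook[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # re-estimate each codeword as the mean of its assigned vectors
        for k in range(n_words):
            members = flow_vectors[labels == k]
            if len(members):
                codebook[k] = members.mean(axis=0)
    return codebook

def encode_l1(flow_field, codebook):
    """Replace every L0 flow vector with the index of its nearest codeword."""
    h, w, _ = flow_field.shape
    flat = flow_field.reshape(-1, 2)
    d = np.linalg.norm(flat[:, None, :] - codebook[None, :, :], axis=2)
    return d.argmin(axis=1).reshape(h, w)
```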
Identified bottlenecks will be examined in detail, and we will determine whether parallelizing a particular bottleneck would contribute to the overall algorithm speedup. In case of a positive assessment, we will decide on the most appropriate parallelization strategy. We expect that low-level computations will be best suited for fine-grained massively parallel execution on graphics cards (GPUs), and that higher-level computations will benefit more from coarse-grained functional decomposition strategies, which allow them to run in parallel on multiple CPU cores of the computing nodes. Finally, the algorithm in question will be ported or redesigned for parallel processing.

In our opinion, the hierarchical compositional structure offers excellent opportunities for efficient parallelization. In addition to implementing the low-level algorithms on massively parallel architectures, there is also an opportunity to exploit the inherently parallel structure of the upper layers of the compositional model. We also plan to introduce speculative computing on potentially idle resources. Reasoning through the layers of the model is performed by evaluating hypotheses and selecting the most probable ones. The algorithm traverses the hierarchy, and its direction is determined by the hypothesis evaluation; if it arrives at a dead end, part of its path through the hierarchy has to be re-evaluated. This offers an exciting possibility for speculative parallelization, where additional computing resources, such as idle processor cores, evaluate less likely hypotheses in the hope that they will be useful in future processing steps, thus minimizing the potential impact of a suboptimal decision at any stage.
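A minimal sketch of this speculative scheme is given below, assuming hypotheses ranked by probability and an expensive evaluation function. The best hypothesis is evaluated on the main thread while spare workers speculatively evaluate the runners-up, so their results are already available if the first choice turns out to be a dead end. All names, signatures, and the thread-pool strategy are illustrative assumptions, not the project's actual implementation.

```python
from concurrent.futures import ThreadPoolExecutor

def traverse_layer(hypotheses, evaluate, n_workers=4):
    """Speculatively evaluate ranked hypotheses for one layer of the hierarchy.

    hypotheses : candidate hypotheses, sorted by decreasing probability.
    evaluate   : callable returning (score, accepted); stands in for the
                 expensive per-hypothesis evaluation done during traversal.
    """
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        # speculative work: hand the lower-ranked hypotheses to idle workers
        speculative = {h: pool.submit(evaluate, h) for h in hypotheses[1:n_workers]}
        # primary path: evaluate the most probable hypothesis immediately
        score, accepted = evaluate(hypotheses[0])
        if accepted:
            return hypotheses[0], score
        # dead end: reuse speculative results instead of recomputing them
        for h, fut in speculative.items():
            score, accepted = fut.result()
            if accepted:
                return h, score
    return None, None
```

In this sketch the speculative evaluations are simply discarded when the primary hypothesis succeeds; a fuller design would cancel or cache them, but the point is the same: idle cores absorb the cost of re-evaluation after a suboptimal decision.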