
Activity recognition results on UCF Sports and Hollywood2

The table above shows the results obtained on the UCF Sports dataset (http://crcv.ucf.edu/data/UCF_Sports_Action.php). We report the recognition rate with respect to the number...



Computational efficiency and parallel implementation

The developed algorithms are computationally efficient, and the compositional processing pipeline is well suited to implementation on massively parallel architectures. Many...



Motion hierarchy structure

Our model comprises three processing stages, as shown in the figure. The task of the lowest stage (layers...



Server crash

After experiencing a total server failure, we are back online. We apologize for the inconvenience - we are still in...



L1: motion features

Layer L1 provides the input to the compositional hierarchy. The motion estimated in L0 is encoded using a small dictionary.
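As a rough illustration of this kind of dictionary encoding (a sketch under assumed data, not the project's actual L1 implementation), the per-point motion vectors produced by L0 could be vector-quantized against a small learned dictionary:

    import numpy as np
    from sklearn.cluster import KMeans

    # Hypothetical L0 output: one 2-D motion vector per tracked point.
    rng = np.random.default_rng(0)
    flow_vectors = rng.normal(size=(1000, 2))

    # Learn a small motion dictionary by clustering the motion vectors.
    dictionary = KMeans(n_clusters=8, n_init=10, random_state=0).fit(flow_vectors)

    # L1 encoding: each motion vector is replaced by its nearest dictionary word.
    codes = dictionary.predict(flow_vectors)
    print(codes[:10])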



Task 1.3: Categorization of complex motion

The algorithms that we will develop under Task 1.2 will allow the construction of multilayer hierarchical motion models, with each new layer being a composition of motion primitives from the previous layer. We also expect each layer to exhibit invariance to a particular aspect of the observed motion. Our previous studies [Perš2003] provide strong evidence that different categories of activities are most clearly observed at different levels of detail. We therefore anticipate that different classes of activities will be best represented by different layers of the hierarchy. To construct a classifier for detecting different categories of motion, we will have to select a particular layer of the hierarchy and apply some type of classification model. One approach is to determine the highest layer at which the relevant motion information is still observable and construct an "or"-based classifier, as [Fidler2010a] did for shape-based compositional models. Another approach is to encode the frequencies of the motion primitives at a particular layer in the form of histogram-based models and apply a support vector machine, as was done by [Felzenszwalb2009] in texture-based object categorization. We will explore these alternative techniques within this task.
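As a sketch of the second approach, the snippet below encodes each video as a normalized histogram of motion-primitive activations at a chosen layer and trains a support vector machine on the histograms. The activation lists, labels, and dictionary size are hypothetical placeholders; in practice they would come from the learned hierarchy.

    import numpy as np
    from sklearn.svm import SVC

    def primitive_histogram(activations, n_primitives):
        """Normalized histogram of motion-primitive activations for one video."""
        hist = np.bincount(activations, minlength=n_primitives).astype(float)
        return hist / max(hist.sum(), 1.0)

    # Hypothetical training data: per-video primitive indices and class labels.
    videos = [[0, 3, 3, 7], [1, 1, 4], [0, 7, 7, 7], [1, 4, 4]]
    labels = [0, 1, 0, 1]
    n_primitives = 8

    X = np.stack([primitive_histogram(v, n_primitives) for v in videos])
    clf = SVC(kernel="rbf").fit(X, labels)  # histogram-based SVM classifier
    print(clf.predict(X))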

A drawback of the existing multilayered compositional frameworks such as [Fidler2010a] is that they are formulated as feed-forward networks. This means that during detection, a compositional part at a certain level is detected only if it receives significant support from the layer directly below. If an object is only partially visible in the image, some parts of the object are not detected at a certain level of the hierarchy. We will explore the mechanisms proposed in object detection [Wu2010] and categorization [Lee2003] to introduce feedback loops into the hierarchy. We will also explore approximate stochastic message-passing approaches to consolidate all the evidence in the network and thus improve the robustness of motion detection.
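The following toy example, under an assumed two-layer hierarchy with a known composition matrix, illustrates what such a feedback loop could look like: bottom-up part scores and top-down expectations are blended iteratively, so a weakly detected part can be reinforced by strongly supported parents. All quantities here are hypothetical, not the project's model.

    import numpy as np

    # Hypothetical composition matrix: C[j, i] = 1 if higher-layer part j
    # is composed of lower-layer part i.
    C = np.array([[1.0, 1.0, 0.0],
                  [0.0, 1.0, 1.0]])

    bottom_up = np.array([0.9, 0.2, 0.8])  # noisy lower-layer part scores
    scores = bottom_up.copy()
    for _ in range(10):
        # Bottom-up pass: a parent's support is the mean score of its children.
        parents = (C @ scores) / C.sum(axis=1)
        # Top-down pass: each child receives the mean score of its parents.
        top_down = (C.T @ parents) / C.sum(axis=0)
        # Blend both sources of evidence; occluded parts gain contextual support.
        scores = 0.5 * bottom_up + 0.5 * top_down

    print(scores)  # the weak middle part is pulled up by its parents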

Another drawback of existing multilayered compositional frameworks is that they are entirely reconstructive in nature. This means that they are trained for reconstruction, and the same representation is then used in detection tasks. A direct consequence is that they perform poorly at discriminating between similar categories. This drawback is expected to be pronounced in the detection of motion categories as well. We will explore the possibility of introducing discriminative information into the network during learning to boost its performance, and we will determine at which level of the hierarchy the introduction of discriminative information yields the largest performance improvement.
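As one hedged illustration of injecting discriminative information, the sketch below reweights histogram bins by a per-primitive Fisher score, so that primitives which separate two similar categories dominate the representation. The data and the choice of Fisher scores are assumptions for the example only.

    import numpy as np

    def fisher_weights(X, y):
        """Per-feature Fisher score: between-class over within-class variance."""
        X0, X1 = X[y == 0], X[y == 1]
        between = (X0.mean(axis=0) - X1.mean(axis=0)) ** 2
        within = X0.var(axis=0) + X1.var(axis=0) + 1e-8
        return between / within

    # Hypothetical primitive histograms for two easily confused categories.
    X = np.array([[0.6, 0.3, 0.1],
                  [0.5, 0.4, 0.1],
                  [0.3, 0.6, 0.1],
                  [0.2, 0.7, 0.1]])
    y = np.array([0, 0, 1, 1])

    w = fisher_weights(X, y)
    X_discriminative = X * w  # uninformative primitives are suppressed
    print(w)  # the shared third primitive gets near-zero weight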
