view tracking.cfg @ 190:36968a63efe1
Finally got connected_components to work, using vecS for the vertex list in the adjacency list.
In this case, the component map is simply a vector of ints (int being the type of UndirectedGraph::vertex_descriptor, i.e. graph_traits<FeatureGraph>::vertex_descriptor, and probably of UndirectedGraph::vertices_size_type as well).
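A minimal sketch of that vecS case (the FeatureGraph typedef below is an assumption for the example; the actual typedef in the code may differ):

```cpp
#include <vector>
#include <boost/graph/adjacency_list.hpp>
#include <boost/graph/connected_components.hpp>

// Assumed typedef: with vecS as the VertexList, vertex descriptors are
// integer indices, so vertex_index is an implicit property of the graph.
typedef boost::adjacency_list<boost::listS, boost::vecS, boost::undirectedS> FeatureGraph;

int main() {
    FeatureGraph g(5);
    boost::add_edge(0, 1, g);
    boost::add_edge(3, 4, g);

    // The component map is simply a vector of ints indexed by vertex.
    std::vector<int> components(boost::num_vertices(g));
    int ncomponents = boost::connected_components(g, &components[0]);
    // ncomponents is 3 here: {0, 1}, {2}, {3, 4}
    return ncomponents == 3 ? 0 : 1;
}
```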
To use listS, I was told on the Boost mailing list:
>> If you truly need listS, you will need to create a vertex index
>> map, fill it in before you create the property map, and pass it to the
>> vector_property_map constructor (and as a type argument to that class).
It may be feasible with a component map like:

    shared_array_property_map<graph_traits<FeatureGraph>::vertices_size_type,
                              property_map<FeatureGraph, vertex_index_t>::const_type>
        components(num_vertices(g), get(vertex_index, g));
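A minimal sketch of that listS approach, assuming an internal vertex_index property and std::size_t component values (with listS the vertex descriptors are no longer integers, so they cannot serve as the component value type):

```cpp
#include <cstddef>
#include <boost/graph/adjacency_list.hpp>
#include <boost/graph/connected_components.hpp>
#include <boost/property_map/shared_array_property_map.hpp>
#include <boost/tuple/tuple.hpp>

using namespace boost;

// Hypothetical listS variant of the graph, with an internal vertex_index
// property; the actual FeatureGraph typedef in the code may differ.
typedef adjacency_list<listS, listS, undirectedS,
                       property<vertex_index_t, std::size_t> > FeatureGraph;

int main() {
    FeatureGraph g;
    graph_traits<FeatureGraph>::vertex_descriptor u = add_vertex(g);
    graph_traits<FeatureGraph>::vertex_descriptor v = add_vertex(g);
    add_vertex(g);
    add_edge(u, v, g);

    // Fill in the vertex index map before creating the property map,
    // as suggested on the mailing list.
    property_map<FeatureGraph, vertex_index_t>::type index = get(vertex_index, g);
    std::size_t i = 0;
    graph_traits<FeatureGraph>::vertex_iterator vi, vend;
    for (tie(vi, vend) = vertices(g); vi != vend; ++vi)
        put(index, *vi, i++);

    // Component map backed by a shared array, addressed through vertex_index.
    shared_array_property_map<std::size_t,
        property_map<FeatureGraph, vertex_index_t>::const_type>
        components(num_vertices(g), get(vertex_index, g));
    std::size_t ncomponents = connected_components(g, components);
    // ncomponents is 2 here: {u, v} and the isolated third vertex
    return ncomponents == 2 ? 0 : 1;
}
```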
| author | Nicolas Saunier <nicolas.saunier@polymtl.ca> |
|---|---|
| date | Wed, 07 Dec 2011 18:51:32 -0500 |
| parents | 6f10a227486c |
| children | 23da16442433 |
line source
# filename of the video to process
video-filename = ~/Research/Data/minnesota/Rice-and-University-12_50.avi
# filename of the database where results are saved
database-filename = ~/Research/Data/minnesota/results.sqlite
# filename of the homography matrix
homography-filename = ~/Research/Data/minnesota/Rice-and-University-12_50-homography.txt
# filename of the mask image (where features are detected)
mask-filename = ~/Research/Data/minnesota/Rice-and-University-12_50-mask.png
# load features from database
load-features = false
# display trajectories on the video
display = false
# original video frame rate
video-fps = 29.97
# number of digits of precision for all measurements derived from video
# measurement-precision = 3
# first frame to process
frame1 = 0
# number of frames to process
nframes = -1
# feature tracking
# maximum number of features added at each frame
max-nfeatures = 1000
# quality level of the good features to track
feature-quality = 0.1
# minimum distance between features
min-feature-distanceklt = 5
# size of the search window at each pyramid level
window-size = 7
# use of Harris corner detector
use-harris-detector = false
# k parameter to detect good features to track (OpenCV)
k = 0.4
# maximal pyramid level in the feature tracking algorithm
pyramid-level = 5
# number of displacements to test minimum feature motion
ndisplacements = 3
# minimum displacement to keep features
min-feature-displacement = 0.05
# maximum feature acceleration
acceleration-bound = 3
# maximum feature deviation
deviation-bound = 0.6
# number of frames to smooth positions (half window)
smoothing-halfwidth = 5
# number of frames to compute velocities
#nframes-velocity = 5
# maximum number of iterations to stop feature tracking
max-number-iterations = 20
# minimum error to reach to stop feature tracking
min-tracking-error = 0.3
# minimum length of a feature (number of frames) to consider a feature for grouping
min-feature-time = 20
# Min Max similarity parameters (Beymer et al. method)
# connection distance in feature grouping
mm-connection-distance = 3.75
# segmentation distance in feature grouping
mm-segmentation-distance = 1.5
# maximum distance between features for grouping
max-distance = 5
# minimum cosine of the angle between the velocity vectors for grouping
min-velocity-cosine = 0.8
# minimum average number of features per frame to create a vehicle hypothesis
min-nfeatures-group = 3
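For illustration, a minimal sketch of reading a few of these key = value options with boost::program_options (it is an assumption that the application parses the file this way; the option set below is a small hypothetical subset):

```cpp
#include <fstream>
#include <iostream>
#include <string>
#include <boost/program_options.hpp>

namespace po = boost::program_options;

int main() {
    // Declare a few of the options; parse_config_file's third argument
    // tells it to skip any options not declared here.
    po::options_description config("tracking");
    config.add_options()
        ("video-filename", po::value<std::string>(), "filename of the video to process")
        ("video-fps", po::value<double>(), "original video frame rate")
        ("max-nfeatures", po::value<int>(), "maximum number of features added at each frame");

    std::ifstream f("tracking.cfg");
    po::variables_map vm;
    po::store(po::parse_config_file(f, config, true), vm);
    po::notify(vm);

    if (vm.count("video-fps"))
        std::cout << "fps: " << vm["video-fps"].as<double>() << std::endl;
    return 0;
}
```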